Nov 25 07:15:34 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 25 07:15:34 crc restorecon[4760]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 07:15:34 crc restorecon[4760]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 25 07:15:34 crc restorecon[4760]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc 
restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 07:15:34 crc restorecon[4760]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 07:15:34 crc restorecon[4760]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 07:15:34 crc restorecon[4760]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 07:15:34 crc 
restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 
07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 07:15:34 crc restorecon[4760]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 07:15:34 crc 
restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 07:15:34 crc restorecon[4760]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 07:15:34 crc restorecon[4760]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 07:15:34 crc restorecon[4760]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 07:15:34 crc 
restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 07:15:34 crc restorecon[4760]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 
crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:34 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 
07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 07:15:35 crc 
restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc 
restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 07:15:35 crc restorecon[4760]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc 
restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 
crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc 
restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 07:15:35 crc restorecon[4760]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 07:15:35 crc 
restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 07:15:35 crc restorecon[4760]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 07:15:35 crc restorecon[4760]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 07:15:35 crc restorecon[4760]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 25 07:15:36 crc kubenswrapper[5043]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 07:15:36 crc kubenswrapper[5043]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 25 07:15:36 crc kubenswrapper[5043]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 07:15:36 crc kubenswrapper[5043]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 25 07:15:36 crc kubenswrapper[5043]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 25 07:15:36 crc kubenswrapper[5043]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.735387 5043 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743717 5043 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743760 5043 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743774 5043 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743788 5043 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743802 5043 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743812 5043 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743823 5043 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743833 5043 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743843 5043 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743855 5043 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743865 5043 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743875 5043 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743884 5043 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743896 5043 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743906 5043 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743915 5043 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743923 5043 feature_gate.go:330] 
unrecognized feature gate: InsightsOnDemandDataGather Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743930 5043 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743938 5043 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743947 5043 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743954 5043 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743962 5043 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743970 5043 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743978 5043 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743986 5043 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.743994 5043 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744005 5043 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744015 5043 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744025 5043 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744033 5043 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744041 5043 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744055 5043 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744065 5043 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744074 5043 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744083 5043 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744093 5043 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744102 5043 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744112 5043 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744121 5043 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744129 5043 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744140 5043 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744149 5043 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744158 5043 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744166 5043 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744178 5043 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744186 5043 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744193 5043 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744201 5043 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744208 5043 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744217 5043 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744224 5043 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744233 5043 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744241 5043 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744248 5043 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744256 5043 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744266 5043 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744275 5043 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744282 5043 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744290 5043 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744298 5043 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744306 5043 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744318 5043 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744325 5043 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744333 5043 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744340 5043 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744348 5043 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744356 5043 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744364 5043 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744372 5043 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744380 5043 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.744388 5043 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745512 5043 flags.go:64] FLAG: --address="0.0.0.0"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745536 5043 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745551 5043 flags.go:64] FLAG: --anonymous-auth="true"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745563 5043 flags.go:64] FLAG: --application-metrics-count-limit="100"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745576 5043 flags.go:64] FLAG: --authentication-token-webhook="false"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745587 5043 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745631 5043 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745645 5043 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745655 5043 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745664 5043 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745675 5043 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745685 5043 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745694 5043 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745703 5043 flags.go:64] FLAG: --cgroup-root=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745712 5043 flags.go:64] FLAG: --cgroups-per-qos="true"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745721 5043 flags.go:64] FLAG: --client-ca-file=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745730 5043 flags.go:64] FLAG: --cloud-config=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745749 5043 flags.go:64] FLAG: --cloud-provider=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745758 5043 flags.go:64] FLAG: --cluster-dns="[]"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745771 5043 flags.go:64] FLAG: --cluster-domain=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745781 5043 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745790 5043 flags.go:64] FLAG: --config-dir=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745799 5043 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745809 5043 flags.go:64] FLAG: --container-log-max-files="5"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745820 5043 flags.go:64] FLAG: --container-log-max-size="10Mi"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745830 5043 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745840 5043 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745849 5043 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745858 5043 flags.go:64] FLAG: --contention-profiling="false"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745868 5043 flags.go:64] FLAG: --cpu-cfs-quota="true"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745876 5043 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745886 5043 flags.go:64] FLAG: --cpu-manager-policy="none"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745896 5043 flags.go:64] FLAG: --cpu-manager-policy-options=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745907 5043 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745917 5043 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745926 5043 flags.go:64] FLAG: --enable-debugging-handlers="true"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745935 5043 flags.go:64] FLAG: --enable-load-reader="false"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745944 5043 flags.go:64] FLAG: --enable-server="true"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745953 5043 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745965 5043 flags.go:64] FLAG: --event-burst="100"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745974 5043 flags.go:64] FLAG: --event-qps="50"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745983 5043 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.745993 5043 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746002 5043 flags.go:64] FLAG: --eviction-hard=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746012 5043 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746021 5043 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746030 5043 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746040 5043 flags.go:64] FLAG: --eviction-soft=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746049 5043 flags.go:64] FLAG: --eviction-soft-grace-period=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746059 5043 flags.go:64] FLAG: --exit-on-lock-contention="false"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746068 5043 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746077 5043 flags.go:64] FLAG: --experimental-mounter-path=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746087 5043 flags.go:64] FLAG: --fail-cgroupv1="false"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746096 5043 flags.go:64] FLAG: --fail-swap-on="true"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746105 5043 flags.go:64] FLAG: --feature-gates=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746123 5043 flags.go:64] FLAG: --file-check-frequency="20s"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746132 5043 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746142 5043 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746151 5043 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746160 5043 flags.go:64] FLAG: --healthz-port="10248"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746170 5043 flags.go:64] FLAG: --help="false"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746179 5043 flags.go:64] FLAG: --hostname-override=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746187 5043 flags.go:64] FLAG: --housekeeping-interval="10s"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746198 5043 flags.go:64] FLAG: --http-check-frequency="20s"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746206 5043 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746215 5043 flags.go:64] FLAG: --image-credential-provider-config=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746224 5043 flags.go:64] FLAG: --image-gc-high-threshold="85"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746233 5043 flags.go:64] FLAG: --image-gc-low-threshold="80"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746248 5043 flags.go:64] FLAG: --image-service-endpoint=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746258 5043 flags.go:64] FLAG: --kernel-memcg-notification="false"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746266 5043 flags.go:64] FLAG: --kube-api-burst="100"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746275 5043 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746285 5043 flags.go:64] FLAG: --kube-api-qps="50"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746294 5043 flags.go:64] FLAG: --kube-reserved=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746303 5043 flags.go:64] FLAG: --kube-reserved-cgroup=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746312 5043 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746321 5043 flags.go:64] FLAG: --kubelet-cgroups=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746330 5043 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746339 5043 flags.go:64] FLAG: --lock-file=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746348 5043 flags.go:64] FLAG: --log-cadvisor-usage="false"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746357 5043 flags.go:64] FLAG: --log-flush-frequency="5s"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746366 5043 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746380 5043 flags.go:64] FLAG: --log-json-split-stream="false"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746389 5043 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746398 5043 flags.go:64] FLAG: --log-text-split-stream="false"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746407 5043 flags.go:64] FLAG: --logging-format="text"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746416 5043 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746425 5043 flags.go:64] FLAG: --make-iptables-util-chains="true"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746434 5043 flags.go:64] FLAG: --manifest-url=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746445 5043 flags.go:64] FLAG: --manifest-url-header=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746457 5043 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746466 5043 flags.go:64] FLAG: --max-open-files="1000000"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746476 5043 flags.go:64] FLAG: --max-pods="110"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746485 5043 flags.go:64] FLAG: --maximum-dead-containers="-1"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746495 5043 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746504 5043 flags.go:64] FLAG: --memory-manager-policy="None"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746513 5043 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746523 5043 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746532 5043 flags.go:64] FLAG: --node-ip="192.168.126.11"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746541 5043 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746561 5043 flags.go:64] FLAG: --node-status-max-images="50"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746570 5043 flags.go:64] FLAG: --node-status-update-frequency="10s"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746579 5043 flags.go:64] FLAG: --oom-score-adj="-999"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746589 5043 flags.go:64] FLAG: --pod-cidr=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746623 5043 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746637 5043 flags.go:64] FLAG: --pod-manifest-path=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746647 5043 flags.go:64] FLAG: --pod-max-pids="-1"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746656 5043 flags.go:64] FLAG: --pods-per-core="0"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746665 5043 flags.go:64] FLAG: --port="10250"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746675 5043 flags.go:64] FLAG: --protect-kernel-defaults="false"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746683 5043 flags.go:64] FLAG: --provider-id=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746692 5043 flags.go:64] FLAG: --qos-reserved=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746701 5043 flags.go:64] FLAG: --read-only-port="10255"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746710 5043 flags.go:64] FLAG: --register-node="true"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746719 5043 flags.go:64] FLAG: --register-schedulable="true"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746729 5043 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746745 5043 flags.go:64] FLAG: --registry-burst="10"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746754 5043 flags.go:64] FLAG: --registry-qps="5"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746764 5043 flags.go:64] FLAG: --reserved-cpus=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746774 5043 flags.go:64] FLAG: --reserved-memory=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746785 5043 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746794 5043 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746804 5043 flags.go:64] FLAG: --rotate-certificates="false"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746827 5043 flags.go:64] FLAG: --rotate-server-certificates="false"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746836 5043 flags.go:64] FLAG: --runonce="false"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746846 5043 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746855 5043 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746866 5043 flags.go:64] FLAG: --seccomp-default="false"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746884 5043 flags.go:64] FLAG: --serialize-image-pulls="true"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746894 5043 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746904 5043 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746913 5043 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746922 5043 flags.go:64] FLAG: --storage-driver-password="root"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746930 5043 flags.go:64] FLAG: --storage-driver-secure="false"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746939 5043 flags.go:64] FLAG: --storage-driver-table="stats"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746949 5043 flags.go:64] FLAG: --storage-driver-user="root"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746957 5043 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746967 5043 flags.go:64] FLAG: --sync-frequency="1m0s"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746976 5043 flags.go:64] FLAG: --system-cgroups=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.746985 5043 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.747000 5043 flags.go:64] FLAG: --system-reserved-cgroup=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.747009 5043 flags.go:64] FLAG: --tls-cert-file=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.747018 5043 flags.go:64] FLAG: --tls-cipher-suites="[]"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.747029 5043 flags.go:64] FLAG: --tls-min-version=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.747074 5043 flags.go:64] FLAG: --tls-private-key-file=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.747084 5043 flags.go:64] FLAG: --topology-manager-policy="none"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.747094 5043 flags.go:64] FLAG: --topology-manager-policy-options=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.747104 5043 flags.go:64] FLAG: --topology-manager-scope="container"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.747113 5043 flags.go:64] FLAG: --v="2"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.747124 5043 flags.go:64] FLAG: --version="false"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.747135 5043 flags.go:64] FLAG: --vmodule=""
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.747147 5043 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.747156 5043 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747472 5043 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747495 5043 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747507 5043 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747519 5043 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747534 5043 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747547 5043 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747559 5043 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747583 5043 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747594 5043 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747651 5043 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747662 5043 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747673 5043 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747683 5043 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747693 5043 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747701 5043 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747711 5043 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747738 5043 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747748 5043 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747758 5043 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747767 5043 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747775 5043 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747783 5043 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747792 5043 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747801 5043 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747810 5043 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747820 5043 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747829 5043 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747837 5043 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747845 5043 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747853 5043 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747861 5043 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747869 5043 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747877 5043 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747885 5043 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747893 5043 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747900 5043 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747908 5043 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747916 5043 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747924 5043 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747931 5043 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747939 5043 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747951 5043 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747958 5043 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747966 5043 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747973 5043 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747981 5043 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747989 5043 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.747996 5043 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.748004 5043 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.748011 5043 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.748020 5043 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.748027 5043 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.748035 5043 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.748042 5043 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.748050 5043 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.748059 5043 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.748066 5043 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.748074 5043 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.748082 5043 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.748091 5043 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.748099 5043 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.748107 5043 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.748114 5043 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.748122 5043 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.748134 5043 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.748143 5043 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.748151 5043 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.748159 5043 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.748167 5043 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.748175 5043 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.748182 5043 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.748206 5043 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.760220 5043 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.760260 5043 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760384 5043 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760397 5043 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760406 5043 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760414 5043 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760423 5043 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760434 5043 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760444 5043 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760454 5043 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760462 5043 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760472 5043 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760481 5043 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760488 5043 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760497 5043 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760505 5043 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760515 5043 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760530 5043 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760538 5043 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760547 5043 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760556 5043 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760564 5043 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760572 5043 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760580 5043 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760588 5043 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760632 5043 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760653 5043 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760666 5043 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760676 5043 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760685 5043 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760694 5043 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760702 5043 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760711 5043 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760719 5043 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760727 5043 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760735 5043 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760745 5043 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760755 5043 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760764 5043 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760773 5043 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760782 5043 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760791 5043 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760800 5043 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760809 5043 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760818 5043 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760826 5043 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760834 5043 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760842 5043 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760850 5043 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760858 5043 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760867 5043 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 25 07:15:36 crc 
kubenswrapper[5043]: W1125 07:15:36.760874 5043 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760884 5043 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760892 5043 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760899 5043 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760907 5043 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760915 5043 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760923 5043 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760930 5043 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760938 5043 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760946 5043 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760954 5043 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760962 5043 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760971 5043 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760979 5043 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760990 5043 feature_gate.go:330] unrecognized feature 
gate: MachineAPIProviderOpenStack Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.760999 5043 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761008 5043 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761016 5043 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761024 5043 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761031 5043 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761039 5043 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761047 5043 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.761060 5043 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761278 5043 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761291 5043 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761299 5043 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 
07:15:36.761308 5043 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761316 5043 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761324 5043 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761332 5043 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761339 5043 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761347 5043 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761355 5043 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761363 5043 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761371 5043 feature_gate.go:330] unrecognized feature gate: Example Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761379 5043 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761389 5043 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761400 5043 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761409 5043 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761418 5043 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761428 5043 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761438 5043 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761447 5043 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761456 5043 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761464 5043 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761472 5043 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761480 5043 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761488 5043 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761496 5043 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761504 5043 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761511 5043 feature_gate.go:330] unrecognized feature gate: 
InsightsConfig Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761519 5043 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761527 5043 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761535 5043 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761543 5043 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761550 5043 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761558 5043 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761566 5043 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761574 5043 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761581 5043 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761589 5043 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761596 5043 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761636 5043 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761648 5043 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761658 5043 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 
07:15:36.761665 5043 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761673 5043 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761684 5043 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761693 5043 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761701 5043 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761709 5043 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761717 5043 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761724 5043 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761733 5043 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761741 5043 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761749 5043 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761756 5043 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761764 5043 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761771 5043 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761782 5043 feature_gate.go:330] unrecognized feature gate: 
ManagedBootImages Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761790 5043 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761799 5043 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761808 5043 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761816 5043 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761824 5043 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761832 5043 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761840 5043 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761848 5043 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761855 5043 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761863 5043 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761874 5043 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761883 5043 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761893 5043 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.761902 5043 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.761916 5043 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.762157 5043 server.go:940] "Client rotation is on, will bootstrap in background" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.769173 5043 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.769338 5043 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.771017 5043 server.go:997] "Starting client certificate rotation" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.771083 5043 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.773003 5043 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-22 15:47:47.235254773 +0000 UTC Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.773119 5043 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.793697 5043 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.795834 5043 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 07:15:36 crc kubenswrapper[5043]: E1125 07:15:36.797841 5043 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.812363 5043 log.go:25] "Validated CRI v1 runtime API" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.857817 5043 log.go:25] "Validated CRI v1 image API" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.860259 5043 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.864802 5043 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-25-07-10-37-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.864834 5043 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.880343 5043 manager.go:217] Machine: {Timestamp:2025-11-25 07:15:36.877565994 +0000 UTC m=+1.045761725 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:3726d918-ef60-45bd-8631-a23c2ab917f8 BootID:7373f16d-4ee4-443d-bb12-9af926cc5ac2 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 
Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:73:30:09 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:73:30:09 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:3d:57:92 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b1:1c:88 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:65:bc:d7 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:ed:0d:c5 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:99:06:72 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6e:57:a0:be:57:79 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:3e:b2:fe:1f:e0:9b Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] 
Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.880836 
5043 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.881003 5043 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.881244 5043 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.881403 5043 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.881433 5043 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imag
efs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.881657 5043 topology_manager.go:138] "Creating topology manager with none policy" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.881667 5043 container_manager_linux.go:303] "Creating device plugin manager" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.882172 5043 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.882204 5043 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.882355 5043 state_mem.go:36] "Initialized new in-memory state store" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.882425 5043 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.887082 5043 kubelet.go:418] "Attempting to sync node with API server" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.887125 5043 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.887156 5043 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.887167 5043 kubelet.go:324] "Adding apiserver pod source" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.887179 5043 
apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.890740 5043 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.892222 5043 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.892235 5043 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Nov 25 07:15:36 crc kubenswrapper[5043]: E1125 07:15:36.892372 5043 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.892463 5043 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Nov 25 07:15:36 crc kubenswrapper[5043]: E1125 07:15:36.892664 5043 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 
07:15:36.893891 5043 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.895284 5043 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.895312 5043 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.895320 5043 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.895327 5043 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.895337 5043 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.895344 5043 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.895351 5043 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.895361 5043 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.895369 5043 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.895377 5043 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.895410 5043 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.895418 5043 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.896421 5043 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 25 07:15:36 crc 
kubenswrapper[5043]: I1125 07:15:36.896842 5043 server.go:1280] "Started kubelet" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.897549 5043 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.897827 5043 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.897834 5043 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.898204 5043 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 25 07:15:36 crc systemd[1]: Started Kubernetes Kubelet. Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.901310 5043 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.901387 5043 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.901650 5043 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:48:10.122300389 +0000 UTC Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.901730 5043 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 144h32m33.22057493s for next certificate rotation Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.902226 5043 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.902239 5043 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 25 07:15:36 crc kubenswrapper[5043]: E1125 07:15:36.902323 5043 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.902394 5043 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.902909 5043 factory.go:55] Registering systemd factory Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.902945 5043 factory.go:221] Registration of the systemd container factory successfully Nov 25 07:15:36 crc kubenswrapper[5043]: E1125 07:15:36.902977 5043 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="200ms" Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.902990 5043 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Nov 25 07:15:36 crc kubenswrapper[5043]: E1125 07:15:36.903075 5043 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.903446 5043 factory.go:153] Registering CRI-O factory Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.903486 5043 factory.go:221] Registration of the crio container factory successfully Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.903643 5043 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot 
unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.903687 5043 factory.go:103] Registering Raw factory Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.903718 5043 manager.go:1196] Started watching for new ooms in manager Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.904016 5043 server.go:460] "Adding debug handlers to kubelet server" Nov 25 07:15:36 crc kubenswrapper[5043]: E1125 07:15:36.903340 5043 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.162:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b2e9e66022f32 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 07:15:36.896810802 +0000 UTC m=+1.065006513,LastTimestamp:2025-11-25 07:15:36.896810802 +0000 UTC m=+1.065006513,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.905494 5043 manager.go:319] Starting recovery of all containers Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.917857 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.917937 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.917954 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.917966 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.917981 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.917993 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.918008 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.918022 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" 
seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.918038 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.918051 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.918090 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.918101 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.918112 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.918150 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.918164 5043 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.918176 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.918196 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.918207 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.918219 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.918232 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920319 5043 reconstruct.go:144] "Volume is marked device as 
uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920360 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920377 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920388 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920400 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920410 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 
07:15:36.920453 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920464 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920477 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920491 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920500 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920510 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920520 5043 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920533 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920582 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920621 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920639 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920655 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920670 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920680 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920692 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920704 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920716 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920731 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920745 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920758 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920801 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920811 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920822 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920832 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920842 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" 
Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920851 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920861 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920877 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920901 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920912 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920923 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920936 
5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920946 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920955 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920965 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920975 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920985 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.920993 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921002 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921011 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921021 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921030 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921040 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921050 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921059 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921068 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921085 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921095 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921109 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921122 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" 
seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921134 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921149 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921160 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921170 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921180 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921190 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 
07:15:36.921200 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921209 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921219 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921234 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921246 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921257 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921269 5043 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921282 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921295 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921308 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921318 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921331 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921345 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921359 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921371 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921381 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921391 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921400 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921411 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921422 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921432 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921446 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921464 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921483 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921496 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921508 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921519 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921531 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921544 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921557 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921571 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921583 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921594 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921625 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921635 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921647 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921658 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921667 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921675 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921684 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921696 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921709 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921721 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921737 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921750 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921761 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921772 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921782 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921791 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" 
seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921801 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921812 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921822 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921831 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921841 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921856 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 25 07:15:36 crc 
kubenswrapper[5043]: I1125 07:15:36.921869 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921884 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921895 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921908 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921922 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921938 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921951 5043 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921965 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921976 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.921987 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922000 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922013 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922027 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922040 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922055 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922067 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922080 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922093 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922104 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922115 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922131 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922143 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922156 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922168 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922181 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" 
seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922194 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922206 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922217 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922231 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922242 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922255 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922273 
5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922286 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922298 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922312 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922324 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922340 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922352 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922364 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922377 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922388 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922399 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922411 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922425 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922440 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922455 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922466 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922477 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922488 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922504 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922517 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922530 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922545 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922556 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922570 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922583 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" 
seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922596 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922642 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922658 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922670 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922682 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922693 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922709 
5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922734 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922746 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922758 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922771 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922785 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922801 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922815 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922828 5043 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922839 5043 reconstruct.go:97] "Volume reconstruction finished" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.922850 5043 reconciler.go:26] "Reconciler: start to sync state" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.932927 5043 manager.go:324] Recovery completed Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.946330 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.948928 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.948981 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.948994 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.949793 5043 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 25 07:15:36 crc 
kubenswrapper[5043]: I1125 07:15:36.949814 5043 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.949835 5043 state_mem.go:36] "Initialized new in-memory state store" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.958284 5043 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.960126 5043 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.961032 5043 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.961357 5043 kubelet.go:2335] "Starting kubelet main sync loop" Nov 25 07:15:36 crc kubenswrapper[5043]: E1125 07:15:36.961515 5043 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 25 07:15:36 crc kubenswrapper[5043]: W1125 07:15:36.965270 5043 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Nov 25 07:15:36 crc kubenswrapper[5043]: E1125 07:15:36.965381 5043 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.974114 5043 policy_none.go:49] "None policy: Start" Nov 25 07:15:36 crc kubenswrapper[5043]: I1125 07:15:36.974919 5043 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 25 07:15:36 crc 
kubenswrapper[5043]: I1125 07:15:36.974941 5043 state_mem.go:35] "Initializing new in-memory state store" Nov 25 07:15:37 crc kubenswrapper[5043]: E1125 07:15:37.002895 5043 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.031878 5043 manager.go:334] "Starting Device Plugin manager" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.031933 5043 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.031948 5043 server.go:79] "Starting device plugin registration server" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.035889 5043 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.035962 5043 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.036862 5043 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.036942 5043 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.036949 5043 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 25 07:15:37 crc kubenswrapper[5043]: E1125 07:15:37.042261 5043 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.062648 5043 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.062769 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.063951 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.063997 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.064010 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.064175 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.064379 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.064418 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.065223 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.065242 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.065250 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.065278 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.065265 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.065377 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.065480 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.065523 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.065625 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.066316 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.066333 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.066341 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.066411 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.066644 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.066692 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.066877 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.067034 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.067117 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.067144 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.067333 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.067350 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.067358 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.067171 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.067385 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.067863 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:37 crc kubenswrapper[5043]: 
I1125 07:15:37.067880 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.067910 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.068772 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.068788 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.068796 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.068797 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.068818 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.068830 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.069000 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.069025 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.070702 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.070727 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.070737 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:37 crc kubenswrapper[5043]: E1125 07:15:37.103955 5043 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="400ms" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.124646 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.124680 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.124697 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.124712 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.124726 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.124740 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.124754 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.124767 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.124781 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.124795 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.124809 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.124824 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.124840 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.124855 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.124870 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.137277 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.138211 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.138241 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.138259 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.138287 5043 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 07:15:37 crc kubenswrapper[5043]: E1125 07:15:37.138642 5043 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" 
node="crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.226461 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.226867 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.226944 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.227064 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.227165 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.227391 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.227500 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.227675 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.227591 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.227781 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.227969 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.228133 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.228231 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.228041 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.227879 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.227851 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.228422 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.228349 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.228538 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.228675 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.228777 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.228866 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 
07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.229005 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.229104 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.229171 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.229205 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.229234 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.229332 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.229389 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.229741 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.338819 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.340493 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.340562 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.340574 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.340644 5043 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 07:15:37 crc kubenswrapper[5043]: E1125 07:15:37.341294 5043 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc" Nov 25 07:15:37 crc 
kubenswrapper[5043]: I1125 07:15:37.412250 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.419420 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.443828 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: W1125 07:15:37.462705 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-e782e7298f2777bd493740952e285047f2edaf66c3d4069aacebcdd9263c0cf6 WatchSource:0}: Error finding container e782e7298f2777bd493740952e285047f2edaf66c3d4069aacebcdd9263c0cf6: Status 404 returned error can't find the container with id e782e7298f2777bd493740952e285047f2edaf66c3d4069aacebcdd9263c0cf6 Nov 25 07:15:37 crc kubenswrapper[5043]: W1125 07:15:37.464058 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-2783fdb0ab0eb5a5efb73d6d21dcd5c80744e6b6e4af23f38a223ee0d684e1e7 WatchSource:0}: Error finding container 2783fdb0ab0eb5a5efb73d6d21dcd5c80744e6b6e4af23f38a223ee0d684e1e7: Status 404 returned error can't find the container with id 2783fdb0ab0eb5a5efb73d6d21dcd5c80744e6b6e4af23f38a223ee0d684e1e7 Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.465485 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: W1125 07:15:37.470360 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-c4ec2034fb426f683ede8333994ff4d13395cbbc5c03c6ca9a64e9d7cfaabbc7 WatchSource:0}: Error finding container c4ec2034fb426f683ede8333994ff4d13395cbbc5c03c6ca9a64e9d7cfaabbc7: Status 404 returned error can't find the container with id c4ec2034fb426f683ede8333994ff4d13395cbbc5c03c6ca9a64e9d7cfaabbc7 Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.471198 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 07:15:37 crc kubenswrapper[5043]: W1125 07:15:37.481724 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-5e40ae49406244bdcb0944a40a6a343827eb35f0cc6033697edf16ddb763a9cb WatchSource:0}: Error finding container 5e40ae49406244bdcb0944a40a6a343827eb35f0cc6033697edf16ddb763a9cb: Status 404 returned error can't find the container with id 5e40ae49406244bdcb0944a40a6a343827eb35f0cc6033697edf16ddb763a9cb Nov 25 07:15:37 crc kubenswrapper[5043]: W1125 07:15:37.499262 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-a73b28b64b2a4a9f0f068c1bda1176be1f6307aa7cf9a63a5764002c8a789630 WatchSource:0}: Error finding container a73b28b64b2a4a9f0f068c1bda1176be1f6307aa7cf9a63a5764002c8a789630: Status 404 returned error can't find the container with id a73b28b64b2a4a9f0f068c1bda1176be1f6307aa7cf9a63a5764002c8a789630 Nov 25 07:15:37 crc kubenswrapper[5043]: E1125 07:15:37.505510 5043 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="800ms" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.741844 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.743345 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.743386 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.743399 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.743425 5043 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 07:15:37 crc kubenswrapper[5043]: E1125 07:15:37.744001 5043 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc" Nov 25 07:15:37 crc kubenswrapper[5043]: W1125 07:15:37.808017 5043 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Nov 25 07:15:37 crc kubenswrapper[5043]: E1125 07:15:37.808087 5043 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: 
connect: connection refused" logger="UnhandledError" Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.899068 5043 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.966085 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5e40ae49406244bdcb0944a40a6a343827eb35f0cc6033697edf16ddb763a9cb"} Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.967101 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c4ec2034fb426f683ede8333994ff4d13395cbbc5c03c6ca9a64e9d7cfaabbc7"} Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.968225 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e782e7298f2777bd493740952e285047f2edaf66c3d4069aacebcdd9263c0cf6"} Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.969156 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2783fdb0ab0eb5a5efb73d6d21dcd5c80744e6b6e4af23f38a223ee0d684e1e7"} Nov 25 07:15:37 crc kubenswrapper[5043]: I1125 07:15:37.969989 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a73b28b64b2a4a9f0f068c1bda1176be1f6307aa7cf9a63a5764002c8a789630"} Nov 25 07:15:38 crc 
kubenswrapper[5043]: W1125 07:15:38.020972 5043 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Nov 25 07:15:38 crc kubenswrapper[5043]: E1125 07:15:38.021067 5043 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Nov 25 07:15:38 crc kubenswrapper[5043]: W1125 07:15:38.034831 5043 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Nov 25 07:15:38 crc kubenswrapper[5043]: E1125 07:15:38.034921 5043 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Nov 25 07:15:38 crc kubenswrapper[5043]: E1125 07:15:38.307500 5043 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="1.6s" Nov 25 07:15:38 crc kubenswrapper[5043]: W1125 07:15:38.323471 5043 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Nov 25 07:15:38 crc kubenswrapper[5043]: E1125 07:15:38.323564 5043 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.544479 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.546353 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.546423 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.546441 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.546477 5043 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 07:15:38 crc kubenswrapper[5043]: E1125 07:15:38.547110 5043 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc" Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.848466 5043 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 25 07:15:38 crc kubenswrapper[5043]: E1125 07:15:38.850384 5043 certificate_manager.go:562] "Unhandled Error" 
err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.899388 5043 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.977697 5043 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="0d44554d44407948ef46ff47a479ac1397fe7161d440f16b47ad78a707d97f58" exitCode=0 Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.977807 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"0d44554d44407948ef46ff47a479ac1397fe7161d440f16b47ad78a707d97f58"} Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.977819 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.979892 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.979937 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.979949 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.981813 5043 generic.go:334] "Generic 
(PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96" exitCode=0 Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.981940 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.982273 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96"} Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.982673 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.982695 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.982704 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.987511 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c"} Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.987568 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b"} Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.987583 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106"} Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.989144 5043 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6" exitCode=0 Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.989230 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6"} Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.989345 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.990696 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.990722 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.990736 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.993189 5043 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc" exitCode=0 Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.993275 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.993264 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc"} Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.994080 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.994116 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:38 crc kubenswrapper[5043]: I1125 07:15:38.994128 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:39 crc kubenswrapper[5043]: I1125 07:15:38.999875 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:39 crc kubenswrapper[5043]: I1125 07:15:39.001097 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:39 crc kubenswrapper[5043]: I1125 07:15:39.001141 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:39 crc kubenswrapper[5043]: I1125 07:15:39.001154 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:39 crc kubenswrapper[5043]: W1125 07:15:39.862844 5043 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Nov 25 07:15:39 crc kubenswrapper[5043]: E1125 07:15:39.863209 5043 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Nov 25 07:15:39 crc kubenswrapper[5043]: I1125 07:15:39.898731 5043 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Nov 25 07:15:39 crc kubenswrapper[5043]: E1125 07:15:39.908893 5043 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="3.2s" Nov 25 07:15:39 crc kubenswrapper[5043]: W1125 07:15:39.950483 5043 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Nov 25 07:15:39 crc kubenswrapper[5043]: E1125 07:15:39.950561 5043 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.000403 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f5604ac4dec090e082d1843bc46f0857aad493c97e1d91208a938e7405333a67"} Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.000449 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f2869a4db622ee8d96a52f7c058914b01302bbeac8b81ed67aa9c87f77a7f7d0"} Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.000462 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bb1448111cb3e3b27389baafd33293fcb690b89e0f54007afba41778c91cb8cf"} Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.000467 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.001447 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.001475 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.001486 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.003996 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a"} Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.004039 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.004645 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.004666 5043 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.004676 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.005801 5043 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c" exitCode=0 Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.005829 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c"} Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.005959 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.006892 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.006938 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.006948 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.008626 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982"} Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.008646 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df"} Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.008670 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915"} Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.008681 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f"} Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.018288 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"faaf975385c9f583d850c769c4b634f17f0e0b358fee121274a50c149726bf5c"} Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.018385 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.019780 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.019801 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.019811 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.148034 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:40 crc kubenswrapper[5043]: 
I1125 07:15:40.150902 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.150960 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.150975 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.151010 5043 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 07:15:40 crc kubenswrapper[5043]: E1125 07:15:40.151556 5043 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.162:6443: connect: connection refused" node="crc" Nov 25 07:15:40 crc kubenswrapper[5043]: I1125 07:15:40.264913 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 07:15:40 crc kubenswrapper[5043]: W1125 07:15:40.340629 5043 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Nov 25 07:15:40 crc kubenswrapper[5043]: E1125 07:15:40.340701 5043 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Nov 25 07:15:40 crc kubenswrapper[5043]: W1125 07:15:40.573409 5043 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.162:6443: connect: connection refused Nov 25 07:15:40 crc kubenswrapper[5043]: E1125 07:15:40.573495 5043 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.162:6443: connect: connection refused" logger="UnhandledError" Nov 25 07:15:41 crc kubenswrapper[5043]: I1125 07:15:41.025749 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9f421d40736e780d17d73cbfccda5aff8ccb7d63e4e412806c57b6fdf0063c1e"} Nov 25 07:15:41 crc kubenswrapper[5043]: I1125 07:15:41.025990 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:41 crc kubenswrapper[5043]: I1125 07:15:41.027317 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:41 crc kubenswrapper[5043]: I1125 07:15:41.027345 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:41 crc kubenswrapper[5043]: I1125 07:15:41.027354 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:41 crc kubenswrapper[5043]: I1125 07:15:41.028394 5043 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf" exitCode=0 Nov 25 07:15:41 crc kubenswrapper[5043]: I1125 07:15:41.028475 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:41 crc 
kubenswrapper[5043]: I1125 07:15:41.028491 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:41 crc kubenswrapper[5043]: I1125 07:15:41.028556 5043 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 07:15:41 crc kubenswrapper[5043]: I1125 07:15:41.028650 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:41 crc kubenswrapper[5043]: I1125 07:15:41.029086 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf"} Nov 25 07:15:41 crc kubenswrapper[5043]: I1125 07:15:41.029214 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:41 crc kubenswrapper[5043]: I1125 07:15:41.029933 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:41 crc kubenswrapper[5043]: I1125 07:15:41.029976 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:41 crc kubenswrapper[5043]: I1125 07:15:41.029989 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:41 crc kubenswrapper[5043]: I1125 07:15:41.030840 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:41 crc kubenswrapper[5043]: I1125 07:15:41.030877 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:41 crc kubenswrapper[5043]: I1125 07:15:41.030890 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:41 crc kubenswrapper[5043]: I1125 
07:15:41.030890 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:41 crc kubenswrapper[5043]: I1125 07:15:41.030914 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:41 crc kubenswrapper[5043]: I1125 07:15:41.030926 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:41 crc kubenswrapper[5043]: I1125 07:15:41.030935 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:41 crc kubenswrapper[5043]: I1125 07:15:41.030956 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:41 crc kubenswrapper[5043]: I1125 07:15:41.030960 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:41 crc kubenswrapper[5043]: I1125 07:15:41.133500 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:15:42 crc kubenswrapper[5043]: I1125 07:15:42.034025 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f"} Nov 25 07:15:42 crc kubenswrapper[5043]: I1125 07:15:42.034076 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9"} Nov 25 07:15:42 crc kubenswrapper[5043]: I1125 07:15:42.034088 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:42 crc kubenswrapper[5043]: I1125 07:15:42.034088 5043 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:42 crc kubenswrapper[5043]: I1125 07:15:42.034093 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418"} Nov 25 07:15:42 crc kubenswrapper[5043]: I1125 07:15:42.034889 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:42 crc kubenswrapper[5043]: I1125 07:15:42.034929 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:42 crc kubenswrapper[5043]: I1125 07:15:42.034947 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:42 crc kubenswrapper[5043]: I1125 07:15:42.034998 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:42 crc kubenswrapper[5043]: I1125 07:15:42.035048 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:42 crc kubenswrapper[5043]: I1125 07:15:42.035071 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:42 crc kubenswrapper[5043]: I1125 07:15:42.457762 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:15:43 crc kubenswrapper[5043]: I1125 07:15:43.046106 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf"} Nov 25 07:15:43 crc kubenswrapper[5043]: I1125 07:15:43.046185 5043 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243"} Nov 25 07:15:43 crc kubenswrapper[5043]: I1125 07:15:43.046186 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:43 crc kubenswrapper[5043]: I1125 07:15:43.046254 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:43 crc kubenswrapper[5043]: I1125 07:15:43.048312 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:43 crc kubenswrapper[5043]: I1125 07:15:43.048381 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:43 crc kubenswrapper[5043]: I1125 07:15:43.048412 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:43 crc kubenswrapper[5043]: I1125 07:15:43.048496 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:43 crc kubenswrapper[5043]: I1125 07:15:43.048552 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:43 crc kubenswrapper[5043]: I1125 07:15:43.048571 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:43 crc kubenswrapper[5043]: I1125 07:15:43.067416 5043 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 25 07:15:43 crc kubenswrapper[5043]: I1125 07:15:43.352223 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:43 crc kubenswrapper[5043]: I1125 07:15:43.354003 5043 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:43 crc kubenswrapper[5043]: I1125 07:15:43.354036 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:43 crc kubenswrapper[5043]: I1125 07:15:43.354045 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:43 crc kubenswrapper[5043]: I1125 07:15:43.354065 5043 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 07:15:44 crc kubenswrapper[5043]: I1125 07:15:44.049698 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:44 crc kubenswrapper[5043]: I1125 07:15:44.049773 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:44 crc kubenswrapper[5043]: I1125 07:15:44.052037 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:44 crc kubenswrapper[5043]: I1125 07:15:44.052114 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:44 crc kubenswrapper[5043]: I1125 07:15:44.052143 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:44 crc kubenswrapper[5043]: I1125 07:15:44.052161 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:44 crc kubenswrapper[5043]: I1125 07:15:44.052201 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:44 crc kubenswrapper[5043]: I1125 07:15:44.052226 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:44 crc kubenswrapper[5043]: I1125 07:15:44.439129 5043 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:15:45 crc kubenswrapper[5043]: I1125 07:15:45.051926 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:45 crc kubenswrapper[5043]: I1125 07:15:45.053114 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:45 crc kubenswrapper[5043]: I1125 07:15:45.053192 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:45 crc kubenswrapper[5043]: I1125 07:15:45.053221 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:45 crc kubenswrapper[5043]: I1125 07:15:45.693862 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 07:15:45 crc kubenswrapper[5043]: I1125 07:15:45.694081 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:45 crc kubenswrapper[5043]: I1125 07:15:45.695176 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:45 crc kubenswrapper[5043]: I1125 07:15:45.695222 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:45 crc kubenswrapper[5043]: I1125 07:15:45.695238 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:45 crc kubenswrapper[5043]: I1125 07:15:45.808507 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 25 07:15:45 crc kubenswrapper[5043]: I1125 07:15:45.808666 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Nov 25 07:15:45 crc kubenswrapper[5043]: I1125 07:15:45.809747 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:45 crc kubenswrapper[5043]: I1125 07:15:45.809802 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:45 crc kubenswrapper[5043]: I1125 07:15:45.809823 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:45 crc kubenswrapper[5043]: I1125 07:15:45.895158 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 07:15:45 crc kubenswrapper[5043]: I1125 07:15:45.895396 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:45 crc kubenswrapper[5043]: I1125 07:15:45.897277 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:45 crc kubenswrapper[5043]: I1125 07:15:45.897339 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:45 crc kubenswrapper[5043]: I1125 07:15:45.897359 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:46 crc kubenswrapper[5043]: I1125 07:15:46.875206 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 25 07:15:46 crc kubenswrapper[5043]: I1125 07:15:46.875444 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:46 crc kubenswrapper[5043]: I1125 07:15:46.876810 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:46 crc kubenswrapper[5043]: I1125 07:15:46.876840 5043 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:46 crc kubenswrapper[5043]: I1125 07:15:46.876848 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:47 crc kubenswrapper[5043]: E1125 07:15:47.042510 5043 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 25 07:15:47 crc kubenswrapper[5043]: I1125 07:15:47.477215 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 07:15:47 crc kubenswrapper[5043]: I1125 07:15:47.477587 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:47 crc kubenswrapper[5043]: I1125 07:15:47.479767 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:47 crc kubenswrapper[5043]: I1125 07:15:47.479865 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:47 crc kubenswrapper[5043]: I1125 07:15:47.479890 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:47 crc kubenswrapper[5043]: I1125 07:15:47.522405 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 07:15:47 crc kubenswrapper[5043]: I1125 07:15:47.531439 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 07:15:48 crc kubenswrapper[5043]: I1125 07:15:48.057676 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:48 crc kubenswrapper[5043]: I1125 07:15:48.059536 5043 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:48 crc kubenswrapper[5043]: I1125 07:15:48.059594 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:48 crc kubenswrapper[5043]: I1125 07:15:48.059676 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:48 crc kubenswrapper[5043]: I1125 07:15:48.065279 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 07:15:48 crc kubenswrapper[5043]: I1125 07:15:48.694316 5043 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 25 07:15:48 crc kubenswrapper[5043]: I1125 07:15:48.694444 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 07:15:49 crc kubenswrapper[5043]: I1125 07:15:49.060359 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:49 crc kubenswrapper[5043]: I1125 07:15:49.061869 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:49 crc kubenswrapper[5043]: I1125 07:15:49.061937 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:49 crc kubenswrapper[5043]: I1125 
07:15:49.061976 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:50 crc kubenswrapper[5043]: I1125 07:15:50.062657 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:50 crc kubenswrapper[5043]: I1125 07:15:50.063710 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:50 crc kubenswrapper[5043]: I1125 07:15:50.063764 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:50 crc kubenswrapper[5043]: I1125 07:15:50.063783 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:50 crc kubenswrapper[5043]: I1125 07:15:50.899298 5043 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 25 07:15:51 crc kubenswrapper[5043]: I1125 07:15:51.067689 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 07:15:51 crc kubenswrapper[5043]: I1125 07:15:51.070160 5043 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9f421d40736e780d17d73cbfccda5aff8ccb7d63e4e412806c57b6fdf0063c1e" exitCode=255 Nov 25 07:15:51 crc kubenswrapper[5043]: I1125 07:15:51.070214 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9f421d40736e780d17d73cbfccda5aff8ccb7d63e4e412806c57b6fdf0063c1e"} Nov 25 07:15:51 crc kubenswrapper[5043]: I1125 07:15:51.070468 5043 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Nov 25 07:15:51 crc kubenswrapper[5043]: I1125 07:15:51.071669 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:51 crc kubenswrapper[5043]: I1125 07:15:51.071784 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:51 crc kubenswrapper[5043]: I1125 07:15:51.071859 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:51 crc kubenswrapper[5043]: I1125 07:15:51.072456 5043 scope.go:117] "RemoveContainer" containerID="9f421d40736e780d17d73cbfccda5aff8ccb7d63e4e412806c57b6fdf0063c1e" Nov 25 07:15:51 crc kubenswrapper[5043]: I1125 07:15:51.319409 5043 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 25 07:15:51 crc kubenswrapper[5043]: I1125 07:15:51.319469 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 25 07:15:51 crc kubenswrapper[5043]: I1125 07:15:51.324087 5043 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 
25 07:15:51 crc kubenswrapper[5043]: I1125 07:15:51.324130 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 25 07:15:52 crc kubenswrapper[5043]: I1125 07:15:52.075525 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 07:15:52 crc kubenswrapper[5043]: I1125 07:15:52.077160 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054"} Nov 25 07:15:52 crc kubenswrapper[5043]: I1125 07:15:52.077330 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:52 crc kubenswrapper[5043]: I1125 07:15:52.079032 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:52 crc kubenswrapper[5043]: I1125 07:15:52.079071 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:52 crc kubenswrapper[5043]: I1125 07:15:52.079129 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:54 crc kubenswrapper[5043]: I1125 07:15:54.456974 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:15:54 crc kubenswrapper[5043]: I1125 07:15:54.457219 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:54 crc kubenswrapper[5043]: I1125 07:15:54.457297 5043 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:15:54 crc kubenswrapper[5043]: I1125 07:15:54.459056 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:54 crc kubenswrapper[5043]: I1125 07:15:54.459172 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:54 crc kubenswrapper[5043]: I1125 07:15:54.459207 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:54 crc kubenswrapper[5043]: I1125 07:15:54.464044 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:15:55 crc kubenswrapper[5043]: I1125 07:15:55.085518 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:55 crc kubenswrapper[5043]: I1125 07:15:55.086969 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:55 crc kubenswrapper[5043]: I1125 07:15:55.087067 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:55 crc kubenswrapper[5043]: I1125 07:15:55.087096 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.087821 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.088924 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.088976 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:15:56 
crc kubenswrapper[5043]: I1125 07:15:56.088993 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:15:56 crc kubenswrapper[5043]: E1125 07:15:56.315077 5043 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.320630 5043 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.320753 5043 trace.go:236] Trace[1100562249]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 07:15:45.910) (total time: 10409ms): Nov 25 07:15:56 crc kubenswrapper[5043]: Trace[1100562249]: ---"Objects listed" error: 10409ms (07:15:56.319) Nov 25 07:15:56 crc kubenswrapper[5043]: Trace[1100562249]: [10.409302418s] [10.409302418s] END Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.320778 5043 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.320817 5043 trace.go:236] Trace[543847683]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 07:15:44.533) (total time: 11787ms): Nov 25 07:15:56 crc kubenswrapper[5043]: Trace[543847683]: ---"Objects listed" error: 11787ms (07:15:56.320) Nov 25 07:15:56 crc kubenswrapper[5043]: Trace[543847683]: [11.787688792s] [11.787688792s] END Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.320837 5043 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.321449 5043 trace.go:236] Trace[77327916]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 07:15:45.035) (total time: 11286ms): Nov 
25 07:15:56 crc kubenswrapper[5043]: Trace[77327916]: ---"Objects listed" error: 11286ms (07:15:56.321) Nov 25 07:15:56 crc kubenswrapper[5043]: Trace[77327916]: [11.286227759s] [11.286227759s] END Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.321475 5043 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.323558 5043 trace.go:236] Trace[162842436]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 07:15:44.817) (total time: 11505ms): Nov 25 07:15:56 crc kubenswrapper[5043]: Trace[162842436]: ---"Objects listed" error: 11505ms (07:15:56.323) Nov 25 07:15:56 crc kubenswrapper[5043]: Trace[162842436]: [11.50569201s] [11.50569201s] END Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.323644 5043 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 25 07:15:56 crc kubenswrapper[5043]: E1125 07:15:56.325372 5043 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.330789 5043 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.914652 5043 apiserver.go:52] "Watching apiserver" Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.919526 5043 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.920202 5043 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.921354 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.921475 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:15:56 crc kubenswrapper[5043]: E1125 07:15:56.921556 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.921679 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.921917 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.921954 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:15:56 crc kubenswrapper[5043]: E1125 07:15:56.921470 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.922421 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 07:15:56 crc kubenswrapper[5043]: E1125 07:15:56.922775 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.925340 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.925584 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.925656 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.925756 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.925870 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.925918 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.926034 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.926100 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.926405 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.984792 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 07:15:56 crc kubenswrapper[5043]: I1125 07:15:56.986305 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.002396 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.002730 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.003537 5043 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.010789 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.012793 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.016883 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.026651 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.026688 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.026704 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.026722 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.026745 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027203 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027193 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027260 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027288 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027305 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027327 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027349 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027366 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027373 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027372 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027383 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027549 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027569 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027584 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 
07:15:57.027622 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027639 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027644 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027655 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027687 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027701 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027723 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027753 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027776 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027861 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027898 5043 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027922 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027942 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027958 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027973 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027988 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028004 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028020 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028039 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028054 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028069 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028084 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028100 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028116 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028135 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028150 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028172 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028189 5043 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028207 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028222 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028241 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028258 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028274 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028289 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028307 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028324 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028338 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028357 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 
07:15:57.028374 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028391 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028407 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028437 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028456 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028471 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028486 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028504 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028519 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028534 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028550 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028566 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028582 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028598 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028638 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028653 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028669 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028687 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028704 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028719 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028735 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028750 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028767 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028783 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028798 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028813 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028830 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028846 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028863 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028880 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028950 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028967 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028982 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028998 5043 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029013 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029029 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029048 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029063 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029077 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029114 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029132 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029149 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029166 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029183 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029199 5043 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029217 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029234 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029250 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029267 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029282 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029298 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029315 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029333 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029351 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029366 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029381 5043 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029396 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029413 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029431 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029448 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029464 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029480 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029496 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029514 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029533 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029549 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 07:15:57 crc 
kubenswrapper[5043]: I1125 07:15:57.029566 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029585 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029619 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029640 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029656 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029672 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029721 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029737 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029752 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029768 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029784 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 07:15:57 crc 
kubenswrapper[5043]: I1125 07:15:57.029799 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029814 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029833 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029848 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029864 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029880 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029897 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029915 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029931 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029948 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029964 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: 
\"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029979 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029997 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030018 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030035 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030051 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030067 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030083 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030100 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030116 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030131 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030148 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 
07:15:57.030164 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030181 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030198 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030214 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030229 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030247 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030263 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030279 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030297 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030314 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030331 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 25 
07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030347 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030363 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030380 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030396 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030413 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030428 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030478 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030496 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030513 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030531 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030549 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 07:15:57 crc 
kubenswrapper[5043]: I1125 07:15:57.030566 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030582 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030636 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030656 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030672 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030691 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030708 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030726 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030742 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030759 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030776 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030794 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030811 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030828 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030846 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030862 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030878 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030895 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030915 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030934 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030952 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030969 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030989 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" 
(UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.031029 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.031052 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.031072 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.031091 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.031111 5043 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.031134 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.031154 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.031174 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.031193 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 
07:15:57.031210 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.031228 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.031245 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.031264 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.031282 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.031335 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.031346 5043 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.031357 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.031367 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.031377 5043 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.027872 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028074 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028574 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028621 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.028891 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029144 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029153 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029192 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029399 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029458 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.029906 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030223 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030227 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030529 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.030827 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.031095 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.031464 5043 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.031963 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-25 07:15:57.531942293 +0000 UTC m=+21.700138014 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.032026 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.032164 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.032226 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.032378 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.032425 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.032506 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.032560 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.032796 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.033012 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.033036 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.033050 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.033294 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.033420 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.031703 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.033474 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.033682 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.033790 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.033876 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.033896 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.033927 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.034056 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.034226 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.034355 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.034488 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.034738 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.035803 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.035891 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.035945 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.036169 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.036177 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.036333 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.036493 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.037149 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.037388 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.037531 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.037748 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.037914 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.038106 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.038147 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.038311 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.038462 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.038545 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.038590 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.038700 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:15:57.538681749 +0000 UTC m=+21.706877460 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.038697 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.038867 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.039191 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.039483 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.039685 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.039775 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.039783 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.039967 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.039977 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.040076 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.040126 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.040175 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.040377 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.040492 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.040641 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.040749 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.040848 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.040972 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.041010 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.040982 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.041189 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.041574 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.041081 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.042117 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.042139 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.042240 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.042376 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.042492 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.042764 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.044024 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.044204 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.044270 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.045752 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.045765 5043 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.046423 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.047076 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.047361 5043 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.047473 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.047507 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.047626 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.048113 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.048272 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.048467 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.048765 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.049295 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.049357 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.049195 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 07:15:57.547412329 +0000 UTC m=+21.715608160 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.049820 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.050030 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.050140 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.050505 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.050765 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.050812 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.050954 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.051040 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.051132 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.049824 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.051581 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.051692 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.051960 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.052203 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.052416 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.052619 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.052815 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.052985 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.053243 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.053359 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.053649 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.053812 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.054028 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.054052 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.054084 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.054151 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.054286 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.054498 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.054742 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.055857 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.057051 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.060122 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.060170 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.060503 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.060529 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.060759 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.061457 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.064865 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.065080 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.065430 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.065455 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.065471 5043 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.065531 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 07:15:57.565510917 +0000 UTC m=+21.733706738 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.065652 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.065789 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.068038 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.070398 5043 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.071739 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.073698 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.074460 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.074478 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.074490 5043 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.074543 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.074565 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 07:15:57.574547256 +0000 UTC m=+21.742743077 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.075154 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.076384 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.078627 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.080786 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.081410 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.082805 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.083139 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.086140 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.086932 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.087082 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.087113 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.087142 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.087398 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.088312 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.088392 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.088457 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.088914 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.089153 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.089190 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.089875 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.089882 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.089904 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.090005 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.090529 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.090883 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.091008 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.091231 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.091263 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.091306 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.091488 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.091580 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.091650 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.091677 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.091953 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.092002 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.092054 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.092124 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.092169 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.092304 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.093061 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.093153 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.093214 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.093228 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.093339 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.093361 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.093448 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.093597 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.093751 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.093772 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.100118 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.105348 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.105592 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.110721 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.112535 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.122694 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.124456 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.124870 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.131765 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.131866 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.131818 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132049 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132165 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132180 5043 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132189 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132185 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132197 5043 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132249 5043 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132263 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132276 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132288 5043 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132304 5043 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132316 5043 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132327 5043 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132339 5043 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132352 5043 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132364 5043 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132377 5043 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132391 5043 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 
07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132403 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132414 5043 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132426 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132438 5043 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132450 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132462 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132474 5043 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132486 
5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132498 5043 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132510 5043 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132528 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132544 5043 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132561 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132574 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132587 5043 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132623 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132636 5043 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132647 5043 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132659 5043 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132671 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132684 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132697 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node 
\"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132709 5043 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132721 5043 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132733 5043 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132745 5043 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132756 5043 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132768 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132780 5043 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132792 5043 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132804 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132815 5043 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132827 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132855 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132866 5043 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132878 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132894 5043 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132910 5043 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132925 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132941 5043 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132955 5043 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132965 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132977 5043 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.132989 5043 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc 
kubenswrapper[5043]: I1125 07:15:57.133013 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133024 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133035 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133046 5043 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133070 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133083 5043 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133094 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133106 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133118 5043 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133133 5043 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133144 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133155 5043 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133167 5043 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133179 5043 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133193 5043 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133204 5043 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133215 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133226 5043 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133239 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133250 5043 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133262 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133273 5043 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: 
I1125 07:15:57.133285 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133296 5043 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133307 5043 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133320 5043 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133331 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133342 5043 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133363 5043 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133374 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133386 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133397 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133409 5043 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133419 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133432 5043 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133443 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133455 5043 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node 
\"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133466 5043 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133478 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133490 5043 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133501 5043 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133516 5043 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133528 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133541 5043 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 25 
07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133553 5043 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133566 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133578 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133589 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133601 5043 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133629 5043 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133641 5043 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133652 5043 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133664 5043 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133676 5043 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133690 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133701 5043 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133713 5043 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133724 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133736 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133773 5043 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133785 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133797 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133809 5043 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133820 5043 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133832 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133844 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133855 5043 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133866 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133877 5043 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133893 5043 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133904 5043 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133915 5043 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133927 5043 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc 
kubenswrapper[5043]: I1125 07:15:57.133940 5043 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133953 5043 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133966 5043 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133978 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.133991 5043 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134003 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134015 5043 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 
07:15:57.134027 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134038 5043 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134050 5043 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134062 5043 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134074 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134085 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134098 5043 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134110 5043 reconciler_common.go:293] "Volume detached for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134122 5043 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134135 5043 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134146 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134157 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134169 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134180 5043 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134193 5043 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134205 5043 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134216 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134229 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134240 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134252 5043 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134264 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134275 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on 
node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134287 5043 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134299 5043 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134311 5043 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134327 5043 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134340 5043 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134362 5043 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134374 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: 
I1125 07:15:57.134385 5043 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134396 5043 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134407 5043 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134418 5043 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134430 5043 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134441 5043 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134453 5043 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134465 5043 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134476 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134491 5043 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134503 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134515 5043 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134527 5043 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134538 5043 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134551 5043 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134562 5043 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134574 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134586 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134597 5043 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.134672 5043 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.139097 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.147396 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.154807 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.163059 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.177500 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.188774 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c
026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\
\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.240628 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.254020 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 07:15:57 crc kubenswrapper[5043]: W1125 07:15:57.254342 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-f66631ce7fb24810eb3b8d09c51564b4a6fc061325db8d9662dc3c5f7a50a6a0 WatchSource:0}: Error finding container f66631ce7fb24810eb3b8d09c51564b4a6fc061325db8d9662dc3c5f7a50a6a0: Status 404 returned error can't find the container with id f66631ce7fb24810eb3b8d09c51564b4a6fc061325db8d9662dc3c5f7a50a6a0 Nov 25 07:15:57 crc kubenswrapper[5043]: W1125 07:15:57.268740 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-abe7d08df52f93430ea01ef98c18544e379aadfa61e8acaae1c59ac404b10813 WatchSource:0}: Error finding container abe7d08df52f93430ea01ef98c18544e379aadfa61e8acaae1c59ac404b10813: Status 404 returned error can't find the container with id abe7d08df52f93430ea01ef98c18544e379aadfa61e8acaae1c59ac404b10813 Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.296430 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 07:15:57 crc kubenswrapper[5043]: W1125 07:15:57.328784 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-b1ad6a9a9446a3801f9ee0b496386f4b5ff7139083c9cbf9e4c0b60a12e6f797 WatchSource:0}: Error finding container b1ad6a9a9446a3801f9ee0b496386f4b5ff7139083c9cbf9e4c0b60a12e6f797: Status 404 returned error can't find the container with id b1ad6a9a9446a3801f9ee0b496386f4b5ff7139083c9cbf9e4c0b60a12e6f797 Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.537003 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.537122 5043 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.537164 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 07:15:58.537152686 +0000 UTC m=+22.705348407 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.637495 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.637621 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.637668 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.637712 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 07:15:58.637687053 +0000 UTC m=+22.805882794 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:15:57 crc kubenswrapper[5043]: I1125 07:15:57.637748 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.637755 5043 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.637856 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 07:15:58.637840707 +0000 UTC m=+22.806036438 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.637874 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.637899 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.637913 5043 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.637923 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.637958 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 07:15:58.63793788 +0000 UTC m=+22.806133601 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.637976 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.637995 5043 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:15:57 crc kubenswrapper[5043]: E1125 07:15:57.638095 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 07:15:58.638070273 +0000 UTC m=+22.806266044 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.096349 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.097344 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.099058 5043 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054" exitCode=255 Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.099135 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054"} Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.099189 5043 scope.go:117] "RemoveContainer" containerID="9f421d40736e780d17d73cbfccda5aff8ccb7d63e4e412806c57b6fdf0063c1e" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.101201 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b1ad6a9a9446a3801f9ee0b496386f4b5ff7139083c9cbf9e4c0b60a12e6f797"} Nov 25 07:15:58 crc 
kubenswrapper[5043]: I1125 07:15:58.102771 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13"} Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.102797 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7"} Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.102809 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"abe7d08df52f93430ea01ef98c18544e379aadfa61e8acaae1c59ac404b10813"} Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.104508 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71"} Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.104534 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f66631ce7fb24810eb3b8d09c51564b4a6fc061325db8d9662dc3c5f7a50a6a0"} Nov 25 07:15:58 crc kubenswrapper[5043]: E1125 07:15:58.115106 5043 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.124760 5043 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.125251 5043 scope.go:117] "RemoveContainer" containerID="e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054" Nov 25 07:15:58 crc kubenswrapper[5043]: E1125 07:15:58.125459 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.133701 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9b
e8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a
93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f27
0ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:58Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.146776 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:58Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.156421 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:58Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.166359 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:58Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.177079 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:58Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.188754 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:58Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.197618 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:58Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.205540 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:58Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.216276 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:58Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.234001 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:58Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.251356 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:58Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.265274 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:58Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.277477 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:58Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.287237 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:58Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.300873 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:58Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.329797 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:58Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.347908 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f421d40736e780d17d73cbfccda5aff8ccb7d63e4e412806c57b6fdf0063c1e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:50Z\\\",\\\"message\\\":\\\"W1125 07:15:40.207478 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 07:15:40.208299 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764054940 cert, and key in /tmp/serving-cert-642723151/serving-signer.crt, /tmp/serving-cert-642723151/serving-signer.key\\\\nI1125 07:15:40.624095 1 observer_polling.go:159] Starting file observer\\\\nW1125 07:15:40.627664 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 07:15:40.627964 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:40.631292 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-642723151/tls.crt::/tmp/serving-cert-642723151/tls.key\\\\\\\"\\\\nF1125 07:15:50.911790 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] 
\\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:58Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.548395 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:15:58 crc kubenswrapper[5043]: E1125 07:15:58.548669 5043 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 07:15:58 crc kubenswrapper[5043]: E1125 07:15:58.548781 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 07:16:00.548751585 +0000 UTC m=+24.716947346 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.649806 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.649891 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.649918 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.649935 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:15:58 crc kubenswrapper[5043]: E1125 07:15:58.650014 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:16:00.649990811 +0000 UTC m=+24.818186532 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:15:58 crc kubenswrapper[5043]: E1125 07:15:58.650042 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 07:15:58 crc kubenswrapper[5043]: E1125 07:15:58.650058 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 07:15:58 crc kubenswrapper[5043]: E1125 07:15:58.650069 5043 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:15:58 crc kubenswrapper[5043]: E1125 07:15:58.650112 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 07:16:00.650099104 +0000 UTC m=+24.818294825 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:15:58 crc kubenswrapper[5043]: E1125 07:15:58.650113 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 07:15:58 crc kubenswrapper[5043]: E1125 07:15:58.650157 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 07:15:58 crc kubenswrapper[5043]: E1125 07:15:58.650163 5043 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 07:15:58 crc kubenswrapper[5043]: E1125 07:15:58.650308 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 07:16:00.650272008 +0000 UTC m=+24.818467769 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 07:15:58 crc kubenswrapper[5043]: E1125 07:15:58.650182 5043 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:15:58 crc kubenswrapper[5043]: E1125 07:15:58.650410 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 07:16:00.650390211 +0000 UTC m=+24.818586092 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.962117 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:15:58 crc kubenswrapper[5043]: E1125 07:15:58.962456 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.962135 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:15:58 crc kubenswrapper[5043]: E1125 07:15:58.962692 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.962117 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:15:58 crc kubenswrapper[5043]: E1125 07:15:58.962933 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.967769 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.968458 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.970267 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.971225 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.972677 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.973407 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.974203 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.975598 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.976555 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.977936 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.978696 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.980280 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.980958 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.981707 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.983138 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.983924 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.985379 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.986163 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.987073 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.988586 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.989288 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.991452 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.992346 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.994626 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.995534 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.996968 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 25 07:15:58 crc kubenswrapper[5043]: I1125 07:15:58.999404 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.000068 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.001681 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.002677 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.004515 5043 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.004856 5043 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.008110 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.008673 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.009051 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.010186 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.010895 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.011385 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.014142 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.016518 5043 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.017597 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.019186 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.021995 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.024219 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.025475 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.026730 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.027435 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.029575 5043 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.031154 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.032819 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.033488 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.034222 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.035032 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.035801 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.109560 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.113205 5043 
scope.go:117] "RemoveContainer" containerID="e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054" Nov 25 07:15:59 crc kubenswrapper[5043]: E1125 07:15:59.113546 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.132087 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.163668 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.179466 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.194686 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.208035 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.222166 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.237948 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.253868 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 25 07:15:59 crc kubenswrapper[5043]: I1125 07:15:59.265995 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:15:59Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:00 crc kubenswrapper[5043]: I1125 07:16:00.115747 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6"} Nov 25 07:16:00 crc kubenswrapper[5043]: I1125 07:16:00.132519 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:00Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:00 crc kubenswrapper[5043]: I1125 07:16:00.151181 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:00Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:00 crc kubenswrapper[5043]: I1125 07:16:00.165086 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:00Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:00 crc kubenswrapper[5043]: I1125 07:16:00.177475 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:00Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:00 crc kubenswrapper[5043]: I1125 07:16:00.192250 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:00Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:00 crc kubenswrapper[5043]: I1125 07:16:00.206210 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:00Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:00 crc kubenswrapper[5043]: I1125 07:16:00.219280 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:00Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:00 crc kubenswrapper[5043]: I1125 07:16:00.234206 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:00Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:00 crc kubenswrapper[5043]: I1125 07:16:00.261531 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:00Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:00 crc kubenswrapper[5043]: I1125 07:16:00.567752 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:00 crc kubenswrapper[5043]: E1125 07:16:00.568136 5043 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 07:16:00 crc kubenswrapper[5043]: E1125 07:16:00.568218 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 07:16:04.568200308 +0000 UTC m=+28.736396029 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 07:16:00 crc kubenswrapper[5043]: I1125 07:16:00.668787 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:16:00 crc kubenswrapper[5043]: I1125 07:16:00.668881 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:00 crc kubenswrapper[5043]: I1125 07:16:00.668906 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:00 crc kubenswrapper[5043]: I1125 07:16:00.668924 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:00 crc kubenswrapper[5043]: E1125 07:16:00.668942 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:16:04.66892211 +0000 UTC m=+28.837117821 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:16:00 crc kubenswrapper[5043]: E1125 07:16:00.669021 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 07:16:00 crc kubenswrapper[5043]: E1125 07:16:00.669032 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 07:16:00 crc kubenswrapper[5043]: E1125 07:16:00.669042 5043 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:16:00 crc kubenswrapper[5043]: E1125 07:16:00.669081 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 07:16:04.669072094 +0000 UTC m=+28.837267815 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:16:00 crc kubenswrapper[5043]: E1125 07:16:00.669087 5043 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 07:16:00 crc kubenswrapper[5043]: E1125 07:16:00.669105 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 07:16:00 crc kubenswrapper[5043]: E1125 07:16:00.669197 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 07:16:00 crc kubenswrapper[5043]: E1125 07:16:00.669211 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 07:16:04.669175037 +0000 UTC m=+28.837370798 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 07:16:00 crc kubenswrapper[5043]: E1125 07:16:00.669238 5043 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:16:00 crc kubenswrapper[5043]: E1125 07:16:00.669299 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 07:16:04.66928429 +0000 UTC m=+28.837480071 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:16:00 crc kubenswrapper[5043]: I1125 07:16:00.962775 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:00 crc kubenswrapper[5043]: I1125 07:16:00.962824 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:00 crc kubenswrapper[5043]: I1125 07:16:00.962796 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:00 crc kubenswrapper[5043]: E1125 07:16:00.962977 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:00 crc kubenswrapper[5043]: E1125 07:16:00.963081 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:00 crc kubenswrapper[5043]: E1125 07:16:00.963187 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.636230 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.637052 5043 scope.go:117] "RemoveContainer" containerID="e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054" Nov 25 07:16:02 crc kubenswrapper[5043]: E1125 07:16:02.637211 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.726164 5043 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.728249 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.728313 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.728327 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.728428 5043 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.753166 5043 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.753623 5043 kubelet_node_status.go:79] 
"Successfully registered node" node="crc" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.755149 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.755196 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.755209 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.755226 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.755238 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:02Z","lastTransitionTime":"2025-11-25T07:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:02 crc kubenswrapper[5043]: E1125 07:16:02.800695 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.804159 5043 csr.go:261] certificate signing request csr-ldw5x is approved, waiting to be issued Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.805493 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.805527 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.805538 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.805555 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.805585 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:02Z","lastTransitionTime":"2025-11-25T07:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.820108 5043 csr.go:257] certificate signing request csr-ldw5x is issued Nov 25 07:16:02 crc kubenswrapper[5043]: E1125 07:16:02.825788 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.831252 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.831291 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.831299 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.831317 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.831328 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:02Z","lastTransitionTime":"2025-11-25T07:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.844040 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-fxj72"] Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.844306 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-fxj72" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.846627 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.846864 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.847247 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 25 07:16:02 crc kubenswrapper[5043]: E1125 07:16:02.847704 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.859785 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.860129 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.860141 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.860160 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.860171 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:02Z","lastTransitionTime":"2025-11-25T07:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.869200 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:02 crc kubenswrapper[5043]: E1125 07:16:02.879544 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.884658 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.884687 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.884696 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.884710 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.884735 5043 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:02Z","lastTransitionTime":"2025-11-25T07:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.889041 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/07b25380-d8e4-4e3a-9f4c-01754e8b72f4-hosts-file\") pod \"node-resolver-fxj72\" (UID: \"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\") " pod="openshift-dns/node-resolver-fxj72" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.889077 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnwg9\" (UniqueName: \"kubernetes.io/projected/07b25380-d8e4-4e3a-9f4c-01754e8b72f4-kube-api-access-cnwg9\") pod \"node-resolver-fxj72\" (UID: \"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\") " pod="openshift-dns/node-resolver-fxj72" Nov 25 07:16:02 crc kubenswrapper[5043]: E1125 07:16:02.906004 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:02 crc kubenswrapper[5043]: E1125 07:16:02.906194 5043 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.910289 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.910332 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.910345 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.910364 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.910376 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:02Z","lastTransitionTime":"2025-11-25T07:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.910865 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.937816 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.949548 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-btxpx"] Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.949883 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-btxpx" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.951750 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.951996 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.952390 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.954695 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.962207 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.962218 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:02 crc kubenswrapper[5043]: E1125 07:16:02.962332 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.962344 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:02 crc kubenswrapper[5043]: E1125 07:16:02.962441 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:02 crc kubenswrapper[5043]: E1125 07:16:02.962535 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.964404 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.977681 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.989979 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.990082 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/07b25380-d8e4-4e3a-9f4c-01754e8b72f4-hosts-file\") pod \"node-resolver-fxj72\" (UID: \"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\") " pod="openshift-dns/node-resolver-fxj72" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.990115 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnwg9\" (UniqueName: \"kubernetes.io/projected/07b25380-d8e4-4e3a-9f4c-01754e8b72f4-kube-api-access-cnwg9\") pod \"node-resolver-fxj72\" (UID: \"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\") " pod="openshift-dns/node-resolver-fxj72" Nov 25 07:16:02 crc kubenswrapper[5043]: I1125 07:16:02.990197 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" 
(UniqueName: \"kubernetes.io/host-path/07b25380-d8e4-4e3a-9f4c-01754e8b72f4-hosts-file\") pod \"node-resolver-fxj72\" (UID: \"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\") " pod="openshift-dns/node-resolver-fxj72" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.001476 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.011731 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.011761 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.011773 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.011789 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.011802 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:03Z","lastTransitionTime":"2025-11-25T07:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.013752 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnwg9\" (UniqueName: \"kubernetes.io/projected/07b25380-d8e4-4e3a-9f4c-01754e8b72f4-kube-api-access-cnwg9\") pod \"node-resolver-fxj72\" (UID: \"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\") " pod="openshift-dns/node-resolver-fxj72" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.014553 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.025042 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.044495 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.091358 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwqwt\" (UniqueName: \"kubernetes.io/projected/5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba-kube-api-access-jwqwt\") pod \"node-ca-btxpx\" (UID: \"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\") " pod="openshift-image-registry/node-ca-btxpx" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.091460 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba-serviceca\") pod \"node-ca-btxpx\" (UID: \"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\") " pod="openshift-image-registry/node-ca-btxpx" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.091486 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba-host\") pod \"node-ca-btxpx\" (UID: \"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\") " pod="openshift-image-registry/node-ca-btxpx" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.096529 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.113922 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.113954 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.113963 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.113979 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.114025 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:03Z","lastTransitionTime":"2025-11-25T07:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.123159 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.135157 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.155529 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-fxj72" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.158631 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.173533 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.186426 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.191882 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba-serviceca\") pod \"node-ca-btxpx\" (UID: \"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\") " pod="openshift-image-registry/node-ca-btxpx" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.192748 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba-serviceca\") pod \"node-ca-btxpx\" (UID: \"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\") " pod="openshift-image-registry/node-ca-btxpx" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.192811 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwqwt\" (UniqueName: \"kubernetes.io/projected/5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba-kube-api-access-jwqwt\") pod \"node-ca-btxpx\" (UID: \"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\") " pod="openshift-image-registry/node-ca-btxpx" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.193148 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba-host\") pod \"node-ca-btxpx\" (UID: \"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\") " pod="openshift-image-registry/node-ca-btxpx" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.193224 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba-host\") pod \"node-ca-btxpx\" (UID: \"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\") " pod="openshift-image-registry/node-ca-btxpx" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.198299 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.210251 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwqwt\" (UniqueName: \"kubernetes.io/projected/5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba-kube-api-access-jwqwt\") pod \"node-ca-btxpx\" (UID: \"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\") " pod="openshift-image-registry/node-ca-btxpx" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.216910 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.220149 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.220201 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.220213 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.220229 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.220241 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:03Z","lastTransitionTime":"2025-11-25T07:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.233303 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.248945 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.260118 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-btxpx" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.262362 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: W1125 07:16:03.277562 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5da1f87c_5e0d_4f95_8cf0_b59a7c2273ba.slice/crio-922339c2c8695d0d4fecd262337c9a51ba5e016811e4d8fda49973f8c3a17c91 WatchSource:0}: Error finding container 922339c2c8695d0d4fecd262337c9a51ba5e016811e4d8fda49973f8c3a17c91: Status 404 returned error can't find the container with id 922339c2c8695d0d4fecd262337c9a51ba5e016811e4d8fda49973f8c3a17c91 Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 
07:16:03.322234 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.322267 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.322278 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.322292 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.322303 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:03Z","lastTransitionTime":"2025-11-25T07:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.341120 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-5gnzs"] Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.341391 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-jzwnx"] Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.343078 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.346253 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-pbsfz"] Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.348006 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.348111 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.348310 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.348430 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.348878 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.348990 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.349341 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.358410 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.358487 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.358769 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.359122 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.359128 5043 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.359272 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.359427 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.378952 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394153 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-etc-kubernetes\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394187 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/6aa0c167-9335-44ce-975c-715ce1f43383-cni-binary-copy\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394204 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/01b1c815-0612-4834-85a8-4662893adcc7-cnibin\") pod \"multus-additional-cni-plugins-pbsfz\" (UID: \"01b1c815-0612-4834-85a8-4662893adcc7\") " pod="openshift-multus/multus-additional-cni-plugins-pbsfz" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394223 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-host-var-lib-cni-bin\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394239 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68xp8\" (UniqueName: \"kubernetes.io/projected/707b7a7f-020e-4719-9db9-7d1f3294b25c-kube-api-access-68xp8\") pod \"machine-config-daemon-jzwnx\" (UID: \"707b7a7f-020e-4719-9db9-7d1f3294b25c\") " pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394256 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-system-cni-dir\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394280 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" 
(UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-hostroot\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394295 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/01b1c815-0612-4834-85a8-4662893adcc7-cni-binary-copy\") pod \"multus-additional-cni-plugins-pbsfz\" (UID: \"01b1c815-0612-4834-85a8-4662893adcc7\") " pod="openshift-multus/multus-additional-cni-plugins-pbsfz" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394310 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-host-run-multus-certs\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394323 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-multus-cni-dir\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394336 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-host-run-netns\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394352 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/6aa0c167-9335-44ce-975c-715ce1f43383-multus-daemon-config\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394367 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-cnibin\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394384 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/707b7a7f-020e-4719-9db9-7d1f3294b25c-proxy-tls\") pod \"machine-config-daemon-jzwnx\" (UID: \"707b7a7f-020e-4719-9db9-7d1f3294b25c\") " pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394407 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-host-var-lib-kubelet\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394422 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/01b1c815-0612-4834-85a8-4662893adcc7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pbsfz\" (UID: \"01b1c815-0612-4834-85a8-4662893adcc7\") " pod="openshift-multus/multus-additional-cni-plugins-pbsfz" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394435 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" 
(UniqueName: \"kubernetes.io/host-path/707b7a7f-020e-4719-9db9-7d1f3294b25c-rootfs\") pod \"machine-config-daemon-jzwnx\" (UID: \"707b7a7f-020e-4719-9db9-7d1f3294b25c\") " pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394449 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/01b1c815-0612-4834-85a8-4662893adcc7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pbsfz\" (UID: \"01b1c815-0612-4834-85a8-4662893adcc7\") " pod="openshift-multus/multus-additional-cni-plugins-pbsfz" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394462 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-os-release\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394485 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/01b1c815-0612-4834-85a8-4662893adcc7-os-release\") pod \"multus-additional-cni-plugins-pbsfz\" (UID: \"01b1c815-0612-4834-85a8-4662893adcc7\") " pod="openshift-multus/multus-additional-cni-plugins-pbsfz" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394500 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q4w8\" (UniqueName: \"kubernetes.io/projected/01b1c815-0612-4834-85a8-4662893adcc7-kube-api-access-2q4w8\") pod \"multus-additional-cni-plugins-pbsfz\" (UID: \"01b1c815-0612-4834-85a8-4662893adcc7\") " pod="openshift-multus/multus-additional-cni-plugins-pbsfz" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394517 5043 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/707b7a7f-020e-4719-9db9-7d1f3294b25c-mcd-auth-proxy-config\") pod \"machine-config-daemon-jzwnx\" (UID: \"707b7a7f-020e-4719-9db9-7d1f3294b25c\") " pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394532 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-multus-socket-dir-parent\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394545 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-host-run-k8s-cni-cncf-io\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394558 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-multus-conf-dir\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394570 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn4bw\" (UniqueName: \"kubernetes.io/projected/6aa0c167-9335-44ce-975c-715ce1f43383-kube-api-access-zn4bw\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc 
kubenswrapper[5043]: I1125 07:16:03.394584 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/01b1c815-0612-4834-85a8-4662893adcc7-system-cni-dir\") pod \"multus-additional-cni-plugins-pbsfz\" (UID: \"01b1c815-0612-4834-85a8-4662893adcc7\") " pod="openshift-multus/multus-additional-cni-plugins-pbsfz" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.394617 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-host-var-lib-cni-multus\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.412511 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",
\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}
},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.424741 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.424773 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:03 crc 
kubenswrapper[5043]: I1125 07:16:03.424781 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.424793 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.424802 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:03Z","lastTransitionTime":"2025-11-25T07:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.426238 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.438437 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.457239 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.491824 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495416 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-multus-socket-dir-parent\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495450 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-host-run-k8s-cni-cncf-io\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495466 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-multus-conf-dir\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495488 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/707b7a7f-020e-4719-9db9-7d1f3294b25c-mcd-auth-proxy-config\") pod \"machine-config-daemon-jzwnx\" (UID: \"707b7a7f-020e-4719-9db9-7d1f3294b25c\") " pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495508 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-host-var-lib-cni-multus\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495523 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn4bw\" (UniqueName: \"kubernetes.io/projected/6aa0c167-9335-44ce-975c-715ce1f43383-kube-api-access-zn4bw\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495541 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/01b1c815-0612-4834-85a8-4662893adcc7-system-cni-dir\") pod \"multus-additional-cni-plugins-pbsfz\" (UID: \"01b1c815-0612-4834-85a8-4662893adcc7\") " pod="openshift-multus/multus-additional-cni-plugins-pbsfz" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495558 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-etc-kubernetes\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495575 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6aa0c167-9335-44ce-975c-715ce1f43383-cni-binary-copy\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495593 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/01b1c815-0612-4834-85a8-4662893adcc7-cnibin\") pod \"multus-additional-cni-plugins-pbsfz\" (UID: \"01b1c815-0612-4834-85a8-4662893adcc7\") " pod="openshift-multus/multus-additional-cni-plugins-pbsfz" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495629 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-system-cni-dir\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495645 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-host-var-lib-cni-bin\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495660 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68xp8\" (UniqueName: \"kubernetes.io/projected/707b7a7f-020e-4719-9db9-7d1f3294b25c-kube-api-access-68xp8\") pod \"machine-config-daemon-jzwnx\" 
(UID: \"707b7a7f-020e-4719-9db9-7d1f3294b25c\") " pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495684 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-hostroot\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495706 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/01b1c815-0612-4834-85a8-4662893adcc7-cni-binary-copy\") pod \"multus-additional-cni-plugins-pbsfz\" (UID: \"01b1c815-0612-4834-85a8-4662893adcc7\") " pod="openshift-multus/multus-additional-cni-plugins-pbsfz" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495722 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-host-run-multus-certs\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495738 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6aa0c167-9335-44ce-975c-715ce1f43383-multus-daemon-config\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495754 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-multus-cni-dir\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" 
Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495769 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-host-run-netns\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495783 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-cnibin\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495797 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/707b7a7f-020e-4719-9db9-7d1f3294b25c-proxy-tls\") pod \"machine-config-daemon-jzwnx\" (UID: \"707b7a7f-020e-4719-9db9-7d1f3294b25c\") " pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495817 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-host-var-lib-kubelet\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495831 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/01b1c815-0612-4834-85a8-4662893adcc7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pbsfz\" (UID: \"01b1c815-0612-4834-85a8-4662893adcc7\") " pod="openshift-multus/multus-additional-cni-plugins-pbsfz" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495846 5043 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-os-release\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495861 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/707b7a7f-020e-4719-9db9-7d1f3294b25c-rootfs\") pod \"machine-config-daemon-jzwnx\" (UID: \"707b7a7f-020e-4719-9db9-7d1f3294b25c\") " pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495876 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/01b1c815-0612-4834-85a8-4662893adcc7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pbsfz\" (UID: \"01b1c815-0612-4834-85a8-4662893adcc7\") " pod="openshift-multus/multus-additional-cni-plugins-pbsfz" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495901 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/01b1c815-0612-4834-85a8-4662893adcc7-os-release\") pod \"multus-additional-cni-plugins-pbsfz\" (UID: \"01b1c815-0612-4834-85a8-4662893adcc7\") " pod="openshift-multus/multus-additional-cni-plugins-pbsfz" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.495916 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q4w8\" (UniqueName: \"kubernetes.io/projected/01b1c815-0612-4834-85a8-4662893adcc7-kube-api-access-2q4w8\") pod \"multus-additional-cni-plugins-pbsfz\" (UID: \"01b1c815-0612-4834-85a8-4662893adcc7\") " pod="openshift-multus/multus-additional-cni-plugins-pbsfz" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.496203 5043 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-multus-socket-dir-parent\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.496233 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-host-run-k8s-cni-cncf-io\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.496253 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-multus-conf-dir\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.496886 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/707b7a7f-020e-4719-9db9-7d1f3294b25c-mcd-auth-proxy-config\") pod \"machine-config-daemon-jzwnx\" (UID: \"707b7a7f-020e-4719-9db9-7d1f3294b25c\") " pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.496929 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-host-var-lib-cni-multus\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.497068 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" 
(UniqueName: \"kubernetes.io/host-path/01b1c815-0612-4834-85a8-4662893adcc7-system-cni-dir\") pod \"multus-additional-cni-plugins-pbsfz\" (UID: \"01b1c815-0612-4834-85a8-4662893adcc7\") " pod="openshift-multus/multus-additional-cni-plugins-pbsfz" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.497119 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-etc-kubernetes\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.497737 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6aa0c167-9335-44ce-975c-715ce1f43383-cni-binary-copy\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.497789 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/01b1c815-0612-4834-85a8-4662893adcc7-cnibin\") pod \"multus-additional-cni-plugins-pbsfz\" (UID: \"01b1c815-0612-4834-85a8-4662893adcc7\") " pod="openshift-multus/multus-additional-cni-plugins-pbsfz" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.497895 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-system-cni-dir\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.498128 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-host-var-lib-cni-bin\") pod \"multus-5gnzs\" (UID: 
\"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.498278 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-hostroot\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.498741 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/01b1c815-0612-4834-85a8-4662893adcc7-cni-binary-copy\") pod \"multus-additional-cni-plugins-pbsfz\" (UID: \"01b1c815-0612-4834-85a8-4662893adcc7\") " pod="openshift-multus/multus-additional-cni-plugins-pbsfz" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.498776 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-host-run-multus-certs\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.499201 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6aa0c167-9335-44ce-975c-715ce1f43383-multus-daemon-config\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.499257 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-multus-cni-dir\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.499285 5043 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-host-run-netns\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.499313 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-cnibin\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.499676 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-os-release\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.499710 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6aa0c167-9335-44ce-975c-715ce1f43383-host-var-lib-kubelet\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.500147 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/01b1c815-0612-4834-85a8-4662893adcc7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pbsfz\" (UID: \"01b1c815-0612-4834-85a8-4662893adcc7\") " pod="openshift-multus/multus-additional-cni-plugins-pbsfz" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.500511 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/707b7a7f-020e-4719-9db9-7d1f3294b25c-rootfs\") pod 
\"machine-config-daemon-jzwnx\" (UID: \"707b7a7f-020e-4719-9db9-7d1f3294b25c\") " pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.500538 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/01b1c815-0612-4834-85a8-4662893adcc7-os-release\") pod \"multus-additional-cni-plugins-pbsfz\" (UID: \"01b1c815-0612-4834-85a8-4662893adcc7\") " pod="openshift-multus/multus-additional-cni-plugins-pbsfz" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.500807 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/01b1c815-0612-4834-85a8-4662893adcc7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pbsfz\" (UID: \"01b1c815-0612-4834-85a8-4662893adcc7\") " pod="openshift-multus/multus-additional-cni-plugins-pbsfz" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.501964 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/707b7a7f-020e-4719-9db9-7d1f3294b25c-proxy-tls\") pod \"machine-config-daemon-jzwnx\" (UID: \"707b7a7f-020e-4719-9db9-7d1f3294b25c\") " pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.528542 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68xp8\" (UniqueName: \"kubernetes.io/projected/707b7a7f-020e-4719-9db9-7d1f3294b25c-kube-api-access-68xp8\") pod \"machine-config-daemon-jzwnx\" (UID: \"707b7a7f-020e-4719-9db9-7d1f3294b25c\") " pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.530469 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn4bw\" (UniqueName: 
\"kubernetes.io/projected/6aa0c167-9335-44ce-975c-715ce1f43383-kube-api-access-zn4bw\") pod \"multus-5gnzs\" (UID: \"6aa0c167-9335-44ce-975c-715ce1f43383\") " pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.531798 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.531839 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.531850 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.531867 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.531878 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:03Z","lastTransitionTime":"2025-11-25T07:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.532081 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q4w8\" (UniqueName: \"kubernetes.io/projected/01b1c815-0612-4834-85a8-4662893adcc7-kube-api-access-2q4w8\") pod \"multus-additional-cni-plugins-pbsfz\" (UID: \"01b1c815-0612-4834-85a8-4662893adcc7\") " pod="openshift-multus/multus-additional-cni-plugins-pbsfz" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.537527 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.584683 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.609951 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.634132 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.634173 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.634182 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.634196 5043 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.634206 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:03Z","lastTransitionTime":"2025-11-25T07:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.634529 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.651202 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.666919 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-5gnzs" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.669513 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.673924 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.680183 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: W1125 07:16:03.683514 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6aa0c167_9335_44ce_975c_715ce1f43383.slice/crio-f3196c581bb2d0c27e4d3d40753d73062379199673afd0fb3369dc30817afb38 WatchSource:0}: Error finding container f3196c581bb2d0c27e4d3d40753d73062379199673afd0fb3369dc30817afb38: Status 404 returned error can't find the container with id f3196c581bb2d0c27e4d3d40753d73062379199673afd0fb3369dc30817afb38 Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.684305 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" Nov 25 07:16:03 crc kubenswrapper[5043]: W1125 07:16:03.696790 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01b1c815_0612_4834_85a8_4662893adcc7.slice/crio-db1a579060412c7f7c42d660fdefbb37410c1a710905ba94fd4ce553aba5247a WatchSource:0}: Error finding container db1a579060412c7f7c42d660fdefbb37410c1a710905ba94fd4ce553aba5247a: Status 404 returned error can't find the container with id db1a579060412c7f7c42d660fdefbb37410c1a710905ba94fd4ce553aba5247a Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.698408 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.722749 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var
/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.737503 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.737539 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.737547 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:03 
crc kubenswrapper[5043]: I1125 07:16:03.737560 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.737569 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:03Z","lastTransitionTime":"2025-11-25T07:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.739999 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.753830 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.761979 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m5zz6"] Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.762691 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.768188 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.768413 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.768586 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.768729 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.769283 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.769649 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.769999 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.778815 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.800003 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.811324 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.821199 5043 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-11-25 07:11:02 +0000 UTC, rotation deadline is 2026-10-05 12:49:01.333128821 +0000 UTC Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.821247 5043 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7541h32m57.511884203s for next certificate rotation Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.822474 5043 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.835168 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.840866 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.840912 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.840921 5043 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.840933 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.840942 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:03Z","lastTransitionTime":"2025-11-25T07:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.847159 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.862241 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.873409 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.890688 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.900912 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.900963 5043 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-var-lib-openvswitch\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.901021 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a8785a4c-82ff-4a78-83a0-463e977df530-ovnkube-config\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.901049 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrzpn\" (UniqueName: \"kubernetes.io/projected/a8785a4c-82ff-4a78-83a0-463e977df530-kube-api-access-hrzpn\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.901069 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-kubelet\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.901089 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-run-ovn-kubernetes\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:03 crc kubenswrapper[5043]: 
I1125 07:16:03.901148 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-run-netns\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.901206 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a8785a4c-82ff-4a78-83a0-463e977df530-env-overrides\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.901249 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-run-ovn\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.901337 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-node-log\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.901376 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a8785a4c-82ff-4a78-83a0-463e977df530-ovn-node-metrics-cert\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:03 crc kubenswrapper[5043]: 
I1125 07:16:03.901403 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-systemd-units\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.901420 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-slash\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.901437 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-run-systemd\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.901455 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-run-openvswitch\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.901479 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-cni-bin\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:03 crc kubenswrapper[5043]: 
I1125 07:16:03.901493 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-cni-netd\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.901511 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-log-socket\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.901533 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-etc-openvswitch\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.901548 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a8785a4c-82ff-4a78-83a0-463e977df530-ovnkube-script-lib\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.913263 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.931278 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.943183 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.943225 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.943238 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.943256 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.943271 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:03Z","lastTransitionTime":"2025-11-25T07:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.944411 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.957329 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.973538 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.984709 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:03 crc kubenswrapper[5043]: I1125 07:16:03.994920 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002136 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-run-ovn-kubernetes\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002181 5043 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-run-netns\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002203 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a8785a4c-82ff-4a78-83a0-463e977df530-env-overrides\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002230 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-run-ovn\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002247 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-run-ovn-kubernetes\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002280 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-node-log\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002296 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-run-netns\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002252 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-node-log\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002327 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-run-ovn\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002416 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a8785a4c-82ff-4a78-83a0-463e977df530-ovn-node-metrics-cert\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002461 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-systemd-units\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002490 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-slash\") pod \"ovnkube-node-m5zz6\" 
(UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002515 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-run-systemd\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002532 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-run-openvswitch\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002554 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-cni-bin\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002580 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-log-socket\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002615 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-cni-netd\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002593 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-slash\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002634 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-run-openvswitch\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002647 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-run-systemd\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002673 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-log-socket\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002650 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-etc-openvswitch\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002680 5043 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-cni-netd\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002593 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-systemd-units\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002698 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-etc-openvswitch\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002762 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-cni-bin\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002803 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a8785a4c-82ff-4a78-83a0-463e977df530-ovnkube-script-lib\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002902 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.002951 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-var-lib-openvswitch\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.003004 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.003029 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a8785a4c-82ff-4a78-83a0-463e977df530-ovnkube-config\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.003054 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrzpn\" (UniqueName: \"kubernetes.io/projected/a8785a4c-82ff-4a78-83a0-463e977df530-kube-api-access-hrzpn\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.003062 5043 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-var-lib-openvswitch\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.003082 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-kubelet\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.003148 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-kubelet\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.003683 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a8785a4c-82ff-4a78-83a0-463e977df530-ovnkube-config\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.003709 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a8785a4c-82ff-4a78-83a0-463e977df530-ovnkube-script-lib\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.003849 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/a8785a4c-82ff-4a78-83a0-463e977df530-env-overrides\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.006879 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.007093 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a8785a4c-82ff-4a78-83a0-463e977df530-ovn-node-metrics-cert\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.019467 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrzpn\" (UniqueName: 
\"kubernetes.io/projected/a8785a4c-82ff-4a78-83a0-463e977df530-kube-api-access-hrzpn\") pod \"ovnkube-node-m5zz6\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.023445 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.039089 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.045990 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.046028 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.046039 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.046058 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.046072 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:04Z","lastTransitionTime":"2025-11-25T07:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.056340 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.069185 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.082040 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.082556 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-synce
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: W1125 07:16:04.092553 5043 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8785a4c_82ff_4a78_83a0_463e977df530.slice/crio-947a2e76ce2256473238a7415a5cbad64ee0d3874e34aed34079e323608d783d WatchSource:0}: Error finding container 947a2e76ce2256473238a7415a5cbad64ee0d3874e34aed34079e323608d783d: Status 404 returned error can't find the container with id 947a2e76ce2256473238a7415a5cbad64ee0d3874e34aed34079e323608d783d Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.097049 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.114194 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.128088 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5gnzs" event={"ID":"6aa0c167-9335-44ce-975c-715ce1f43383","Type":"ContainerStarted","Data":"c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a"} Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.128131 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5gnzs" event={"ID":"6aa0c167-9335-44ce-975c-715ce1f43383","Type":"ContainerStarted","Data":"f3196c581bb2d0c27e4d3d40753d73062379199673afd0fb3369dc30817afb38"} Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.128856 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerStarted","Data":"947a2e76ce2256473238a7415a5cbad64ee0d3874e34aed34079e323608d783d"} Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.130515 5043 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" event={"ID":"01b1c815-0612-4834-85a8-4662893adcc7","Type":"ContainerStarted","Data":"8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e"} Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.130550 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" event={"ID":"01b1c815-0612-4834-85a8-4662893adcc7","Type":"ContainerStarted","Data":"db1a579060412c7f7c42d660fdefbb37410c1a710905ba94fd4ce553aba5247a"} Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.132202 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fxj72" event={"ID":"07b25380-d8e4-4e3a-9f4c-01754e8b72f4","Type":"ContainerStarted","Data":"12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11"} Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.132244 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fxj72" event={"ID":"07b25380-d8e4-4e3a-9f4c-01754e8b72f4","Type":"ContainerStarted","Data":"f573bb8e4cc6a80081d0bb394c3201ad21eefa2fa9c82987dc64ae4abb2402f5"} Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.133837 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-btxpx" event={"ID":"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba","Type":"ContainerStarted","Data":"9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240"} Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.133870 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-btxpx" event={"ID":"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba","Type":"ContainerStarted","Data":"922339c2c8695d0d4fecd262337c9a51ba5e016811e4d8fda49973f8c3a17c91"} Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.135459 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df"} Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.135485 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"ce73e562772077e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf"} Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.135498 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"0868d1f619ba8828f7062ba1cd0b29342e1d0579d452e71271fd64b0147392d4"} Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.141695 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.148217 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.148270 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.148281 5043 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.148299 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.148309 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:04Z","lastTransitionTime":"2025-11-25T07:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.153209 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"containers with unready 
status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.163912 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.176242 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.197017 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var
/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.215967 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.250825 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.250861 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.250869 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.250882 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:04 crc kubenswrapper[5043]: 
I1125 07:16:04.250891 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:04Z","lastTransitionTime":"2025-11-25T07:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.253892 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.295429 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.335378 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.353174 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:04 crc 
kubenswrapper[5043]: I1125 07:16:04.353235 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.353251 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.353275 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.353289 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:04Z","lastTransitionTime":"2025-11-25T07:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.375049 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.416286 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.453928 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.455763 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.455810 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.455821 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.455839 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.455850 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:04Z","lastTransitionTime":"2025-11-25T07:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.512324 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.536161 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.559334 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.559384 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.559404 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.559422 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.559434 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:04Z","lastTransitionTime":"2025-11-25T07:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.578362 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.608694 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:04 crc kubenswrapper[5043]: E1125 07:16:04.608927 5043 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 07:16:04 crc kubenswrapper[5043]: E1125 07:16:04.609041 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 07:16:12.609008557 +0000 UTC m=+36.777204338 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.613754 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.654990 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.661940 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.661971 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.661981 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.661994 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.662003 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:04Z","lastTransitionTime":"2025-11-25T07:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.692887 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.709252 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.709331 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.709365 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.709384 
5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:04 crc kubenswrapper[5043]: E1125 07:16:04.709489 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 07:16:04 crc kubenswrapper[5043]: E1125 07:16:04.709514 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 07:16:04 crc kubenswrapper[5043]: E1125 07:16:04.709546 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 07:16:04 crc kubenswrapper[5043]: E1125 07:16:04.709562 5043 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:16:04 crc kubenswrapper[5043]: E1125 07:16:04.709548 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 07:16:04 crc kubenswrapper[5043]: E1125 07:16:04.709642 5043 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:16:04 crc kubenswrapper[5043]: E1125 07:16:04.709498 5043 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 07:16:04 crc kubenswrapper[5043]: E1125 07:16:04.709495 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:16:12.709453101 +0000 UTC m=+36.877648822 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:16:04 crc kubenswrapper[5043]: E1125 07:16:04.709721 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 07:16:12.709702668 +0000 UTC m=+36.877898459 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:16:04 crc kubenswrapper[5043]: E1125 07:16:04.709760 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 07:16:12.709729009 +0000 UTC m=+36.877924820 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:16:04 crc kubenswrapper[5043]: E1125 07:16:04.709784 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 07:16:12.70977584 +0000 UTC m=+36.877971671 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.734140 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 
07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.764113 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.764163 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.764172 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.764186 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.764197 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:04Z","lastTransitionTime":"2025-11-25T07:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.775651 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.821656 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.855457 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.866567 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.866630 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.866643 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.866658 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.866682 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:04Z","lastTransitionTime":"2025-11-25T07:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.898488 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc07289
3abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.934281 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.961750 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.961820 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.961853 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:04 crc kubenswrapper[5043]: E1125 07:16:04.962020 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:04 crc kubenswrapper[5043]: E1125 07:16:04.962117 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:04 crc kubenswrapper[5043]: E1125 07:16:04.962199 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.968127 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.968163 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.968174 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.968188 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.968198 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:04Z","lastTransitionTime":"2025-11-25T07:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:04 crc kubenswrapper[5043]: I1125 07:16:04.973803 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:04Z 
is after 2025-08-24T17:21:41Z" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.020513 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.053567 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.070042 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.070078 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.070087 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.070100 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.070126 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:05Z","lastTransitionTime":"2025-11-25T07:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.094203 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.139283 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.139580 5043 generic.go:334] "Generic (PLEG): container finished" podID="a8785a4c-82ff-4a78-83a0-463e977df530" 
containerID="2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16" exitCode=0 Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.139643 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerDied","Data":"2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16"} Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.140968 5043 generic.go:334] "Generic (PLEG): container finished" podID="01b1c815-0612-4834-85a8-4662893adcc7" containerID="8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e" exitCode=0 Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.141049 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" event={"ID":"01b1c815-0612-4834-85a8-4662893adcc7","Type":"ContainerDied","Data":"8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e"} Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.173797 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.173834 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.173844 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.173864 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.173876 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:05Z","lastTransitionTime":"2025-11-25T07:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.174685 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.220469 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.259235 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.276477 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.276517 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.276527 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.276546 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.276558 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:05Z","lastTransitionTime":"2025-11-25T07:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.296391 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.335905 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.374080 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.378967 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:05 crc 
kubenswrapper[5043]: I1125 07:16:05.379014 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.379027 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.379045 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.379054 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:05Z","lastTransitionTime":"2025-11-25T07:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.413171 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0
d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.456019 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.481820 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.481851 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.481861 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.481876 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.481887 5043 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:05Z","lastTransitionTime":"2025-11-25T07:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.502301 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.539341 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426
feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.574573 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.584135 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.584181 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.584192 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.584209 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.584220 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:05Z","lastTransitionTime":"2025-11-25T07:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.618441 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2
ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.653067 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.686082 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.686117 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.686127 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.686142 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.686152 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:05Z","lastTransitionTime":"2025-11-25T07:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.705560 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.735981 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.773662 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:05Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.788358 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.788399 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.788408 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.788422 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.788431 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:05Z","lastTransitionTime":"2025-11-25T07:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.890344 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.890430 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.890443 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.890459 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.890489 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:05Z","lastTransitionTime":"2025-11-25T07:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.993512 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.993826 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.993837 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.993855 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:05 crc kubenswrapper[5043]: I1125 07:16:05.993865 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:05Z","lastTransitionTime":"2025-11-25T07:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.096105 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.096133 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.096141 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.096155 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.096163 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:06Z","lastTransitionTime":"2025-11-25T07:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.148740 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerStarted","Data":"2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1"} Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.148804 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerStarted","Data":"9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245"} Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.148821 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerStarted","Data":"9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535"} Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.148835 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerStarted","Data":"eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1"} Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.150723 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" event={"ID":"01b1c815-0612-4834-85a8-4662893adcc7","Type":"ContainerStarted","Data":"adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582"} Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.167277 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.192193 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.198372 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.198395 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.198403 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.198416 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 
07:16:06.198424 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:06Z","lastTransitionTime":"2025-11-25T07:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.205222 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.216976 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.231993 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.249211 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.262790 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e5627720
77e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.275912 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.286402 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.300105 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.300799 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.300833 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.300841 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.300857 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.300866 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:06Z","lastTransitionTime":"2025-11-25T07:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.314078 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.325129 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.341052 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.353011 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.371663 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.402689 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.402990 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.403002 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.403017 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.403027 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:06Z","lastTransitionTime":"2025-11-25T07:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.505800 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.505847 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.505857 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.505872 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.505883 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:06Z","lastTransitionTime":"2025-11-25T07:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.608266 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.608310 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.608320 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.608336 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.608351 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:06Z","lastTransitionTime":"2025-11-25T07:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.711522 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.711579 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.711596 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.711650 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.711667 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:06Z","lastTransitionTime":"2025-11-25T07:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.771493 5043 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.813974 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.814018 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.814029 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.814047 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.814059 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:06Z","lastTransitionTime":"2025-11-25T07:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.915989 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.916067 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.916086 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.916110 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.916128 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:06Z","lastTransitionTime":"2025-11-25T07:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.961973 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.962021 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:06 crc kubenswrapper[5043]: E1125 07:16:06.962199 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.962256 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:06 crc kubenswrapper[5043]: E1125 07:16:06.962418 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:06 crc kubenswrapper[5043]: E1125 07:16:06.962575 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.979337 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:06 crc kubenswrapper[5043]: I1125 07:16:06.992950 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:06Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.015043 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.019324 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:07 crc 
kubenswrapper[5043]: I1125 07:16:07.019380 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.019400 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.019426 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.019443 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:07Z","lastTransitionTime":"2025-11-25T07:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.033090 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0
d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.059843 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.123714 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.123755 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.123771 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.123791 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 
07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.123807 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:07Z","lastTransitionTime":"2025-11-25T07:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.124860 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.146971 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.158100 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerStarted","Data":"4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57"} Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.158139 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerStarted","Data":"73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f"} Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.159757 5043 
generic.go:334] "Generic (PLEG): container finished" podID="01b1c815-0612-4834-85a8-4662893adcc7" containerID="adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582" exitCode=0 Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.159792 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" event={"ID":"01b1c815-0612-4834-85a8-4662893adcc7","Type":"ContainerDied","Data":"adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582"} Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.163108 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"
resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.180236 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.196727 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.213145 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.223997 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.227638 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.227690 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.227700 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:07 crc 
kubenswrapper[5043]: I1125 07:16:07.227715 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.227725 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:07Z","lastTransitionTime":"2025-11-25T07:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.235957 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.247504 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.262682 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.282337 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.293921 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.304761 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.316343 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.326230 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.329938 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.329963 5043 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.329972 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.329984 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.330013 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:07Z","lastTransitionTime":"2025-11-25T07:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.337359 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.348677 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.359888 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e5627720
77e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.373228 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.392769 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.413914 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.432166 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.432205 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.432214 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:07 crc 
kubenswrapper[5043]: I1125 07:16:07.432229 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.432239 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:07Z","lastTransitionTime":"2025-11-25T07:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.458908 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.498686 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.536065 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.538073 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.538184 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.538255 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.538353 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.538512 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:07Z","lastTransitionTime":"2025-11-25T07:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.584977 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.642144 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.642198 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.642209 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.642231 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.642243 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:07Z","lastTransitionTime":"2025-11-25T07:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.745658 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.746464 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.746678 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.746835 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.746974 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:07Z","lastTransitionTime":"2025-11-25T07:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.851669 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.851796 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.851872 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.851905 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.852069 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:07Z","lastTransitionTime":"2025-11-25T07:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.955363 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.955418 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.955435 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.955498 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:07 crc kubenswrapper[5043]: I1125 07:16:07.955567 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:07Z","lastTransitionTime":"2025-11-25T07:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.057725 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.058048 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.058223 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.058366 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.058497 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:08Z","lastTransitionTime":"2025-11-25T07:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.164212 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.164252 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.164269 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.164290 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.164304 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:08Z","lastTransitionTime":"2025-11-25T07:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.167523 5043 generic.go:334] "Generic (PLEG): container finished" podID="01b1c815-0612-4834-85a8-4662893adcc7" containerID="b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e" exitCode=0 Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.167642 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" event={"ID":"01b1c815-0612-4834-85a8-4662893adcc7","Type":"ContainerDied","Data":"b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e"} Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.188968 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:08Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.207447 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:08Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.221820 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:08Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.232912 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:08Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.257642 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:08Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.269539 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.269663 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.269692 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.269718 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.269741 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:08Z","lastTransitionTime":"2025-11-25T07:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.272458 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:08Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.284360 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:08Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.297402 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:08Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.314330 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:08Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 
07:16:08.332078 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:08Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.344182 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:08Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.354713 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:08Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.365179 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:08Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.371276 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.371307 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.371315 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.371330 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.371363 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:08Z","lastTransitionTime":"2025-11-25T07:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.375852 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:08Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.390254 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"}
,{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:08Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.473728 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:08 crc 
kubenswrapper[5043]: I1125 07:16:08.473812 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.473837 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.473866 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.473893 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:08Z","lastTransitionTime":"2025-11-25T07:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.581971 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.582019 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.582029 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.582045 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.582056 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:08Z","lastTransitionTime":"2025-11-25T07:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.684971 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.685057 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.685080 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.685110 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.685136 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:08Z","lastTransitionTime":"2025-11-25T07:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.788266 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.788309 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.788319 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.788335 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.788347 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:08Z","lastTransitionTime":"2025-11-25T07:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.891238 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.891660 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.891683 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.891706 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.891723 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:08Z","lastTransitionTime":"2025-11-25T07:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.962654 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.962762 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.962672 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:08 crc kubenswrapper[5043]: E1125 07:16:08.962836 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:08 crc kubenswrapper[5043]: E1125 07:16:08.962923 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:08 crc kubenswrapper[5043]: E1125 07:16:08.963008 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.994797 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.994846 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.994865 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.994889 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:08 crc kubenswrapper[5043]: I1125 07:16:08.994909 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:08Z","lastTransitionTime":"2025-11-25T07:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.101154 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.101226 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.101251 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.101284 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.101303 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:09Z","lastTransitionTime":"2025-11-25T07:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.181528 5043 generic.go:334] "Generic (PLEG): container finished" podID="01b1c815-0612-4834-85a8-4662893adcc7" containerID="cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a" exitCode=0 Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.181625 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" event={"ID":"01b1c815-0612-4834-85a8-4662893adcc7","Type":"ContainerDied","Data":"cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a"} Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.189106 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerStarted","Data":"ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31"} Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.200058 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.205104 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.205165 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.205187 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.205216 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.205237 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:09Z","lastTransitionTime":"2025-11-25T07:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.224429 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.252744 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
5-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-r
esources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.303786 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426
feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.307238 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.307287 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.307300 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.307318 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.307330 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:09Z","lastTransitionTime":"2025-11-25T07:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.318431 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller
-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.334986 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.349420 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.361925 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e5627720
77e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.375685 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.389989 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.409636 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.409673 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.409687 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.409704 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.409717 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:09Z","lastTransitionTime":"2025-11-25T07:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.416348 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.429925 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.457936 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.471643 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.484324 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:09Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.512200 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.512229 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.512237 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.512250 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.512259 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:09Z","lastTransitionTime":"2025-11-25T07:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.614669 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.614702 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.614712 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.614725 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.614736 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:09Z","lastTransitionTime":"2025-11-25T07:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.716866 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.716902 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.716912 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.716925 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.716935 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:09Z","lastTransitionTime":"2025-11-25T07:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.820510 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.820542 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.820550 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.820562 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.820571 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:09Z","lastTransitionTime":"2025-11-25T07:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.927092 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.927139 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.927155 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.927169 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:09 crc kubenswrapper[5043]: I1125 07:16:09.927178 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:09Z","lastTransitionTime":"2025-11-25T07:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.030154 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.030269 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.030294 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.030327 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.030349 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:10Z","lastTransitionTime":"2025-11-25T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.133388 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.133452 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.133474 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.133504 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.133528 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:10Z","lastTransitionTime":"2025-11-25T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.236904 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.236973 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.236990 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.237012 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.237030 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:10Z","lastTransitionTime":"2025-11-25T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.339657 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.339725 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.339746 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.339783 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.339801 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:10Z","lastTransitionTime":"2025-11-25T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.442251 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.442302 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.442325 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.442347 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.442360 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:10Z","lastTransitionTime":"2025-11-25T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.544933 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.545046 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.545065 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.545086 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.545097 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:10Z","lastTransitionTime":"2025-11-25T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.648399 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.648466 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.648489 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.648519 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.648540 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:10Z","lastTransitionTime":"2025-11-25T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.764790 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.765313 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.765710 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.765878 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.766007 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:10Z","lastTransitionTime":"2025-11-25T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.868910 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.868936 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.868943 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.868970 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.869004 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:10Z","lastTransitionTime":"2025-11-25T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.962055 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.962133 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.962242 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:10 crc kubenswrapper[5043]: E1125 07:16:10.962231 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:10 crc kubenswrapper[5043]: E1125 07:16:10.962379 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:10 crc kubenswrapper[5043]: E1125 07:16:10.962582 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.971273 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.971327 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.971343 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.971362 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:10 crc kubenswrapper[5043]: I1125 07:16:10.971376 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:10Z","lastTransitionTime":"2025-11-25T07:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.074546 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.074590 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.074641 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.074665 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.074680 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:11Z","lastTransitionTime":"2025-11-25T07:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.179246 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.179291 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.179304 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.179324 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.179339 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:11Z","lastTransitionTime":"2025-11-25T07:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.205786 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerStarted","Data":"f0a3905ba5150fe7d3a9c063d9f27cfbc34d4c2add1dd8e241ebe0e8732a335b"} Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.206321 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.206464 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.206563 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.212519 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" event={"ID":"01b1c815-0612-4834-85a8-4662893adcc7","Type":"ContainerStarted","Data":"88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c"} Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.228294 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.243353 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.256878 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.269870 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0
d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.286268 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.288793 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.288827 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:11 
crc kubenswrapper[5043]: I1125 07:16:11.288838 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.288859 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.288858 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.288875 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:11Z","lastTransitionTime":"2025-11-25T07:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.289948 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.316250 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.331841 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.346414 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.359839 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.371522 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.388448 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a3905ba5150fe7d3a9c063d9f27cfbc34d4c2add1dd8e241ebe0e8732a335b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.390907 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.390958 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.390971 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.390987 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.390998 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:11Z","lastTransitionTime":"2025-11-25T07:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.406652 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.419298 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.430544 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.444028 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.459168 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.474538 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.489544 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.493704 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.493749 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.493762 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.493781 5043 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.493795 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:11Z","lastTransitionTime":"2025-11-25T07:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.506385 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.529095 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a3905ba5150fe7d3a9c063d9f27cfbc34d4c2add1dd8e241ebe0e8732a335b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.542553 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.554533 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.566137 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.597504 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.597534 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.597550 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.597574 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.597591 5043 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:11Z","lastTransitionTime":"2025-11-25T07:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.607464 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4
w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.641429 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687744
1ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9b
e8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.654991 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.668686 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.681057 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.692126 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.699692 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:11 crc 
kubenswrapper[5043]: I1125 07:16:11.699727 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.699737 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.699751 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.699762 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:11Z","lastTransitionTime":"2025-11-25T07:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.703794 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0
d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:11Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.802333 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.802403 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.802423 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:11 crc 
kubenswrapper[5043]: I1125 07:16:11.802449 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.802466 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:11Z","lastTransitionTime":"2025-11-25T07:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.905511 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.905643 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.905667 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.905695 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:11 crc kubenswrapper[5043]: I1125 07:16:11.905716 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:11Z","lastTransitionTime":"2025-11-25T07:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.008327 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.008378 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.008392 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.008423 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.008438 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:12Z","lastTransitionTime":"2025-11-25T07:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.110911 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.110986 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.111007 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.111030 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.111048 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:12Z","lastTransitionTime":"2025-11-25T07:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.214219 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.214287 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.214307 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.214335 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.214357 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:12Z","lastTransitionTime":"2025-11-25T07:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.221059 5043 generic.go:334] "Generic (PLEG): container finished" podID="01b1c815-0612-4834-85a8-4662893adcc7" containerID="88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c" exitCode=0 Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.221149 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" event={"ID":"01b1c815-0612-4834-85a8-4662893adcc7","Type":"ContainerDied","Data":"88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c"} Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.238874 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf
2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:12Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.258192 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0
d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:12Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.279825 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:12Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.316860 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{
\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:12Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.317164 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.317211 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:12 crc 
kubenswrapper[5043]: I1125 07:16:12.317233 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.317263 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.317284 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:12Z","lastTransitionTime":"2025-11-25T07:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.336215 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:12Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.352817 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:12Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.368482 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:12Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.384035 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:12Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.410156 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a3905ba5150fe7d3a9c063d9f27cfbc34d4c2add1dd8e241ebe0e8732a335b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:12Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.419924 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.420052 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.420080 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.420111 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.420134 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:12Z","lastTransitionTime":"2025-11-25T07:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.427900 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:12Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.444210 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:12Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.457581 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:12Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.473281 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:12Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.490512 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:12Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.503990 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:12Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.523268 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.523325 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.523339 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.523366 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.523380 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:12Z","lastTransitionTime":"2025-11-25T07:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.626954 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.627060 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.627082 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.627115 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.627138 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:12Z","lastTransitionTime":"2025-11-25T07:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.694436 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:12 crc kubenswrapper[5043]: E1125 07:16:12.694735 5043 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 07:16:12 crc kubenswrapper[5043]: E1125 07:16:12.694872 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 07:16:28.694837602 +0000 UTC m=+52.863033373 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.731351 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.731410 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.731424 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.731443 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.731455 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:12Z","lastTransitionTime":"2025-11-25T07:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.796008 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.796158 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.796234 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:12 crc kubenswrapper[5043]: E1125 07:16:12.796259 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:16:28.796226591 +0000 UTC m=+52.964422342 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.796309 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:12 crc kubenswrapper[5043]: E1125 07:16:12.796407 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 07:16:12 crc kubenswrapper[5043]: E1125 07:16:12.796438 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 07:16:12 crc kubenswrapper[5043]: E1125 07:16:12.796458 5043 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:16:12 crc kubenswrapper[5043]: E1125 07:16:12.796538 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 
nodeName:}" failed. No retries permitted until 2025-11-25 07:16:28.79651769 +0000 UTC m=+52.964713451 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:16:12 crc kubenswrapper[5043]: E1125 07:16:12.796710 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 07:16:12 crc kubenswrapper[5043]: E1125 07:16:12.796815 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 07:16:12 crc kubenswrapper[5043]: E1125 07:16:12.796844 5043 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:16:12 crc kubenswrapper[5043]: E1125 07:16:12.796718 5043 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 07:16:12 crc kubenswrapper[5043]: E1125 07:16:12.796981 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-11-25 07:16:28.796943242 +0000 UTC m=+52.965139003 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:16:12 crc kubenswrapper[5043]: E1125 07:16:12.797265 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 07:16:28.797176318 +0000 UTC m=+52.965372079 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.834745 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.834798 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.834815 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.834839 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.834858 5043 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:12Z","lastTransitionTime":"2025-11-25T07:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.937764 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.937840 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.937859 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.937886 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.937905 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:12Z","lastTransitionTime":"2025-11-25T07:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.962803 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.962924 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:12 crc kubenswrapper[5043]: E1125 07:16:12.962988 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:12 crc kubenswrapper[5043]: E1125 07:16:12.963183 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:12 crc kubenswrapper[5043]: I1125 07:16:12.963398 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:12 crc kubenswrapper[5043]: E1125 07:16:12.963556 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.041653 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.042103 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.042243 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.042340 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.042421 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:13Z","lastTransitionTime":"2025-11-25T07:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.145959 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.146002 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.146012 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.146030 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.146044 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:13Z","lastTransitionTime":"2025-11-25T07:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.202679 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.202747 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.202763 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.202781 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.202792 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:13Z","lastTransitionTime":"2025-11-25T07:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:13 crc kubenswrapper[5043]: E1125 07:16:13.214476 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:13Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.219095 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.219159 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.219177 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.219204 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.219222 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:13Z","lastTransitionTime":"2025-11-25T07:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.227218 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" event={"ID":"01b1c815-0612-4834-85a8-4662893adcc7","Type":"ContainerStarted","Data":"9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b"} Nov 25 07:16:13 crc kubenswrapper[5043]: E1125 07:16:13.234947 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"message\\\":\\\"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redha
t-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc
4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\
"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":4488870
27}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:13Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.240009 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.240047 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.240059 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.240076 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.240088 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:13Z","lastTransitionTime":"2025-11-25T07:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.242330 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:13Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:13 crc kubenswrapper[5043]: E1125 07:16:13.253100 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:13Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.258158 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:13Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.258726 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.259138 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.259148 5043 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.259161 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.259173 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:13Z","lastTransitionTime":"2025-11-25T07:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:13 crc kubenswrapper[5043]: E1125 07:16:13.274663 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:13Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.280110 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.280178 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.280200 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.280229 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.280253 5043 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:13Z","lastTransitionTime":"2025-11-25T07:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.284239 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:13Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:13 crc kubenswrapper[5043]: E1125 07:16:13.293579 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:13Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:13 crc kubenswrapper[5043]: E1125 07:16:13.293849 5043 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.298972 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.299012 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.299022 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.299041 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.298981 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426
feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:13Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.299051 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:13Z","lastTransitionTime":"2025-11-25T07:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.314328 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc07289
3abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:13Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.330821 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:13Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.349227 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:13Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.367195 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e5627720
77e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:13Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.387695 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:13Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.401747 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.401797 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.401809 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.401826 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.401838 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:13Z","lastTransitionTime":"2025-11-25T07:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.407775 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:13Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.429205 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:13Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.447218 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:13Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.476686 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a3905ba5150fe7d3a9c063d9f27cfbc34d4c2add1dd8e241ebe0e8732a335b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:13Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.493110 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:13Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.505321 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.505392 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.505433 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.505466 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.505503 5043 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:13Z","lastTransitionTime":"2025-11-25T07:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.508154 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:13Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.607815 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.607858 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.607869 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.607886 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.607898 5043 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:13Z","lastTransitionTime":"2025-11-25T07:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.710341 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.710387 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.710398 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.710414 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.710425 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:13Z","lastTransitionTime":"2025-11-25T07:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.813642 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.813713 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.813739 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.813768 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.813793 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:13Z","lastTransitionTime":"2025-11-25T07:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.916818 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.916882 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.916901 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.916926 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:13 crc kubenswrapper[5043]: I1125 07:16:13.916943 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:13Z","lastTransitionTime":"2025-11-25T07:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.019135 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.019204 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.019223 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.019253 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.019275 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:14Z","lastTransitionTime":"2025-11-25T07:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.121839 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.121923 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.121956 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.121992 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.122013 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:14Z","lastTransitionTime":"2025-11-25T07:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.224985 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.225046 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.225059 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.225083 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.225098 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:14Z","lastTransitionTime":"2025-11-25T07:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.237966 5043 generic.go:334] "Generic (PLEG): container finished" podID="01b1c815-0612-4834-85a8-4662893adcc7" containerID="9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b" exitCode=0 Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.238035 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" event={"ID":"01b1c815-0612-4834-85a8-4662893adcc7","Type":"ContainerDied","Data":"9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b"} Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.260312 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf
2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:14Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.287366 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:14Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.312255 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:14Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.326716 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:14Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.327924 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.327960 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.327971 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.327987 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.327998 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:14Z","lastTransitionTime":"2025-11-25T07:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.339302 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc07289
3abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:14Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.352463 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:14Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.368110 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:14Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.378627 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e5627720
77e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:14Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.389440 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:14Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.402804 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:14Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.414385 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:14Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.427905 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:14Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.429811 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.429887 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.429912 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.429942 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.429965 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:14Z","lastTransitionTime":"2025-11-25T07:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.451989 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a3905ba5150fe7d3a9c063d9f27cfbc34d4c2add1dd8e241ebe0e8732a335b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:14Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.467122 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:14Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.478069 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:14Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.531705 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.531773 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.531795 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.531822 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.531842 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:14Z","lastTransitionTime":"2025-11-25T07:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.635128 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.635502 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.635528 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.635559 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.635699 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:14Z","lastTransitionTime":"2025-11-25T07:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.738134 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.738198 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.738214 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.738238 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.738255 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:14Z","lastTransitionTime":"2025-11-25T07:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.840556 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.840627 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.840643 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.840661 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.840674 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:14Z","lastTransitionTime":"2025-11-25T07:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.943070 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.943121 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.943169 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.943188 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.943202 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:14Z","lastTransitionTime":"2025-11-25T07:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.962666 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.962733 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:14 crc kubenswrapper[5043]: I1125 07:16:14.962891 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:14 crc kubenswrapper[5043]: E1125 07:16:14.962882 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:14 crc kubenswrapper[5043]: E1125 07:16:14.963157 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:14 crc kubenswrapper[5043]: E1125 07:16:14.963408 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.047021 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.047088 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.047107 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.047133 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.047151 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:15Z","lastTransitionTime":"2025-11-25T07:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.149017 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.149059 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.149070 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.149087 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.149098 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:15Z","lastTransitionTime":"2025-11-25T07:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.248397 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" event={"ID":"01b1c815-0612-4834-85a8-4662893adcc7","Type":"ContainerStarted","Data":"90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24"} Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.251221 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.251275 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.251315 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.251339 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.251354 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:15Z","lastTransitionTime":"2025-11-25T07:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.265306 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.279915 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.290138 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e5627720
77e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.308521 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a
5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.328976 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd3935
3da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.344870 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426
feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.353113 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.353146 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.353154 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.353168 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.353176 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:15Z","lastTransitionTime":"2025-11-25T07:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.357949 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller
-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.383632 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.399666 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.417932 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a3905ba5150fe7d3a9c063d9f27cfbc34d4c2add1dd8e241ebe0e8732a335b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.429241 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.441677 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.454504 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.455596 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.455668 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.455683 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.455703 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.455718 5043 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:15Z","lastTransitionTime":"2025-11-25T07:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.468444 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.480428 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.558412 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.558475 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.558495 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.558523 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.558544 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:15Z","lastTransitionTime":"2025-11-25T07:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.661512 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.661552 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.661564 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.661581 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.661592 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:15Z","lastTransitionTime":"2025-11-25T07:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.769418 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.769500 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.769521 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.769690 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.769738 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:15Z","lastTransitionTime":"2025-11-25T07:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.773241 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b"] Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.774010 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.776920 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.777489 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.798235 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445
c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.815527 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce932f2-a1f0-4e68-8116-462d043d6a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t545b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.853158 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d
738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.871691 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426
feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.873345 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.873408 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.873421 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.873446 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.873464 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:15Z","lastTransitionTime":"2025-11-25T07:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.893752 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller
-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.907295 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.919317 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.929273 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e5627720
77e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.930105 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5ce932f2-a1f0-4e68-8116-462d043d6a4f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-t545b\" (UID: \"5ce932f2-a1f0-4e68-8116-462d043d6a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.930212 5043 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5ce932f2-a1f0-4e68-8116-462d043d6a4f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-t545b\" (UID: \"5ce932f2-a1f0-4e68-8116-462d043d6a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.930241 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqj7j\" (UniqueName: \"kubernetes.io/projected/5ce932f2-a1f0-4e68-8116-462d043d6a4f-kube-api-access-rqj7j\") pod \"ovnkube-control-plane-749d76644c-t545b\" (UID: \"5ce932f2-a1f0-4e68-8116-462d043d6a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.930275 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5ce932f2-a1f0-4e68-8116-462d043d6a4f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-t545b\" (UID: \"5ce932f2-a1f0-4e68-8116-462d043d6a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.945001 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a
5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.955592 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.967859 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.976678 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.976742 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.976763 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.976789 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.976810 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:15Z","lastTransitionTime":"2025-11-25T07:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.981240 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:15 crc kubenswrapper[5043]: I1125 07:16:15.990690 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.008405 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a3905ba5150fe7d3a9c063d9f27cfbc34d4c2add1dd8e241ebe0e8732a335b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:16Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.021928 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:16Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.031350 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5ce932f2-a1f0-4e68-8116-462d043d6a4f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-t545b\" (UID: \"5ce932f2-a1f0-4e68-8116-462d043d6a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.031440 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5ce932f2-a1f0-4e68-8116-462d043d6a4f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-t545b\" (UID: \"5ce932f2-a1f0-4e68-8116-462d043d6a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.031469 5043 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqj7j\" (UniqueName: \"kubernetes.io/projected/5ce932f2-a1f0-4e68-8116-462d043d6a4f-kube-api-access-rqj7j\") pod \"ovnkube-control-plane-749d76644c-t545b\" (UID: \"5ce932f2-a1f0-4e68-8116-462d043d6a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.031504 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5ce932f2-a1f0-4e68-8116-462d043d6a4f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-t545b\" (UID: \"5ce932f2-a1f0-4e68-8116-462d043d6a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.032063 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5ce932f2-a1f0-4e68-8116-462d043d6a4f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-t545b\" (UID: \"5ce932f2-a1f0-4e68-8116-462d043d6a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.032227 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5ce932f2-a1f0-4e68-8116-462d043d6a4f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-t545b\" (UID: \"5ce932f2-a1f0-4e68-8116-462d043d6a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.034755 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:16Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.037079 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5ce932f2-a1f0-4e68-8116-462d043d6a4f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-t545b\" (UID: \"5ce932f2-a1f0-4e68-8116-462d043d6a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.047552 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqj7j\" (UniqueName: \"kubernetes.io/projected/5ce932f2-a1f0-4e68-8116-462d043d6a4f-kube-api-access-rqj7j\") pod \"ovnkube-control-plane-749d76644c-t545b\" (UID: \"5ce932f2-a1f0-4e68-8116-462d043d6a4f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.078747 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.078782 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.078792 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.078807 5043 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.078817 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:16Z","lastTransitionTime":"2025-11-25T07:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.097302 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" Nov 25 07:16:16 crc kubenswrapper[5043]: W1125 07:16:16.113907 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ce932f2_a1f0_4e68_8116_462d043d6a4f.slice/crio-8e79c6b24ed993b77e254ac7bf4fe5a7a5ef4cbad94d9421d80e0ed21f553102 WatchSource:0}: Error finding container 8e79c6b24ed993b77e254ac7bf4fe5a7a5ef4cbad94d9421d80e0ed21f553102: Status 404 returned error can't find the container with id 8e79c6b24ed993b77e254ac7bf4fe5a7a5ef4cbad94d9421d80e0ed21f553102 Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.181376 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.181419 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.181430 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.181442 5043 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.181451 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:16Z","lastTransitionTime":"2025-11-25T07:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.255472 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" event={"ID":"5ce932f2-a1f0-4e68-8116-462d043d6a4f","Type":"ContainerStarted","Data":"8e79c6b24ed993b77e254ac7bf4fe5a7a5ef4cbad94d9421d80e0ed21f553102"} Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.284504 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.284529 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.284537 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.284550 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.284560 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:16Z","lastTransitionTime":"2025-11-25T07:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.387339 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.387373 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.387383 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.387397 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.387416 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:16Z","lastTransitionTime":"2025-11-25T07:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.490946 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.491008 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.491025 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.491053 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.491071 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:16Z","lastTransitionTime":"2025-11-25T07:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.593998 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.594038 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.594049 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.594066 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.594078 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:16Z","lastTransitionTime":"2025-11-25T07:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.697804 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.697891 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.697915 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.697948 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.697971 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:16Z","lastTransitionTime":"2025-11-25T07:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.800955 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.801422 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.801443 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.801472 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.801490 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:16Z","lastTransitionTime":"2025-11-25T07:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.905025 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.905088 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.905101 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.905129 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.905145 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:16Z","lastTransitionTime":"2025-11-25T07:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.962056 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.962115 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.962283 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:16 crc kubenswrapper[5043]: E1125 07:16:16.962269 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:16 crc kubenswrapper[5043]: E1125 07:16:16.962635 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:16 crc kubenswrapper[5043]: E1125 07:16:16.963302 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.965297 5043 scope.go:117] "RemoveContainer" containerID="e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054" Nov 25 07:16:16 crc kubenswrapper[5043]: I1125 07:16:16.985933 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:16Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.002732 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce932f2-a1f0-4e68-8116-462d043d6a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t545b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.009030 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.009131 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.009154 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.009218 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.009239 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:17Z","lastTransitionTime":"2025-11-25T07:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.018330 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.034442 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a
5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.067537 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd3935
3da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.085436 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426
feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.102971 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.112889 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.112951 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.112973 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.113004 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.113027 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:17Z","lastTransitionTime":"2025-11-25T07:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.121569 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.141996 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.166123 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a3905ba5150fe7d3a9c063d9f27cfbc34d4c2add1dd8e241ebe0e8732a335b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.183338 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.201378 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.218078 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.218967 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.219033 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.219056 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.219085 5043 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.219106 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:17Z","lastTransitionTime":"2025-11-25T07:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.235746 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.253308 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.266124 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.268100 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6"} Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.270062 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" event={"ID":"5ce932f2-a1f0-4e68-8116-462d043d6a4f","Type":"ContainerStarted","Data":"5c7ec2e20e49766633390410a5fa037a78b4acba719fb663c16e5f99a68842f2"} Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.270168 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" event={"ID":"5ce932f2-a1f0-4e68-8116-462d043d6a4f","Type":"ContainerStarted","Data":"40de1da3d89294cff4345cbf5cc2a3a08276c1e1a462c4515c78e9bc3123f277"} Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.273323 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.273386 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5zz6_a8785a4c-82ff-4a78-83a0-463e977df530/ovnkube-controller/0.log" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.275996 5043 generic.go:334] "Generic (PLEG): container finished" podID="a8785a4c-82ff-4a78-83a0-463e977df530" containerID="f0a3905ba5150fe7d3a9c063d9f27cfbc34d4c2add1dd8e241ebe0e8732a335b" exitCode=1 Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.276061 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerDied","Data":"f0a3905ba5150fe7d3a9c063d9f27cfbc34d4c2add1dd8e241ebe0e8732a335b"} Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.276968 5043 scope.go:117] "RemoveContainer" containerID="f0a3905ba5150fe7d3a9c063d9f27cfbc34d4c2add1dd8e241ebe0e8732a335b" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.304210 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a3905ba5150fe7d3a9c063d9f27cfbc34d4c2add1dd8e241ebe0e8732a335b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.318940 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.321554 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.321614 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.321629 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.321647 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.321659 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:17Z","lastTransitionTime":"2025-11-25T07:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.336905 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.353915 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.365682 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.378529 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.387784 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.398746 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.409060 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce932f2-a1f0-4e68-8116-462d043d6a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40de1da3d89294cff4345cbf5cc2a3a08276c1e1a462c4515c78e9bc3123f277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7ec2e20e49766633390410a5fa037a78b4a
cba719fb663c16e5f99a68842f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t545b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.418487 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0
d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.423661 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.423689 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.423696 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:17 crc 
kubenswrapper[5043]: I1125 07:16:17.423709 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.423720 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:17Z","lastTransitionTime":"2025-11-25T07:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.431998 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f
acf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.453463 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.468776 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.481620 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.494971 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.509353 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.519727 5043 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.526008 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.526029 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.526040 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.526054 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.526066 5043 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:17Z","lastTransitionTime":"2025-11-25T07:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.528688 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.536794 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.545084 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce932f2-a1f0-4e68-8116-462d043d6a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40de1da3d89294cff4345cbf5cc2a3a08276c1e1a462c4515c78e9bc3123f277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7ec2e20e49766633390410a5fa037a78b4acba719fb663c16e5f99a68842f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t545b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.554894 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.567355 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.577912 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e5627720
77e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.591075 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a
5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.609179 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd3935
3da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.624954 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426
feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.628615 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.628640 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.628648 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.628662 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.628673 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:17Z","lastTransitionTime":"2025-11-25T07:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.644180 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller
-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.661686 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.677367 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.679909 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-xqj4m"] Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.680310 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:17 crc kubenswrapper[5043]: E1125 07:16:17.680360 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.706690 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a3905ba5150fe7d3a9c063d9f27cfbc34d4c2add1dd8e241ebe0e8732a335b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a3905ba5150fe7d3a9c063d9f27cfbc34d4c2add1dd8e241ebe0e8732a335b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07
:16:16Z\\\",\\\"message\\\":\\\".AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 07:16:16.257714 6353 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 07:16:16.257820 6353 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 07:16:16.257879 6353 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 07:16:16.257941 6353 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 07:16:16.258187 6353 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 07:16:16.258208 6353 factory.go:656] Stopping watch factory\\\\nI1125 07:16:16.258230 6353 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 07:16:16.258211 6353 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-
overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termina
ted\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.727063 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.730892 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.730918 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.730925 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.730939 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.730947 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:17Z","lastTransitionTime":"2025-11-25T07:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.740752 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.748467 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbw7r\" (UniqueName: \"kubernetes.io/projected/e26eab68-d56e-4c83-9888-0a866e549524-kube-api-access-vbw7r\") pod \"network-metrics-daemon-xqj4m\" (UID: \"e26eab68-d56e-4c83-9888-0a866e549524\") " pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.748510 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs\") pod \"network-metrics-daemon-xqj4m\" (UID: \"e26eab68-d56e-4c83-9888-0a866e549524\") " pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.752361 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.793100 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.804696 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce932f2-a1f0-4e68-8116-462d043d6a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40de1da3d89294cff4345cbf5cc2a3a08276c1e1a462c4515c78e9bc3123f277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7ec2e20e49766633390410a5fa037a78b4acba719fb663c16e5f99a68842f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t545b\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.816904 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e26eab68-d56e-4c83-9888-0a866e549524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc 
kubenswrapper[5043]: I1125 07:16:17.827480 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.832904 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.832957 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.832966 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.832981 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.832991 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:17Z","lastTransitionTime":"2025-11-25T07:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.841890 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc07289
3abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.849050 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbw7r\" (UniqueName: \"kubernetes.io/projected/e26eab68-d56e-4c83-9888-0a866e549524-kube-api-access-vbw7r\") pod \"network-metrics-daemon-xqj4m\" (UID: \"e26eab68-d56e-4c83-9888-0a866e549524\") " pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.849098 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs\") pod \"network-metrics-daemon-xqj4m\" (UID: \"e26eab68-d56e-4c83-9888-0a866e549524\") " pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:17 crc 
kubenswrapper[5043]: E1125 07:16:17.849179 5043 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 07:16:17 crc kubenswrapper[5043]: E1125 07:16:17.849222 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs podName:e26eab68-d56e-4c83-9888-0a866e549524 nodeName:}" failed. No retries permitted until 2025-11-25 07:16:18.349208653 +0000 UTC m=+42.517404374 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs") pod "network-metrics-daemon-xqj4m" (UID: "e26eab68-d56e-4c83-9888-0a866e549524") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.857882 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.868404 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbw7r\" (UniqueName: \"kubernetes.io/projected/e26eab68-d56e-4c83-9888-0a866e549524-kube-api-access-vbw7r\") pod \"network-metrics-daemon-xqj4m\" (UID: \"e26eab68-d56e-4c83-9888-0a866e549524\") " pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.870891 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin
\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.882111 5043 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.894584 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a
5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.913642 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd3935
3da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.926139 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426
feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.935012 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.935039 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.935047 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.935060 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.935070 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:17Z","lastTransitionTime":"2025-11-25T07:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.938926 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\
" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.953197 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.966224 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.986142 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a3905ba5150fe7d3a9c063d9f27cfbc34d4c2add1dd8e241ebe0e8732a335b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a3905ba5150fe7d3a9c063d9f27cfbc34d4c2add1dd8e241ebe0e8732a335b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:16Z\\\",\\\"message\\\":\\\".AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 07:16:16.257714 6353 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) 
from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 07:16:16.257820 6353 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 07:16:16.257879 6353 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 07:16:16.257941 6353 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 07:16:16.258187 6353 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 07:16:16.258208 6353 factory.go:656] Stopping watch factory\\\\nI1125 07:16:16.258230 6353 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 07:16:16.258211 6353 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-
overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termina
ted\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:17 crc kubenswrapper[5043]: I1125 07:16:17.998711 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.037832 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.037868 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.037877 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.037890 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.037900 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:18Z","lastTransitionTime":"2025-11-25T07:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.139624 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.139655 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.139663 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.139676 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.139685 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:18Z","lastTransitionTime":"2025-11-25T07:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.241929 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.241971 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.241982 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.241996 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.242005 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:18Z","lastTransitionTime":"2025-11-25T07:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.283482 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5zz6_a8785a4c-82ff-4a78-83a0-463e977df530/ovnkube-controller/1.log" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.284564 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5zz6_a8785a4c-82ff-4a78-83a0-463e977df530/ovnkube-controller/0.log" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.287416 5043 generic.go:334] "Generic (PLEG): container finished" podID="a8785a4c-82ff-4a78-83a0-463e977df530" containerID="76f06fde9cb69633c51800c9c3698402db4f9024b9b033be018380c90bd10d53" exitCode=1 Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.287506 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerDied","Data":"76f06fde9cb69633c51800c9c3698402db4f9024b9b033be018380c90bd10d53"} Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.287581 5043 scope.go:117] "RemoveContainer" containerID="f0a3905ba5150fe7d3a9c063d9f27cfbc34d4c2add1dd8e241ebe0e8732a335b" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.288000 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.288741 5043 scope.go:117] "RemoveContainer" containerID="76f06fde9cb69633c51800c9c3698402db4f9024b9b033be018380c90bd10d53" Nov 25 07:16:18 crc kubenswrapper[5043]: E1125 07:16:18.288945 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-m5zz6_openshift-ovn-kubernetes(a8785a4c-82ff-4a78-83a0-463e977df530)\"" 
pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.300567 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce932f2-a1f0-4e68-8116-462d043d6a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40de1da3d89294cff4345cbf5cc2a3a08276c1e1a462c4515c78e9bc3123f277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7ec2e20e49766633390410a5fa037a78b4acba719fb663c16e5f99a68842f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t545b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.310507 5043 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-xqj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e26eab68-d56e-4c83-9888-0a866e549524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc 
kubenswrapper[5043]: I1125 07:16:18.320148 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.339359 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-
25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.343709 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 
07:16:18.343758 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.343776 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.343801 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.343817 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:18Z","lastTransitionTime":"2025-11-25T07:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.352751 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.354130 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs\") pod \"network-metrics-daemon-xqj4m\" (UID: \"e26eab68-d56e-4c83-9888-0a866e549524\") " pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:18 crc kubenswrapper[5043]: E1125 07:16:18.354574 5043 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object 
"openshift-multus"/"metrics-daemon-secret" not registered Nov 25 07:16:18 crc kubenswrapper[5043]: E1125 07:16:18.354679 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs podName:e26eab68-d56e-4c83-9888-0a866e549524 nodeName:}" failed. No retries permitted until 2025-11-25 07:16:19.354652883 +0000 UTC m=+43.522848614 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs") pod "network-metrics-daemon-xqj4m" (UID: "e26eab68-d56e-4c83-9888-0a866e549524") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.365933 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.377076 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066
df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for 
pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.391250 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-mul
tus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a16
88df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"C
ompleted\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.412885 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.425449 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.437090 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.446023 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.446063 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.446077 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.446092 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.446104 5043 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:18Z","lastTransitionTime":"2025-11-25T07:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.454302 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.466580 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.493587 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0a3905ba5150fe7d3a9c063d9f27cfbc34d4c2add1dd8e241ebe0e8732a335b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a3905ba5150fe7d3a9c063d9f27cfbc34d4c2add1dd8e241ebe0e8732a335b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:16Z\\\",\\\"message\\\":\\\".AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 07:16:16.257714 6353 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) 
from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 07:16:16.257820 6353 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 07:16:16.257879 6353 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 07:16:16.257941 6353 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 07:16:16.258187 6353 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 07:16:16.258208 6353 factory.go:656] Stopping watch factory\\\\nI1125 07:16:16.258230 6353 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 07:16:16.258211 6353 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-
overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termina
ted\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.507772 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.519180 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.528905 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.539101 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.548527 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.548569 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.548590 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.548629 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.548641 5043 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:18Z","lastTransitionTime":"2025-11-25T07:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.549497 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.560224 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce932f2-a1f0-4e68-8116-462d043d6a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40de1da3d89294cff4345cbf5cc2a3a08276c1e1a462c4515c78e9bc3123f277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7ec2e20e49766633390410a5fa037a78b4a
cba719fb663c16e5f99a68842f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t545b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.570747 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqj4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e26eab68-d56e-4c83-9888-0a866e549524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc 
kubenswrapper[5043]: I1125 07:16:18.579010 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.590048 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-
25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.601518 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.612936 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.631295 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e5627720
77e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.650539 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.650579 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.650589 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 
25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.650644 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.650659 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:18Z","lastTransitionTime":"2025-11-25T07:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.673892 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\
":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.723041 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.753406 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.753478 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.753490 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.753553 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.753574 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:18Z","lastTransitionTime":"2025-11-25T07:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.756687 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.798975 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.844261 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.856765 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.856829 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.856852 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.856875 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.856888 5043 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:18Z","lastTransitionTime":"2025-11-25T07:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.874042 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.918187 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f06fde9cb69633c51800c9c3698402db4f9024b9b033be018380c90bd10d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a3905ba5150fe7d3a9c063d9f27cfbc34d4c2add1dd8e241ebe0e8732a335b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:16Z\\\",\\\"message\\\":\\\".AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 07:16:16.257714 6353 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 07:16:16.257820 6353 
reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 07:16:16.257879 6353 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 07:16:16.257941 6353 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 07:16:16.258187 6353 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 07:16:16.258208 6353 factory.go:656] Stopping watch factory\\\\nI1125 07:16:16.258230 6353 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 07:16:16.258211 6353 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76f06fde9cb69633c51800c9c3698402db4f9024b9b033be018380c90bd10d53\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:18Z\\\",\\\"message\\\":\\\"d: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z]\\\\nI1125 07:16:18.106906 6602 services_controller.go:360] Finished syncing service ovn-kubernetes-node on namespace openshift-ovn-kubernetes for network=default : 80.242µs\\\\nI1125 07:16:18.106838 6602 loadbalancer.go:304] Deleted 0 
stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}\\\\nI1125 07:16:18.106911 6602 services_controller.go:443] Built service openshift-machine-api/machine-api-controllers LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8441, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, int\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"h
ost-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.956147 
5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.961117 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.961189 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.961210 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.961294 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.961315 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:18Z","lastTransitionTime":"2025-11-25T07:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.961784 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.961793 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:18 crc kubenswrapper[5043]: E1125 07:16:18.961885 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:18 crc kubenswrapper[5043]: I1125 07:16:18.961960 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:18 crc kubenswrapper[5043]: E1125 07:16:18.962075 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:18 crc kubenswrapper[5043]: E1125 07:16:18.962172 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.064557 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.064656 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.064682 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.064713 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.064736 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:19Z","lastTransitionTime":"2025-11-25T07:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.167642 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.167701 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.167714 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.167733 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.167748 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:19Z","lastTransitionTime":"2025-11-25T07:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.270713 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.270757 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.270769 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.270785 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.270795 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:19Z","lastTransitionTime":"2025-11-25T07:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.292822 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5zz6_a8785a4c-82ff-4a78-83a0-463e977df530/ovnkube-controller/1.log" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.366081 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs\") pod \"network-metrics-daemon-xqj4m\" (UID: \"e26eab68-d56e-4c83-9888-0a866e549524\") " pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:19 crc kubenswrapper[5043]: E1125 07:16:19.366334 5043 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 07:16:19 crc kubenswrapper[5043]: E1125 07:16:19.366683 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs podName:e26eab68-d56e-4c83-9888-0a866e549524 nodeName:}" failed. No retries permitted until 2025-11-25 07:16:21.366645242 +0000 UTC m=+45.534841003 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs") pod "network-metrics-daemon-xqj4m" (UID: "e26eab68-d56e-4c83-9888-0a866e549524") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.374247 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.374313 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.374327 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.374348 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.374361 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:19Z","lastTransitionTime":"2025-11-25T07:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.477952 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.478003 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.478017 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.478036 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.478047 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:19Z","lastTransitionTime":"2025-11-25T07:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.580873 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.580932 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.580956 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.580983 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.581006 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:19Z","lastTransitionTime":"2025-11-25T07:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.684514 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.684588 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.684717 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.684755 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.684778 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:19Z","lastTransitionTime":"2025-11-25T07:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.788109 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.788165 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.788181 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.788204 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.788221 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:19Z","lastTransitionTime":"2025-11-25T07:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.891953 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.892001 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.892013 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.892032 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.892044 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:19Z","lastTransitionTime":"2025-11-25T07:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.962261 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:19 crc kubenswrapper[5043]: E1125 07:16:19.962513 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.994876 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.994939 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.994956 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.994980 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:19 crc kubenswrapper[5043]: I1125 07:16:19.994997 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:19Z","lastTransitionTime":"2025-11-25T07:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.098475 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.098530 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.098548 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.098574 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.098594 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:20Z","lastTransitionTime":"2025-11-25T07:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.201886 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.201945 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.201962 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.201989 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.202007 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:20Z","lastTransitionTime":"2025-11-25T07:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.305057 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.305134 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.305152 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.305173 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.305191 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:20Z","lastTransitionTime":"2025-11-25T07:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.407959 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.408017 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.408033 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.408056 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.408074 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:20Z","lastTransitionTime":"2025-11-25T07:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.511118 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.511190 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.511208 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.511233 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.511252 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:20Z","lastTransitionTime":"2025-11-25T07:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.614418 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.614506 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.614522 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.614552 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.614569 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:20Z","lastTransitionTime":"2025-11-25T07:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.718062 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.718131 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.718145 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.718168 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.718183 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:20Z","lastTransitionTime":"2025-11-25T07:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.821644 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.821709 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.821733 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.821762 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.821783 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:20Z","lastTransitionTime":"2025-11-25T07:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.925420 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.925491 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.925522 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.925542 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.925553 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:20Z","lastTransitionTime":"2025-11-25T07:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.962392 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.962408 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 07:16:20 crc kubenswrapper[5043]: E1125 07:16:20.962725 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 07:16:20 crc kubenswrapper[5043]: I1125 07:16:20.962405 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 07:16:20 crc kubenswrapper[5043]: E1125 07:16:20.962904 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 07:16:20 crc kubenswrapper[5043]: E1125 07:16:20.963087 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.027942 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.028000 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.028013 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.028032 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.028045 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:21Z","lastTransitionTime":"2025-11-25T07:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.131591 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.132112 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.132301 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.132486 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.132729 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:21Z","lastTransitionTime":"2025-11-25T07:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.236225 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.236931 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.237050 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.237189 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.237263 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:21Z","lastTransitionTime":"2025-11-25T07:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.339444 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.339484 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.339497 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.339511 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.339520 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:21Z","lastTransitionTime":"2025-11-25T07:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.389508 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs\") pod \"network-metrics-daemon-xqj4m\" (UID: \"e26eab68-d56e-4c83-9888-0a866e549524\") " pod="openshift-multus/network-metrics-daemon-xqj4m"
Nov 25 07:16:21 crc kubenswrapper[5043]: E1125 07:16:21.389685 5043 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 25 07:16:21 crc kubenswrapper[5043]: E1125 07:16:21.389739 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs podName:e26eab68-d56e-4c83-9888-0a866e549524 nodeName:}" failed. No retries permitted until 2025-11-25 07:16:25.389724705 +0000 UTC m=+49.557920426 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs") pod "network-metrics-daemon-xqj4m" (UID: "e26eab68-d56e-4c83-9888-0a866e549524") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.442096 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.442134 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.442145 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.442163 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.442176 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:21Z","lastTransitionTime":"2025-11-25T07:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.545259 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.545313 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.545325 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.545342 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.545354 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:21Z","lastTransitionTime":"2025-11-25T07:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.648822 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.648910 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.648944 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.648981 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.649003 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:21Z","lastTransitionTime":"2025-11-25T07:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.752333 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.752385 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.752402 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.752428 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.752453 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:21Z","lastTransitionTime":"2025-11-25T07:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.855325 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.855390 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.855405 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.855455 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.855473 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:21Z","lastTransitionTime":"2025-11-25T07:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.957940 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.958019 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.958032 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.958049 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.958061 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:21Z","lastTransitionTime":"2025-11-25T07:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:16:21 crc kubenswrapper[5043]: I1125 07:16:21.962451 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m"
Nov 25 07:16:21 crc kubenswrapper[5043]: E1125 07:16:21.962767 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.060978 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.061047 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.061064 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.061093 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.061109 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:22Z","lastTransitionTime":"2025-11-25T07:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.163779 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.163834 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.163845 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.163861 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.163872 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:22Z","lastTransitionTime":"2025-11-25T07:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.266631 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.266684 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.266698 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.266717 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.266732 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:22Z","lastTransitionTime":"2025-11-25T07:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.369933 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.370002 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.370020 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.370043 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.370061 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:22Z","lastTransitionTime":"2025-11-25T07:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.473251 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.473304 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.473318 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.473337 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.473352 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:22Z","lastTransitionTime":"2025-11-25T07:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.576481 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.576559 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.576591 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.576932 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.576965 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:22Z","lastTransitionTime":"2025-11-25T07:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.680964 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.681023 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.681041 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.681075 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.681092 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:22Z","lastTransitionTime":"2025-11-25T07:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.784061 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.784113 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.784130 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.784152 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.784170 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:22Z","lastTransitionTime":"2025-11-25T07:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.887474 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.887568 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.887596 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.887676 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.887702 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:22Z","lastTransitionTime":"2025-11-25T07:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.961821 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.961845 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.961867 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 07:16:22 crc kubenswrapper[5043]: E1125 07:16:22.962069 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 07:16:22 crc kubenswrapper[5043]: E1125 07:16:22.962194 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 07:16:22 crc kubenswrapper[5043]: E1125 07:16:22.962331 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.991725 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.991804 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.991821 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.991846 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:22 crc kubenswrapper[5043]: I1125 07:16:22.991865 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:22Z","lastTransitionTime":"2025-11-25T07:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.095244 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.095304 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.095321 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.095345 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.095362 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:23Z","lastTransitionTime":"2025-11-25T07:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.198707 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.198774 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.198791 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.198815 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.198832 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:23Z","lastTransitionTime":"2025-11-25T07:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.301230 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.301268 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.301282 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.301305 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.301325 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:23Z","lastTransitionTime":"2025-11-25T07:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.404664 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.404726 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.404755 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.404785 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.404807 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:23Z","lastTransitionTime":"2025-11-25T07:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.461825 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.461908 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.461932 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.461964 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.461986 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:23Z","lastTransitionTime":"2025-11-25T07:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:23 crc kubenswrapper[5043]: E1125 07:16:23.485675 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:23Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.491271 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.491335 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.491348 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.491373 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.491389 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:23Z","lastTransitionTime":"2025-11-25T07:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:23 crc kubenswrapper[5043]: E1125 07:16:23.513573 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:23Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.519076 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.519122 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.519143 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.519161 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.519175 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:23Z","lastTransitionTime":"2025-11-25T07:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:23 crc kubenswrapper[5043]: E1125 07:16:23.539871 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:23Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.544935 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.544970 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.544988 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.545011 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.545030 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:23Z","lastTransitionTime":"2025-11-25T07:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:23 crc kubenswrapper[5043]: E1125 07:16:23.566148 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:23Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.571590 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.571741 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.571767 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.571798 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.571821 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:23Z","lastTransitionTime":"2025-11-25T07:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:23 crc kubenswrapper[5043]: E1125 07:16:23.588534 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:23Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:23 crc kubenswrapper[5043]: E1125 07:16:23.588881 5043 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.590507 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.590561 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.590579 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.590629 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.590648 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:23Z","lastTransitionTime":"2025-11-25T07:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.694435 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.694973 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.695105 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.695231 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.695366 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:23Z","lastTransitionTime":"2025-11-25T07:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.798386 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.798689 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.798813 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.798923 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.799010 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:23Z","lastTransitionTime":"2025-11-25T07:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.902481 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.902731 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.902813 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.902922 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.903006 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:23Z","lastTransitionTime":"2025-11-25T07:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:23 crc kubenswrapper[5043]: I1125 07:16:23.962269 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:23 crc kubenswrapper[5043]: E1125 07:16:23.962535 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.005494 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.005562 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.005585 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.005656 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.005682 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:24Z","lastTransitionTime":"2025-11-25T07:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.108425 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.108471 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.108481 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.108496 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.108507 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:24Z","lastTransitionTime":"2025-11-25T07:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.211486 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.211672 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.211712 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.211748 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.211766 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:24Z","lastTransitionTime":"2025-11-25T07:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.313847 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.313891 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.313902 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.313920 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.313931 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:24Z","lastTransitionTime":"2025-11-25T07:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.416116 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.416173 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.416187 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.416209 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.416221 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:24Z","lastTransitionTime":"2025-11-25T07:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.518799 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.518839 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.518848 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.518861 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.518870 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:24Z","lastTransitionTime":"2025-11-25T07:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.621312 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.621382 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.621423 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.621455 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.621478 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:24Z","lastTransitionTime":"2025-11-25T07:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.724644 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.724718 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.724738 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.724761 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.724780 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:24Z","lastTransitionTime":"2025-11-25T07:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.827494 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.827568 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.827580 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.827598 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.827632 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:24Z","lastTransitionTime":"2025-11-25T07:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.930369 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.930405 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.930418 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.930436 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.930448 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:24Z","lastTransitionTime":"2025-11-25T07:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.962070 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.962140 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:24 crc kubenswrapper[5043]: I1125 07:16:24.962190 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:24 crc kubenswrapper[5043]: E1125 07:16:24.962369 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:24 crc kubenswrapper[5043]: E1125 07:16:24.962502 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:24 crc kubenswrapper[5043]: E1125 07:16:24.962709 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.033825 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.033900 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.033924 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.033957 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.033984 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:25Z","lastTransitionTime":"2025-11-25T07:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.136589 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.136689 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.136712 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.136742 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.136759 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:25Z","lastTransitionTime":"2025-11-25T07:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.239936 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.239991 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.240008 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.240032 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.240049 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:25Z","lastTransitionTime":"2025-11-25T07:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.342711 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.342775 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.342801 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.342834 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.342857 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:25Z","lastTransitionTime":"2025-11-25T07:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.431515 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs\") pod \"network-metrics-daemon-xqj4m\" (UID: \"e26eab68-d56e-4c83-9888-0a866e549524\") " pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:25 crc kubenswrapper[5043]: E1125 07:16:25.431860 5043 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 07:16:25 crc kubenswrapper[5043]: E1125 07:16:25.432015 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs podName:e26eab68-d56e-4c83-9888-0a866e549524 nodeName:}" failed. No retries permitted until 2025-11-25 07:16:33.431977853 +0000 UTC m=+57.600173614 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs") pod "network-metrics-daemon-xqj4m" (UID: "e26eab68-d56e-4c83-9888-0a866e549524") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.445523 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.445585 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.445638 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.445683 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.445706 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:25Z","lastTransitionTime":"2025-11-25T07:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.548158 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.548203 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.548214 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.548230 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.548241 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:25Z","lastTransitionTime":"2025-11-25T07:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.650984 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.651055 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.651077 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.651104 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.651126 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:25Z","lastTransitionTime":"2025-11-25T07:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.760572 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.760614 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.760623 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.760638 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.760647 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:25Z","lastTransitionTime":"2025-11-25T07:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.863976 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.864039 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.864055 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.864080 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.864101 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:25Z","lastTransitionTime":"2025-11-25T07:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.902597 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.913126 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.923193 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-11-25T07:16:25Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.946358 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-relea
se\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:25Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.962009 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:25 crc kubenswrapper[5043]: E1125 07:16:25.962181 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.968521 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.968576 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.968598 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.968661 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.968683 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:25Z","lastTransitionTime":"2025-11-25T07:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:25 crc kubenswrapper[5043]: I1125 07:16:25.979791 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:25Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.000408 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:25Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.025559 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:26Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.043960 5043 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:26Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.064785 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:26Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.072255 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:26 crc 
kubenswrapper[5043]: I1125 07:16:26.072318 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.072335 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.072359 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.072374 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:26Z","lastTransitionTime":"2025-11-25T07:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.090247 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f06fde9cb69633c51800c9c3698402db4f9024b9b033be018380c90bd10d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a3905ba5150fe7d3a9c063d9f27cfbc34d4c2add1dd8e241ebe0e8732a335b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:16Z\\\",\\\"message\\\":\\\".AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 07:16:16.257714 6353 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 07:16:16.257820 6353 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 07:16:16.257879 6353 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 07:16:16.257941 6353 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 07:16:16.258187 6353 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 07:16:16.258208 6353 factory.go:656] Stopping watch factory\\\\nI1125 07:16:16.258230 6353 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 07:16:16.258211 6353 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76f06fde9cb69633c51800c9c3698402db4f9024b9b033be018380c90bd10d53\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:18Z\\\",\\\"message\\\":\\\"d: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z]\\\\nI1125 07:16:18.106906 6602 services_controller.go:360] Finished syncing service ovn-kubernetes-node on namespace openshift-ovn-kubernetes for network=default : 80.242µs\\\\nI1125 07:16:18.106838 6602 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}\\\\nI1125 07:16:18.106911 6602 services_controller.go:443] Built service openshift-machine-api/machine-api-controllers LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8441, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, 
int\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:26Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.105942 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:26Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.121391 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:26Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.141526 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:26Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.159301 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:26Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.175307 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.175355 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.175370 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.175391 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.175408 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:26Z","lastTransitionTime":"2025-11-25T07:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.181825 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:26Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.194344 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:26Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.208867 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:26Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.222731 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce932f2-a1f0-4e68-8116-462d043d6a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40de1da3d89294cff4345cbf5cc2a3a08276c1e1a462c4515c78e9bc3123f277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7ec2e20e49766633390410a5fa037a78b4a
cba719fb663c16e5f99a68842f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t545b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:26Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.234019 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqj4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e26eab68-d56e-4c83-9888-0a866e549524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:26Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:26 crc 
kubenswrapper[5043]: I1125 07:16:26.278478 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.278531 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.278544 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.278563 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.278578 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:26Z","lastTransitionTime":"2025-11-25T07:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.381920 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.381973 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.381990 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.382010 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.382025 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:26Z","lastTransitionTime":"2025-11-25T07:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.486009 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.486076 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.486096 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.486138 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.486157 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:26Z","lastTransitionTime":"2025-11-25T07:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.589352 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.589437 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.589461 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.589489 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.589506 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:26Z","lastTransitionTime":"2025-11-25T07:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.692340 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.692431 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.692458 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.692495 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.692520 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:26Z","lastTransitionTime":"2025-11-25T07:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.796211 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.796276 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.796293 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.796319 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.796336 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:26Z","lastTransitionTime":"2025-11-25T07:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.900053 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.900106 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.900128 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.900149 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.900164 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:26Z","lastTransitionTime":"2025-11-25T07:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.962722 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.962955 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:26 crc kubenswrapper[5043]: E1125 07:16:26.963109 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.963130 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:26 crc kubenswrapper[5043]: E1125 07:16:26.963293 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:26 crc kubenswrapper[5043]: E1125 07:16:26.963505 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:26 crc kubenswrapper[5043]: I1125 07:16:26.998980 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8
ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-2
5T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:26Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.003338 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.003423 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.003443 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.003464 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.003480 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:27Z","lastTransitionTime":"2025-11-25T07:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.021983 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:27Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.041069 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:27Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.060031 5043 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:27Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.077560 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:27Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.090991 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e5627720
77e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:27Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.106613 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.106644 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.106651 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 
25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.106664 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.106672 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:27Z","lastTransitionTime":"2025-11-25T07:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.107465 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\
":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:27Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.120843 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed17eb56-5921-4618-8de7-166c01019089\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1448111cb3e3b27389baafd33293fcb690b89e0f54007afba41778c91cb8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2869a4db622ee8d96a52f7c058914b01302bbeac8b81ed67aa9c87f77a7f7d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5604ac4dec090e082d1843bc46f0857aad493c97e1d91208a938e7405333a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:27Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.133699 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:27Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.147547 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:27Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.160810 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:27Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.172907 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:27Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.198075 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f06fde9cb69633c51800c9c3698402db4f9024b9b033be018380c90bd10d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0a3905ba5150fe7d3a9c063d9f27cfbc34d4c2add1dd8e241ebe0e8732a335b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:16Z\\\",\\\"message\\\":\\\".AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 07:16:16.257714 6353 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1125 07:16:16.257820 6353 
reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 07:16:16.257879 6353 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 07:16:16.257941 6353 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 07:16:16.258187 6353 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 07:16:16.258208 6353 factory.go:656] Stopping watch factory\\\\nI1125 07:16:16.258230 6353 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 07:16:16.258211 6353 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76f06fde9cb69633c51800c9c3698402db4f9024b9b033be018380c90bd10d53\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:18Z\\\",\\\"message\\\":\\\"d: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z]\\\\nI1125 07:16:18.106906 6602 services_controller.go:360] Finished syncing service ovn-kubernetes-node on namespace openshift-ovn-kubernetes for network=default : 80.242µs\\\\nI1125 07:16:18.106838 6602 loadbalancer.go:304] Deleted 0 
stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}\\\\nI1125 07:16:18.106911 6602 services_controller.go:443] Built service openshift-machine-api/machine-api-controllers LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8441, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, int\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"h
ost-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:27Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.211810 
5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:27Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.223421 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:27Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.228941 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.228978 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.228992 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.229010 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.229024 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:27Z","lastTransitionTime":"2025-11-25T07:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.237836 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:27Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.250759 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce932f2-a1f0-4e68-8116-462d043d6a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40de1da3d89294cff4345cbf5cc2a3a08276c1e1a462c4515c78e9bc3123f277\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7ec2e20e49766633390410a5fa037a78b4acba719fb663c16e5f99a68842f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t545b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:27Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.262896 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e26eab68-d56e-4c83-9888-0a866e549524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:27Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:27 crc 
kubenswrapper[5043]: I1125 07:16:27.331354 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.331401 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.331415 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.331437 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.331451 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:27Z","lastTransitionTime":"2025-11-25T07:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.433324 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.433370 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.433382 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.433398 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.433410 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:27Z","lastTransitionTime":"2025-11-25T07:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.535774 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.535828 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.535840 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.535863 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.535878 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:27Z","lastTransitionTime":"2025-11-25T07:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.638864 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.638932 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.638950 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.638976 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.638995 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:27Z","lastTransitionTime":"2025-11-25T07:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.741970 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.742027 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.742043 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.742064 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.742080 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:27Z","lastTransitionTime":"2025-11-25T07:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.845179 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.845240 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.845257 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.845283 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.845303 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:27Z","lastTransitionTime":"2025-11-25T07:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.949095 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.949142 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.949152 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.949173 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.949184 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:27Z","lastTransitionTime":"2025-11-25T07:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:27 crc kubenswrapper[5043]: I1125 07:16:27.961743 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:27 crc kubenswrapper[5043]: E1125 07:16:27.961931 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.051297 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.051349 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.051364 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.051381 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.051394 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:28Z","lastTransitionTime":"2025-11-25T07:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.154521 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.154555 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.154563 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.154577 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.154587 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:28Z","lastTransitionTime":"2025-11-25T07:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.257658 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.257693 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.257701 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.257715 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.257724 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:28Z","lastTransitionTime":"2025-11-25T07:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.360645 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.360713 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.360730 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.360754 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.360773 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:28Z","lastTransitionTime":"2025-11-25T07:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.464068 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.464118 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.464127 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.464142 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.464151 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:28Z","lastTransitionTime":"2025-11-25T07:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.567719 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.567773 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.567790 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.567817 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.567834 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:28Z","lastTransitionTime":"2025-11-25T07:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.670641 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.670724 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.670750 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.670783 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.670806 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:28Z","lastTransitionTime":"2025-11-25T07:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.765596 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:28 crc kubenswrapper[5043]: E1125 07:16:28.765900 5043 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 07:16:28 crc kubenswrapper[5043]: E1125 07:16:28.766010 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 07:17:00.76598995 +0000 UTC m=+84.934185681 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.773238 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.773295 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.773314 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.773338 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.773356 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:28Z","lastTransitionTime":"2025-11-25T07:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.866667 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.866763 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.866820 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.866847 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:28 crc kubenswrapper[5043]: E1125 07:16:28.866945 5043 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 07:16:28 crc kubenswrapper[5043]: 
E1125 07:16:28.866981 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:17:00.866943258 +0000 UTC m=+85.035139029 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:16:28 crc kubenswrapper[5043]: E1125 07:16:28.867040 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 07:17:00.86702049 +0000 UTC m=+85.035216251 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 07:16:28 crc kubenswrapper[5043]: E1125 07:16:28.867076 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 07:16:28 crc kubenswrapper[5043]: E1125 07:16:28.867126 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 07:16:28 crc kubenswrapper[5043]: E1125 07:16:28.867152 5043 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:16:28 crc kubenswrapper[5043]: E1125 07:16:28.867232 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 07:17:00.867209846 +0000 UTC m=+85.035405607 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:16:28 crc kubenswrapper[5043]: E1125 07:16:28.867280 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 07:16:28 crc kubenswrapper[5043]: E1125 07:16:28.867377 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 07:16:28 crc kubenswrapper[5043]: E1125 07:16:28.867437 5043 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:16:28 crc kubenswrapper[5043]: E1125 07:16:28.867554 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 07:17:00.867527154 +0000 UTC m=+85.035722905 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.875829 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.875893 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.875909 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.875932 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.875949 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:28Z","lastTransitionTime":"2025-11-25T07:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.961943 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.961979 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.961994 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:28 crc kubenswrapper[5043]: E1125 07:16:28.962091 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:28 crc kubenswrapper[5043]: E1125 07:16:28.962316 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:28 crc kubenswrapper[5043]: E1125 07:16:28.962449 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.979532 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.979858 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.979957 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.980075 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:28 crc kubenswrapper[5043]: I1125 07:16:28.980171 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:28Z","lastTransitionTime":"2025-11-25T07:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.083530 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.083597 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.083654 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.083686 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.083745 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:29Z","lastTransitionTime":"2025-11-25T07:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.186505 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.186925 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.187044 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.187165 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.187231 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:29Z","lastTransitionTime":"2025-11-25T07:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.290113 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.290159 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.290170 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.290186 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.290198 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:29Z","lastTransitionTime":"2025-11-25T07:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.393369 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.393427 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.393444 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.393470 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.393489 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:29Z","lastTransitionTime":"2025-11-25T07:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.496974 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.497638 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.497766 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.497872 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.497985 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:29Z","lastTransitionTime":"2025-11-25T07:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.600450 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.600714 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.600785 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.600856 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.600922 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:29Z","lastTransitionTime":"2025-11-25T07:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.702841 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.703112 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.703187 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.703266 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.703327 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:29Z","lastTransitionTime":"2025-11-25T07:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.806374 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.806418 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.806428 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.806446 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.806457 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:29Z","lastTransitionTime":"2025-11-25T07:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.909360 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.909408 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.909418 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.909433 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.909448 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:29Z","lastTransitionTime":"2025-11-25T07:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.961895 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:29 crc kubenswrapper[5043]: E1125 07:16:29.962417 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.962937 5043 scope.go:117] "RemoveContainer" containerID="76f06fde9cb69633c51800c9c3698402db4f9024b9b033be018380c90bd10d53" Nov 25 07:16:29 crc kubenswrapper[5043]: I1125 07:16:29.983346 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"20
25-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:29Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.001029 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce932f2-a1f0-4e68-8116-462d043d6a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40de1da3d89294cff4345cbf5cc2a3a08276c1e1a462c4515c78e9bc3123f277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7ec2e20e49766633390410a5fa037a78b4a
cba719fb663c16e5f99a68842f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t545b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:29Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.012443 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.012483 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.012497 5043 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.012517 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.012529 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:30Z","lastTransitionTime":"2025-11-25T07:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.016876 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e26eab68-d56e-4c83-9888-0a866e549524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 
25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.035344 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\
\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 
07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.051323 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.072274 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a
5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.097214 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd3935
3da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.113545 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.116247 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.116276 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.116287 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.116304 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.116317 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:30Z","lastTransitionTime":"2025-11-25T07:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.132896 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc07289
3abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.147418 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.162324 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.181738 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f06fde9cb69633c51800c9c3698402db4f9024b9b033be018380c90bd10d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76f06fde9cb69633c51800c9c3698402db4f9024b9b033be018380c90bd10d53\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:18Z\\\",\\\"message\\\":\\\"d: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z]\\\\nI1125 07:16:18.106906 
6602 services_controller.go:360] Finished syncing service ovn-kubernetes-node on namespace openshift-ovn-kubernetes for network=default : 80.242µs\\\\nI1125 07:16:18.106838 6602 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}\\\\nI1125 07:16:18.106911 6602 services_controller.go:443] Built service openshift-machine-api/machine-api-controllers LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8441, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, int\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m5zz6_openshift-ovn-kubernetes(a8785a4c-82ff-4a78-83a0-463e977df530)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a7
9f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.197619 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed17eb56-5921-4618-8de7-166c01019089\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1448111cb3e3b27389baafd33293fcb690b89e0f54007afba41778c91cb8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2869a4db622ee8d96a52f7c058914b01302bbeac8b81ed67aa9c87f77a7f7d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5604ac4dec090e082d1843bc46f0857aad493c97e1d91208a938e7405333a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.212203 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.218912 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.218956 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.218966 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:30 crc 
kubenswrapper[5043]: I1125 07:16:30.218983 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.218996 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:30Z","lastTransitionTime":"2025-11-25T07:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.228918 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.246185 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.268408 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.285169 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.322790 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.322838 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.322851 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.322871 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.322883 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:30Z","lastTransitionTime":"2025-11-25T07:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.344036 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5zz6_a8785a4c-82ff-4a78-83a0-463e977df530/ovnkube-controller/1.log" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.347076 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerStarted","Data":"7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7"} Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.347597 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.363107 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d05
51752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.380753 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221
b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.406776 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\
"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7
e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.425280 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.425313 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.425323 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.425336 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.425346 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:30Z","lastTransitionTime":"2025-11-25T07:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.431533 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f
58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
25-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.455989 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.469717 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.484142 5043 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.497109 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.517009 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76f06fde9cb69633c51800c9c3698402db4f9024b9b033be018380c90bd10d53\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:18Z\\\",\\\"message\\\":\\\"d: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z]\\\\nI1125 07:16:18.106906 
6602 services_controller.go:360] Finished syncing service ovn-kubernetes-node on namespace openshift-ovn-kubernetes for network=default : 80.242µs\\\\nI1125 07:16:18.106838 6602 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}\\\\nI1125 07:16:18.106911 6602 services_controller.go:443] Built service openshift-machine-api/machine-api-controllers LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8441, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, 
int\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.526959 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.526996 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.527005 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.527018 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.527027 5043 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:30Z","lastTransitionTime":"2025-11-25T07:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.531207 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed17eb56-5921-4618-8de7-166c01019089\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1448111cb3e3b27389baafd33293fcb690b89e0f54007afba41778c91cb8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2869a4db622ee8d96a52f7c058914b01302bbeac8b81ed67aa9c87f77a7f7d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5604ac4dec090e082d1843bc46f0857aad493c97e1d91208a938e7405333a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.546099 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.561066 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.574033 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.586383 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.597947 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.610232 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.622580 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce932f2-a1f0-4e68-8116-462d043d6a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40de1da3d89294cff4345cbf5cc2a3a08276c1e1a462c4515c78e9bc3123f277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7ec2e20e49766633390410a5fa037a78b4a
cba719fb663c16e5f99a68842f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t545b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.629005 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.629044 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.629054 5043 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.629069 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.629082 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:30Z","lastTransitionTime":"2025-11-25T07:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.634358 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e26eab68-d56e-4c83-9888-0a866e549524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:30Z is after 2025-08-24T17:21:41Z" Nov 
25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.731534 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.731563 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.731570 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.731582 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.731591 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:30Z","lastTransitionTime":"2025-11-25T07:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.834410 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.834466 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.834482 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.834505 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.834521 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:30Z","lastTransitionTime":"2025-11-25T07:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.936846 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.936892 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.936904 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.936923 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.936936 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:30Z","lastTransitionTime":"2025-11-25T07:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.962339 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.962362 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:30 crc kubenswrapper[5043]: I1125 07:16:30.962364 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:30 crc kubenswrapper[5043]: E1125 07:16:30.962468 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:30 crc kubenswrapper[5043]: E1125 07:16:30.962550 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:30 crc kubenswrapper[5043]: E1125 07:16:30.962682 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.039969 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.040021 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.040036 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.040056 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.040074 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:31Z","lastTransitionTime":"2025-11-25T07:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.138258 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.142318 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.142353 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.142365 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.142380 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.142390 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:31Z","lastTransitionTime":"2025-11-25T07:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.154696 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed17eb56-5921-4618-8de7-166c01019089\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1448111cb3e3b27389baafd33293fcb690b89e0f54007afba41778c91cb8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2869a4db622ee8d96a52f7c058914
b01302bbeac8b81ed67aa9c87f77a7f7d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5604ac4dec090e082d1843bc46f0857aad493c97e1d91208a938e7405333a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.166968 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.177962 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.191081 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.203407 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.220680 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76f06fde9cb69633c51800c9c3698402db4f9024b9b033be018380c90bd10d53\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:18Z\\\",\\\"message\\\":\\\"d: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z]\\\\nI1125 07:16:18.106906 
6602 services_controller.go:360] Finished syncing service ovn-kubernetes-node on namespace openshift-ovn-kubernetes for network=default : 80.242µs\\\\nI1125 07:16:18.106838 6602 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}\\\\nI1125 07:16:18.106911 6602 services_controller.go:443] Built service openshift-machine-api/machine-api-controllers LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8441, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, 
int\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.239093 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.246978 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 
07:16:31.247196 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.247243 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.247265 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.247287 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:31Z","lastTransitionTime":"2025-11-25T07:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.255505 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.269348 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.286118 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce932f2-a1f0-4e68-8116-462d043d6a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40de1da3d89294cff4345cbf5cc2a3a08276c1e1a462c4515c78e9bc3123f277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7ec2e20e49766633390410a5fa037a78b4a
cba719fb663c16e5f99a68842f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t545b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.297687 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqj4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e26eab68-d56e-4c83-9888-0a866e549524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc 
kubenswrapper[5043]: I1125 07:16:31.331051 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.349557 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.349592 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.349622 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.349638 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.349647 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:31Z","lastTransitionTime":"2025-11-25T07:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.350859 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.353099 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5zz6_a8785a4c-82ff-4a78-83a0-463e977df530/ovnkube-controller/2.log" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.353978 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5zz6_a8785a4c-82ff-4a78-83a0-463e977df530/ovnkube-controller/1.log" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.357107 5043 generic.go:334] "Generic (PLEG): container finished" podID="a8785a4c-82ff-4a78-83a0-463e977df530" containerID="7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7" exitCode=1 Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.357163 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" 
event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerDied","Data":"7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7"} Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.357211 5043 scope.go:117] "RemoveContainer" containerID="76f06fde9cb69633c51800c9c3698402db4f9024b9b033be018380c90bd10d53" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.358404 5043 scope.go:117] "RemoveContainer" containerID="7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7" Nov 25 07:16:31 crc kubenswrapper[5043]: E1125 07:16:31.358705 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-m5zz6_openshift-ovn-kubernetes(a8785a4c-82ff-4a78-83a0-463e977df530)\"" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.371817 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.387175 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.402327 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.415699 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e5627720
77e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.435391 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a
5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.446297 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.451591 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.451690 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.451711 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.451738 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.451759 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:31Z","lastTransitionTime":"2025-11-25T07:16:31Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.462559 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.475582 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.490028 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce932f2-a1f0-4e68-8116-462d043d6a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40de1da3d89294cff4345cbf5cc2a3a08276c1e1a462c4515c78e9bc3123f277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7ec2e20e49766633390410a5fa037a78b4acba719fb663c16e5f99a68842f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t545b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.501735 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e26eab68-d56e-4c83-9888-0a866e549524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc 
kubenswrapper[5043]: I1125 07:16:31.516111 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b3353
3825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 
07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.528867 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.541727 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.553364 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.554241 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:31 crc 
kubenswrapper[5043]: I1125 07:16:31.554267 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.554277 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.554293 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.554304 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:31Z","lastTransitionTime":"2025-11-25T07:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.565174 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0
d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.579195 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a
5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.596495 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd3935
3da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.609702 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.626098 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.644774 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.656388 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.656431 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.656443 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.656459 5043 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.656471 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:31Z","lastTransitionTime":"2025-11-25T07:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.664070 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.688496 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76f06fde9cb69633c51800c9c3698402db4f9024b9b033be018380c90bd10d53\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:18Z\\\",\\\"message\\\":\\\"d: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed 
to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:18Z is after 2025-08-24T17:21:41Z]\\\\nI1125 07:16:18.106906 6602 services_controller.go:360] Finished syncing service ovn-kubernetes-node on namespace openshift-ovn-kubernetes for network=default : 80.242µs\\\\nI1125 07:16:18.106838 6602 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}\\\\nI1125 07:16:18.106911 6602 services_controller.go:443] Built service openshift-machine-api/machine-api-controllers LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8441, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, int\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:30Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 07:16:30.855199 6759 obj_retry.go:551] Creating *factory.egressNode crc took: 2.291452ms\\\\nI1125 07:16:30.855237 6759 factory.go:1336] Added *v1.Node event handler 7\\\\nI1125 07:16:30.855283 6759 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1125 07:16:30.855283 6759 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 07:16:30.855300 6759 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 07:16:30.855320 6759 factory.go:656] Stopping watch factory\\\\nI1125 07:16:30.855341 6759 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 07:16:30.855357 6759 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 07:16:30.855674 6759 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1125 07:16:30.855789 6759 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1125 07:16:30.855831 6759 ovnkube.go:599] Stopped ovnkube\\\\nI1125 07:16:30.855858 6759 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 07:16:30.855958 6759 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cec
adf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.700638 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed17eb56-5921-4618-8de7-166c01019089\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1448111cb3e3b27389baafd33293fcb690b89e0f54007afba41778c91cb8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2869a4db622ee8d96a52f7c058914b01302bbeac8b81ed67aa9c87f77a7f7d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5604ac4dec090e082d1843bc46f0857aad493c97e1d91208a938e7405333a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:31Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.760320 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.760412 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.760442 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.760473 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.760495 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:31Z","lastTransitionTime":"2025-11-25T07:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.863898 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.863971 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.863988 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.864014 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.864031 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:31Z","lastTransitionTime":"2025-11-25T07:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.962677 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:31 crc kubenswrapper[5043]: E1125 07:16:31.962885 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.967779 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.967869 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.967894 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.967920 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:31 crc kubenswrapper[5043]: I1125 07:16:31.968092 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:31Z","lastTransitionTime":"2025-11-25T07:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.071094 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.071147 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.071164 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.071187 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.071206 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:32Z","lastTransitionTime":"2025-11-25T07:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.174314 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.174392 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.174416 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.174447 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.174472 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:32Z","lastTransitionTime":"2025-11-25T07:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.277997 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.278078 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.278096 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.278123 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.278145 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:32Z","lastTransitionTime":"2025-11-25T07:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.365287 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5zz6_a8785a4c-82ff-4a78-83a0-463e977df530/ovnkube-controller/2.log" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.371563 5043 scope.go:117] "RemoveContainer" containerID="7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7" Nov 25 07:16:32 crc kubenswrapper[5043]: E1125 07:16:32.371856 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-m5zz6_openshift-ovn-kubernetes(a8785a4c-82ff-4a78-83a0-463e977df530)\"" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.381032 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.381099 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.381123 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.381152 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.381174 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:32Z","lastTransitionTime":"2025-11-25T07:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.390913 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed17eb56-5921-4618-8de7-166c01019089\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1448111cb3e3b27389baafd33293fcb690b89e0f54007afba41778c91cb8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containe
rID\\\":\\\"cri-o://f2869a4db622ee8d96a52f7c058914b01302bbeac8b81ed67aa9c87f77a7f7d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5604ac4dec090e082d1843bc46f0857aad493c97e1d91208a938e7405333a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:32Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.411277 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:32Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.429365 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:32Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.445434 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:32Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.461950 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:32Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.483742 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.483792 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.483803 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.483817 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.483826 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:32Z","lastTransitionTime":"2025-11-25T07:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.493013 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:30Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 07:16:30.855199 6759 obj_retry.go:551] Creating *factory.egressNode crc took: 2.291452ms\\\\nI1125 07:16:30.855237 6759 factory.go:1336] Added *v1.Node event handler 7\\\\nI1125 07:16:30.855283 6759 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1125 07:16:30.855283 6759 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 07:16:30.855300 6759 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 07:16:30.855320 6759 factory.go:656] Stopping watch factory\\\\nI1125 07:16:30.855341 6759 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 07:16:30.855357 6759 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 07:16:30.855674 6759 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1125 07:16:30.855789 6759 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1125 07:16:30.855831 6759 ovnkube.go:599] Stopped ovnkube\\\\nI1125 07:16:30.855858 6759 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 07:16:30.855958 6759 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m5zz6_openshift-ovn-kubernetes(a8785a4c-82ff-4a78-83a0-463e977df530)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a7
9f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:32Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.507784 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:32Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.524206 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:32Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.540448 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:32Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.556467 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce932f2-a1f0-4e68-8116-462d043d6a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40de1da3d89294cff4345cbf5cc2a3a08276c1e1a462c4515c78e9bc3123f277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7ec2e20e49766633390410a5fa037a78b4a
cba719fb663c16e5f99a68842f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t545b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:32Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.570796 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqj4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e26eab68-d56e-4c83-9888-0a866e549524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:32Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:32 crc 
kubenswrapper[5043]: I1125 07:16:32.586723 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.586754 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.586764 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.586776 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.586785 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:32Z","lastTransitionTime":"2025-11-25T07:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.590825 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:32Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.625295 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:32Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.641390 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fb
ead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:32Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.654536 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:32Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.666745 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:32Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.678208 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:32Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.688851 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:32 crc 
kubenswrapper[5043]: I1125 07:16:32.688884 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.688894 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.688909 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.688920 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:32Z","lastTransitionTime":"2025-11-25T07:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.689023 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0
d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:32Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.792028 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.792078 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.792093 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:32 crc 
kubenswrapper[5043]: I1125 07:16:32.792111 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.792123 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:32Z","lastTransitionTime":"2025-11-25T07:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.895819 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.895888 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.895907 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.895937 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.895960 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:32Z","lastTransitionTime":"2025-11-25T07:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.961968 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.962034 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.962029 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:32 crc kubenswrapper[5043]: E1125 07:16:32.962102 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:32 crc kubenswrapper[5043]: E1125 07:16:32.962211 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:32 crc kubenswrapper[5043]: E1125 07:16:32.962340 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.998522 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.998645 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.998657 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.998673 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:32 crc kubenswrapper[5043]: I1125 07:16:32.998685 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:32Z","lastTransitionTime":"2025-11-25T07:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.102475 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.102544 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.102562 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.102589 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.102636 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:33Z","lastTransitionTime":"2025-11-25T07:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.205722 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.205785 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.205802 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.205825 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.205845 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:33Z","lastTransitionTime":"2025-11-25T07:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.308716 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.308779 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.308798 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.308822 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.308841 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:33Z","lastTransitionTime":"2025-11-25T07:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.418111 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.418201 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.418229 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.418260 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.418282 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:33Z","lastTransitionTime":"2025-11-25T07:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.518215 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs\") pod \"network-metrics-daemon-xqj4m\" (UID: \"e26eab68-d56e-4c83-9888-0a866e549524\") " pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:33 crc kubenswrapper[5043]: E1125 07:16:33.518443 5043 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 07:16:33 crc kubenswrapper[5043]: E1125 07:16:33.518561 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs podName:e26eab68-d56e-4c83-9888-0a866e549524 nodeName:}" failed. No retries permitted until 2025-11-25 07:16:49.518530875 +0000 UTC m=+73.686726626 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs") pod "network-metrics-daemon-xqj4m" (UID: "e26eab68-d56e-4c83-9888-0a866e549524") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.521729 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.521799 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.521834 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.521864 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.521886 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:33Z","lastTransitionTime":"2025-11-25T07:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.625821 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.625849 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.625857 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.625871 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.625879 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:33Z","lastTransitionTime":"2025-11-25T07:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.729221 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.729263 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.729278 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.729299 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.729313 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:33Z","lastTransitionTime":"2025-11-25T07:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.790988 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.791060 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.791079 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.791104 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.791122 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:33Z","lastTransitionTime":"2025-11-25T07:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:33 crc kubenswrapper[5043]: E1125 07:16:33.810455 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:33Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.814641 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.814687 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.814702 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.814720 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.814733 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:33Z","lastTransitionTime":"2025-11-25T07:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.841788 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.841818 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.841828 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.841841 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.841849 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:33Z","lastTransitionTime":"2025-11-25T07:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.866795 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.866829 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.866837 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.866848 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.866857 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:33Z","lastTransitionTime":"2025-11-25T07:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:33 crc kubenswrapper[5043]: E1125 07:16:33.885814 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:33Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.890174 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.890232 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.890247 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.890269 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.890286 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:33Z","lastTransitionTime":"2025-11-25T07:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:33 crc kubenswrapper[5043]: E1125 07:16:33.909740 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:33Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:33 crc kubenswrapper[5043]: E1125 07:16:33.909967 5043 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.911967 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.912008 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.912038 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.912055 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.912068 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:33Z","lastTransitionTime":"2025-11-25T07:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:33 crc kubenswrapper[5043]: I1125 07:16:33.962483 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:33 crc kubenswrapper[5043]: E1125 07:16:33.962652 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.015867 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.015932 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.015952 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.015977 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.015995 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:34Z","lastTransitionTime":"2025-11-25T07:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.119702 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.119762 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.119779 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.119803 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.119821 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:34Z","lastTransitionTime":"2025-11-25T07:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.222297 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.222351 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.222370 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.222393 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.222410 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:34Z","lastTransitionTime":"2025-11-25T07:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.325337 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.325395 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.325413 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.325436 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.325453 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:34Z","lastTransitionTime":"2025-11-25T07:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.427815 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.427868 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.427883 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.427911 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.427930 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:34Z","lastTransitionTime":"2025-11-25T07:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.530109 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.530184 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.530208 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.530241 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.530263 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:34Z","lastTransitionTime":"2025-11-25T07:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.633131 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.633204 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.633221 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.633250 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.633282 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:34Z","lastTransitionTime":"2025-11-25T07:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.736161 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.736248 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.736273 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.736295 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.736314 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:34Z","lastTransitionTime":"2025-11-25T07:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.839792 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.839860 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.839878 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.839905 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.839921 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:34Z","lastTransitionTime":"2025-11-25T07:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.943246 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.943310 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.943327 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.943346 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.943360 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:34Z","lastTransitionTime":"2025-11-25T07:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.962146 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.962208 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:34 crc kubenswrapper[5043]: E1125 07:16:34.962293 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:34 crc kubenswrapper[5043]: I1125 07:16:34.962407 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:34 crc kubenswrapper[5043]: E1125 07:16:34.962497 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:34 crc kubenswrapper[5043]: E1125 07:16:34.962570 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.046395 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.046459 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.046477 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.046503 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.046520 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:35Z","lastTransitionTime":"2025-11-25T07:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.149879 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.149931 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.149940 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.149955 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.149963 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:35Z","lastTransitionTime":"2025-11-25T07:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.252563 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.252647 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.252659 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.252675 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.252686 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:35Z","lastTransitionTime":"2025-11-25T07:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.355183 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.355230 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.355241 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.355259 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.355272 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:35Z","lastTransitionTime":"2025-11-25T07:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.459338 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.459405 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.459413 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.459429 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.459438 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:35Z","lastTransitionTime":"2025-11-25T07:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.561835 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.562184 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.562227 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.562256 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.562278 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:35Z","lastTransitionTime":"2025-11-25T07:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.664798 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.664870 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.664887 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.665386 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.665457 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:35Z","lastTransitionTime":"2025-11-25T07:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.768100 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.768151 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.768167 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.768194 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.768212 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:35Z","lastTransitionTime":"2025-11-25T07:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.870855 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.870904 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.870923 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.870947 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.870965 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:35Z","lastTransitionTime":"2025-11-25T07:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.961924 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:35 crc kubenswrapper[5043]: E1125 07:16:35.962126 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.974659 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.974731 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.974784 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.974813 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:35 crc kubenswrapper[5043]: I1125 07:16:35.974835 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:35Z","lastTransitionTime":"2025-11-25T07:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.077833 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.077892 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.077914 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.077944 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.077965 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:36Z","lastTransitionTime":"2025-11-25T07:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.180826 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.180901 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.180918 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.180940 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.180957 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:36Z","lastTransitionTime":"2025-11-25T07:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.284419 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.284498 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.284525 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.284557 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.284579 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:36Z","lastTransitionTime":"2025-11-25T07:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.387490 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.387594 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.387986 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.388026 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.388107 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:36Z","lastTransitionTime":"2025-11-25T07:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.492226 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.492293 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.492316 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.492340 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.492360 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:36Z","lastTransitionTime":"2025-11-25T07:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.595689 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.595762 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.595782 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.595805 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.595825 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:36Z","lastTransitionTime":"2025-11-25T07:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.698745 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.698810 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.698828 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.698853 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.698870 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:36Z","lastTransitionTime":"2025-11-25T07:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.801999 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.802055 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.802071 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.802092 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.802109 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:36Z","lastTransitionTime":"2025-11-25T07:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.905234 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.905289 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.905307 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.905331 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.905349 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:36Z","lastTransitionTime":"2025-11-25T07:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.961787 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.961831 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.961869 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:36 crc kubenswrapper[5043]: E1125 07:16:36.963849 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:36 crc kubenswrapper[5043]: E1125 07:16:36.963930 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:36 crc kubenswrapper[5043]: E1125 07:16:36.964037 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:36 crc kubenswrapper[5043]: I1125 07:16:36.999048 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:30Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 07:16:30.855199 6759 obj_retry.go:551] Creating *factory.egressNode crc took: 2.291452ms\\\\nI1125 07:16:30.855237 6759 factory.go:1336] Added *v1.Node event handler 7\\\\nI1125 07:16:30.855283 6759 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1125 07:16:30.855283 6759 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 07:16:30.855300 6759 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 07:16:30.855320 6759 factory.go:656] Stopping watch factory\\\\nI1125 07:16:30.855341 6759 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 07:16:30.855357 6759 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 07:16:30.855674 6759 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1125 07:16:30.855789 6759 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1125 07:16:30.855831 6759 ovnkube.go:599] Stopped ovnkube\\\\nI1125 07:16:30.855858 6759 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 07:16:30.855958 6759 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m5zz6_openshift-ovn-kubernetes(a8785a4c-82ff-4a78-83a0-463e977df530)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a7
9f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:36Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.013083 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.013156 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.013179 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.013213 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.013400 5043 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:37Z","lastTransitionTime":"2025-11-25T07:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.019260 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed17eb56-5921-4618-8de7-166c01019089\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1448111cb3e3b27389baafd33293fcb690b89e0f54007afba41778c91cb8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2869a4db622ee8d96a52f7c058914b01302bbeac8b81ed67aa9c87f77a7f7d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5604ac4dec090e082d1843bc46f0857aad493c97e1d91208a938e7405333a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.036625 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.055395 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.072121 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.086173 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.100250 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.111516 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.115864 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.115893 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.115901 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.115916 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.115928 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:37Z","lastTransitionTime":"2025-11-25T07:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.122500 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.132286 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce932f2-a1f0-4e68-8116-462d043d6a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40de1da3d89294cff4345cbf5cc2a3a08276c1e1a462c4515c78e9bc3123f277\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7ec2e20e49766633390410a5fa037a78b4acba719fb663c16e5f99a68842f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t545b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.140723 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e26eab68-d56e-4c83-9888-0a866e549524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:37 crc 
kubenswrapper[5043]: I1125 07:16:37.149990 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.163526 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a
5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.182778 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd3935
3da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.198496 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fb
ead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.214503 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.218328 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.218360 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.218370 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.218385 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.218397 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:37Z","lastTransitionTime":"2025-11-25T07:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.229664 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.239744 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:37Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.320892 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:37 crc 
kubenswrapper[5043]: I1125 07:16:37.320932 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.320944 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.320962 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.320974 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:37Z","lastTransitionTime":"2025-11-25T07:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.423375 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.423412 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.423426 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.423446 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.423460 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:37Z","lastTransitionTime":"2025-11-25T07:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.526658 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.526727 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.526737 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.526751 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.526762 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:37Z","lastTransitionTime":"2025-11-25T07:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.629376 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.629413 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.629449 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.629466 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.629477 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:37Z","lastTransitionTime":"2025-11-25T07:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.733213 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.733273 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.733290 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.733313 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.733330 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:37Z","lastTransitionTime":"2025-11-25T07:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.835898 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.835933 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.835945 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.835971 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.835982 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:37Z","lastTransitionTime":"2025-11-25T07:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.938512 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.938545 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.938557 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.938572 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.938585 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:37Z","lastTransitionTime":"2025-11-25T07:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:37 crc kubenswrapper[5043]: I1125 07:16:37.962578 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:37 crc kubenswrapper[5043]: E1125 07:16:37.962845 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.041794 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.041843 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.041890 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.041911 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.041926 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:38Z","lastTransitionTime":"2025-11-25T07:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.145303 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.145368 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.145387 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.145413 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.145431 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:38Z","lastTransitionTime":"2025-11-25T07:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.248159 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.248223 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.248242 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.248267 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.248285 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:38Z","lastTransitionTime":"2025-11-25T07:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.351207 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.351255 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.351265 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.351283 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.351296 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:38Z","lastTransitionTime":"2025-11-25T07:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.453395 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.453454 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.453471 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.453502 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.453519 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:38Z","lastTransitionTime":"2025-11-25T07:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.556900 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.556974 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.556992 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.557015 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.557031 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:38Z","lastTransitionTime":"2025-11-25T07:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.661694 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.661765 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.661791 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.661821 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.661847 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:38Z","lastTransitionTime":"2025-11-25T07:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.764740 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.764804 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.764820 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.764844 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.764861 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:38Z","lastTransitionTime":"2025-11-25T07:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.868007 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.868072 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.868082 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.868113 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.868126 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:38Z","lastTransitionTime":"2025-11-25T07:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.962546 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.962673 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.962556 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:38 crc kubenswrapper[5043]: E1125 07:16:38.962812 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:38 crc kubenswrapper[5043]: E1125 07:16:38.962928 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:38 crc kubenswrapper[5043]: E1125 07:16:38.963063 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.970858 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.970908 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.970931 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.970960 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:38 crc kubenswrapper[5043]: I1125 07:16:38.970982 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:38Z","lastTransitionTime":"2025-11-25T07:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.073686 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.073752 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.073774 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.073803 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.073822 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:39Z","lastTransitionTime":"2025-11-25T07:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.177137 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.177208 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.177224 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.177251 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.177272 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:39Z","lastTransitionTime":"2025-11-25T07:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.280146 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.280199 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.280211 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.280229 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.280239 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:39Z","lastTransitionTime":"2025-11-25T07:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.382465 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.382502 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.382510 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.382525 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.382534 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:39Z","lastTransitionTime":"2025-11-25T07:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.485079 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.485126 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.485142 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.485165 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.485183 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:39Z","lastTransitionTime":"2025-11-25T07:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.588189 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.588255 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.588278 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.588308 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.588332 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:39Z","lastTransitionTime":"2025-11-25T07:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.691197 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.691234 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.691244 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.691258 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.691268 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:39Z","lastTransitionTime":"2025-11-25T07:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.793347 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.793416 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.793434 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.793458 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.793477 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:39Z","lastTransitionTime":"2025-11-25T07:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.897126 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.897186 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.897204 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.897227 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.897244 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:39Z","lastTransitionTime":"2025-11-25T07:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:39 crc kubenswrapper[5043]: I1125 07:16:39.961949 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:39 crc kubenswrapper[5043]: E1125 07:16:39.962167 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.000208 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.000249 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.000261 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.000278 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.000290 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:40Z","lastTransitionTime":"2025-11-25T07:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.103036 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.103084 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.103100 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.103122 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.103139 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:40Z","lastTransitionTime":"2025-11-25T07:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.206533 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.206644 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.206668 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.206697 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.206717 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:40Z","lastTransitionTime":"2025-11-25T07:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.310425 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.310468 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.310479 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.310495 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.310506 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:40Z","lastTransitionTime":"2025-11-25T07:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.413589 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.413692 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.413712 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.413735 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.413753 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:40Z","lastTransitionTime":"2025-11-25T07:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.516725 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.516785 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.516801 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.516838 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.516874 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:40Z","lastTransitionTime":"2025-11-25T07:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.619980 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.620040 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.620064 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.620087 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.620101 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:40Z","lastTransitionTime":"2025-11-25T07:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.722678 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.722764 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.722788 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.722821 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.722872 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:40Z","lastTransitionTime":"2025-11-25T07:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.825807 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.825883 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.825914 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.825958 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.825984 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:40Z","lastTransitionTime":"2025-11-25T07:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.928113 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.928145 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.928155 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.928170 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.928183 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:40Z","lastTransitionTime":"2025-11-25T07:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.962684 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.962716 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:40 crc kubenswrapper[5043]: I1125 07:16:40.962684 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:40 crc kubenswrapper[5043]: E1125 07:16:40.962816 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:40 crc kubenswrapper[5043]: E1125 07:16:40.962906 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:40 crc kubenswrapper[5043]: E1125 07:16:40.963067 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.030364 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.030397 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.030407 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.030423 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.030433 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:41Z","lastTransitionTime":"2025-11-25T07:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.132427 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.132473 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.132481 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.132494 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.132504 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:41Z","lastTransitionTime":"2025-11-25T07:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.235413 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.235461 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.235472 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.235490 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.235503 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:41Z","lastTransitionTime":"2025-11-25T07:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.337238 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.337296 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.337314 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.337338 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.337381 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:41Z","lastTransitionTime":"2025-11-25T07:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.440331 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.440365 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.440374 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.440386 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.440396 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:41Z","lastTransitionTime":"2025-11-25T07:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.542771 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.542820 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.542835 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.542853 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.542869 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:41Z","lastTransitionTime":"2025-11-25T07:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.645037 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.645084 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.645096 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.645112 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.645122 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:41Z","lastTransitionTime":"2025-11-25T07:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.747316 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.747356 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.747368 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.747384 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.747396 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:41Z","lastTransitionTime":"2025-11-25T07:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.850445 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.850485 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.850494 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.850509 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.850519 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:41Z","lastTransitionTime":"2025-11-25T07:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.952855 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.952902 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.952913 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.952927 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.952936 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:41Z","lastTransitionTime":"2025-11-25T07:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:41 crc kubenswrapper[5043]: I1125 07:16:41.962287 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:41 crc kubenswrapper[5043]: E1125 07:16:41.962401 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.055732 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.055772 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.055785 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.055803 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.055816 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:42Z","lastTransitionTime":"2025-11-25T07:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.157989 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.158021 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.158032 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.158046 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.158056 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:42Z","lastTransitionTime":"2025-11-25T07:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.261018 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.261092 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.261108 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.261132 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.261149 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:42Z","lastTransitionTime":"2025-11-25T07:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.363981 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.364040 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.364057 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.364084 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.364104 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:42Z","lastTransitionTime":"2025-11-25T07:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.466092 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.466125 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.466134 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.466149 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.466158 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:42Z","lastTransitionTime":"2025-11-25T07:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.569265 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.569331 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.569355 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.569386 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.569410 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:42Z","lastTransitionTime":"2025-11-25T07:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.672227 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.672273 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.672287 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.672305 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.672318 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:42Z","lastTransitionTime":"2025-11-25T07:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.774780 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.774824 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.774833 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.774850 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.774860 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:42Z","lastTransitionTime":"2025-11-25T07:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.877305 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.877352 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.877368 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.877385 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.877397 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:42Z","lastTransitionTime":"2025-11-25T07:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.962401 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.962463 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:42 crc kubenswrapper[5043]: E1125 07:16:42.962554 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.962595 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:42 crc kubenswrapper[5043]: E1125 07:16:42.962762 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:42 crc kubenswrapper[5043]: E1125 07:16:42.962837 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.980229 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.980266 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.980275 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.980291 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:42 crc kubenswrapper[5043]: I1125 07:16:42.980304 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:42Z","lastTransitionTime":"2025-11-25T07:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.082222 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.082266 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.082275 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.082296 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.082305 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:43Z","lastTransitionTime":"2025-11-25T07:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.184301 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.184359 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.184429 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.184452 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.184462 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:43Z","lastTransitionTime":"2025-11-25T07:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.286589 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.286634 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.286642 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.286655 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.286663 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:43Z","lastTransitionTime":"2025-11-25T07:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.388966 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.388999 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.389011 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.389028 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.389043 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:43Z","lastTransitionTime":"2025-11-25T07:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.491530 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.491576 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.491588 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.491630 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.491643 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:43Z","lastTransitionTime":"2025-11-25T07:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.593950 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.594002 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.594024 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.594050 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.594066 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:43Z","lastTransitionTime":"2025-11-25T07:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.696307 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.696342 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.696352 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.696368 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.696380 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:43Z","lastTransitionTime":"2025-11-25T07:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.798739 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.798799 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.798817 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.798839 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.798857 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:43Z","lastTransitionTime":"2025-11-25T07:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.901631 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.901670 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.901679 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.901694 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.901704 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:43Z","lastTransitionTime":"2025-11-25T07:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:43 crc kubenswrapper[5043]: I1125 07:16:43.962317 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:43 crc kubenswrapper[5043]: E1125 07:16:43.962434 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.003444 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.003488 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.003501 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.003519 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.003530 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:44Z","lastTransitionTime":"2025-11-25T07:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.108072 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.108117 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.108129 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.108146 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.108158 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:44Z","lastTransitionTime":"2025-11-25T07:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.211288 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.211339 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.211365 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.211383 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.211401 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:44Z","lastTransitionTime":"2025-11-25T07:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.218452 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.218497 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.218510 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.218526 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.218539 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:44Z","lastTransitionTime":"2025-11-25T07:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:44 crc kubenswrapper[5043]: E1125 07:16:44.230818 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:44Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.235477 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.235516 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.235526 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.235540 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.235552 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:44Z","lastTransitionTime":"2025-11-25T07:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:44 crc kubenswrapper[5043]: E1125 07:16:44.246471 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:44Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.267141 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.267431 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.267513 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.267705 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.267796 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:44Z","lastTransitionTime":"2025-11-25T07:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:44 crc kubenswrapper[5043]: E1125 07:16:44.287630 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:44Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.294465 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.294728 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.294819 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.294885 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.294939 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:44Z","lastTransitionTime":"2025-11-25T07:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:44 crc kubenswrapper[5043]: E1125 07:16:44.309432 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:44Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.312701 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.312804 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.312877 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.312946 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.313007 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:44Z","lastTransitionTime":"2025-11-25T07:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:44 crc kubenswrapper[5043]: E1125 07:16:44.324263 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:44Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:44 crc kubenswrapper[5043]: E1125 07:16:44.324423 5043 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.325902 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.326010 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.326086 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.326154 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.326227 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:44Z","lastTransitionTime":"2025-11-25T07:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.427867 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.427898 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.427908 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.427923 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.427933 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:44Z","lastTransitionTime":"2025-11-25T07:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.529789 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.529822 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.529832 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.529846 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.529859 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:44Z","lastTransitionTime":"2025-11-25T07:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.632711 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.632762 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.632783 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.632811 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.632832 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:44Z","lastTransitionTime":"2025-11-25T07:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.735249 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.735542 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.735660 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.735728 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.735793 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:44Z","lastTransitionTime":"2025-11-25T07:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.838317 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.838367 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.838383 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.838405 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.838421 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:44Z","lastTransitionTime":"2025-11-25T07:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.940918 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.940954 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.940963 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.940977 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.940986 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:44Z","lastTransitionTime":"2025-11-25T07:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.962339 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.962471 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:44 crc kubenswrapper[5043]: I1125 07:16:44.962734 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:44 crc kubenswrapper[5043]: E1125 07:16:44.962726 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:44 crc kubenswrapper[5043]: E1125 07:16:44.962864 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:44 crc kubenswrapper[5043]: E1125 07:16:44.962985 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.043400 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.043448 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.043457 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.043473 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.043483 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:45Z","lastTransitionTime":"2025-11-25T07:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.145743 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.145781 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.145793 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.145810 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.145821 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:45Z","lastTransitionTime":"2025-11-25T07:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.247925 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.247955 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.247963 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.247977 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.247985 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:45Z","lastTransitionTime":"2025-11-25T07:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.349937 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.349992 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.350005 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.350018 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.350045 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:45Z","lastTransitionTime":"2025-11-25T07:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.452010 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.452057 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.452071 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.452093 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.452107 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:45Z","lastTransitionTime":"2025-11-25T07:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.554831 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.555064 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.555160 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.555292 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.555427 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:45Z","lastTransitionTime":"2025-11-25T07:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.658171 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.658240 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.658258 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.658284 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.658301 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:45Z","lastTransitionTime":"2025-11-25T07:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.762139 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.762208 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.762227 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.762253 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.762274 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:45Z","lastTransitionTime":"2025-11-25T07:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.865202 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.865800 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.865892 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.865966 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.866028 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:45Z","lastTransitionTime":"2025-11-25T07:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.962584 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:45 crc kubenswrapper[5043]: E1125 07:16:45.962807 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.969225 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.969261 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.969273 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.969289 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:45 crc kubenswrapper[5043]: I1125 07:16:45.969299 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:45Z","lastTransitionTime":"2025-11-25T07:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.071508 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.071552 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.071562 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.071577 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.071586 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:46Z","lastTransitionTime":"2025-11-25T07:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.174025 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.174070 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.174080 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.174095 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.174107 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:46Z","lastTransitionTime":"2025-11-25T07:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.276390 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.276432 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.276444 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.276459 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.276469 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:46Z","lastTransitionTime":"2025-11-25T07:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.379151 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.379198 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.379217 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.379245 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.379266 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:46Z","lastTransitionTime":"2025-11-25T07:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.481470 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.481525 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.481536 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.481556 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.481571 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:46Z","lastTransitionTime":"2025-11-25T07:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.584133 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.584184 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.584195 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.584214 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.584227 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:46Z","lastTransitionTime":"2025-11-25T07:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.687040 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.687100 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.687118 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.687142 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.687159 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:46Z","lastTransitionTime":"2025-11-25T07:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.789496 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.789543 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.789553 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.789568 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.789579 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:46Z","lastTransitionTime":"2025-11-25T07:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.892252 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.892323 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.892361 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.892378 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.892389 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:46Z","lastTransitionTime":"2025-11-25T07:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.961987 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.962063 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.962194 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:46 crc kubenswrapper[5043]: E1125 07:16:46.962211 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:46 crc kubenswrapper[5043]: E1125 07:16:46.962274 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.962831 5043 scope.go:117] "RemoveContainer" containerID="7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7" Nov 25 07:16:46 crc kubenswrapper[5043]: E1125 07:16:46.963032 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-m5zz6_openshift-ovn-kubernetes(a8785a4c-82ff-4a78-83a0-463e977df530)\"" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" Nov 25 07:16:46 crc kubenswrapper[5043]: E1125 07:16:46.963314 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.975201 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.981037 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed17eb56-5921-4618-8de7-166c01019089\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1448111cb3e3b27389baafd33293fcb690b89e0f54007afba41778c91cb8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2869a4db622ee8d96a52f7c058914b01302bbeac8b81ed67aa9c87f77a7f7d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5604ac4dec090e082d1843bc46f0857aad493c97e1d91208a938e7405333a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:46Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.996242 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:46Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.997256 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.997322 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.997343 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.997366 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:46 crc kubenswrapper[5043]: I1125 07:16:46.997385 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:46Z","lastTransitionTime":"2025-11-25T07:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.010427 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:47Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.025030 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:47Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.038554 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:47Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.062477 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:30Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 07:16:30.855199 6759 obj_retry.go:551] Creating *factory.egressNode crc took: 2.291452ms\\\\nI1125 07:16:30.855237 6759 factory.go:1336] Added *v1.Node event handler 7\\\\nI1125 07:16:30.855283 6759 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1125 07:16:30.855283 6759 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 07:16:30.855300 6759 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 07:16:30.855320 6759 factory.go:656] Stopping watch factory\\\\nI1125 07:16:30.855341 6759 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 07:16:30.855357 6759 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 07:16:30.855674 6759 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1125 07:16:30.855789 6759 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1125 07:16:30.855831 6759 ovnkube.go:599] Stopped ovnkube\\\\nI1125 07:16:30.855858 6759 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 07:16:30.855958 6759 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m5zz6_openshift-ovn-kubernetes(a8785a4c-82ff-4a78-83a0-463e977df530)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a7
9f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:47Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.075545 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:47Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.086592 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:47Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.097587 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:47Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.099111 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.099153 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.099164 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.099178 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.099187 5043 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:47Z","lastTransitionTime":"2025-11-25T07:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.109559 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce932f2-a1f0-4e68-8116-462d043d6a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40de1da3d89294cff4345cbf5cc2a3a08276c1e1a462c4515c78e9bc3123f277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7ec2e20e49766633390410a5fa037a78b4acba719fb663c16e5f99a68842f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t545b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:47Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.121309 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e26eab68-d56e-4c83-9888-0a866e549524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:47Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:47 crc 
kubenswrapper[5043]: I1125 07:16:47.138870 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:47Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.155412 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:47Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.170279 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:47Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.189053 5043 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:47Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.202028 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.202069 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.202081 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.202104 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.202116 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:47Z","lastTransitionTime":"2025-11-25T07:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.208061 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:47Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.222119 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"}
,{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:47Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.237825 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9
d5b0ec20f22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:
05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"
cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:47Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.310901 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.310969 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.310982 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.311003 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.311021 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:47Z","lastTransitionTime":"2025-11-25T07:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.413941 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.414285 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.414416 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.414578 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.414801 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:47Z","lastTransitionTime":"2025-11-25T07:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.518081 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.518124 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.518136 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.518152 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.518163 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:47Z","lastTransitionTime":"2025-11-25T07:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.620993 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.621043 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.621054 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.621077 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.621088 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:47Z","lastTransitionTime":"2025-11-25T07:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.723884 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.723932 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.723943 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.723961 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.723973 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:47Z","lastTransitionTime":"2025-11-25T07:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.825949 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.825985 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.825994 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.826009 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.826019 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:47Z","lastTransitionTime":"2025-11-25T07:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.927826 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.927859 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.927867 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.927879 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.927888 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:47Z","lastTransitionTime":"2025-11-25T07:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:47 crc kubenswrapper[5043]: I1125 07:16:47.961675 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:47 crc kubenswrapper[5043]: E1125 07:16:47.961813 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.030994 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.031052 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.031063 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.031079 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.031088 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:48Z","lastTransitionTime":"2025-11-25T07:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.134392 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.134431 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.134443 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.134459 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.134469 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:48Z","lastTransitionTime":"2025-11-25T07:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.238254 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.238300 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.238310 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.238326 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.238336 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:48Z","lastTransitionTime":"2025-11-25T07:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.341043 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.341120 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.341133 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.341156 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.341168 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:48Z","lastTransitionTime":"2025-11-25T07:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.444322 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.444408 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.444428 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.444451 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.444467 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:48Z","lastTransitionTime":"2025-11-25T07:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.547902 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.548003 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.548024 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.548048 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.548065 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:48Z","lastTransitionTime":"2025-11-25T07:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.651897 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.651995 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.652014 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.652043 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.652062 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:48Z","lastTransitionTime":"2025-11-25T07:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.755212 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.755264 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.755280 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.755300 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.755316 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:48Z","lastTransitionTime":"2025-11-25T07:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.857592 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.857643 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.857653 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.857667 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.857676 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:48Z","lastTransitionTime":"2025-11-25T07:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.960432 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.960477 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.960488 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.960501 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.960511 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:48Z","lastTransitionTime":"2025-11-25T07:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.961827 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:48 crc kubenswrapper[5043]: E1125 07:16:48.961931 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.961966 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:48 crc kubenswrapper[5043]: I1125 07:16:48.961985 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:48 crc kubenswrapper[5043]: E1125 07:16:48.962022 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:48 crc kubenswrapper[5043]: E1125 07:16:48.962221 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.062996 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.063035 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.063044 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.063062 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.063073 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:49Z","lastTransitionTime":"2025-11-25T07:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.165802 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.166201 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.166393 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.166595 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.166903 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:49Z","lastTransitionTime":"2025-11-25T07:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.270195 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.270484 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.270566 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.270675 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.270769 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:49Z","lastTransitionTime":"2025-11-25T07:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.373692 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.373970 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.374074 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.374145 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.374227 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:49Z","lastTransitionTime":"2025-11-25T07:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.476938 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.476983 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.476992 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.477006 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.477016 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:49Z","lastTransitionTime":"2025-11-25T07:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.577894 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs\") pod \"network-metrics-daemon-xqj4m\" (UID: \"e26eab68-d56e-4c83-9888-0a866e549524\") " pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:49 crc kubenswrapper[5043]: E1125 07:16:49.578047 5043 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 07:16:49 crc kubenswrapper[5043]: E1125 07:16:49.578127 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs podName:e26eab68-d56e-4c83-9888-0a866e549524 nodeName:}" failed. No retries permitted until 2025-11-25 07:17:21.57810808 +0000 UTC m=+105.746303851 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs") pod "network-metrics-daemon-xqj4m" (UID: "e26eab68-d56e-4c83-9888-0a866e549524") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.579220 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.579245 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.579256 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.579271 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.579283 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:49Z","lastTransitionTime":"2025-11-25T07:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.681222 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.681265 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.681277 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.681295 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.681306 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:49Z","lastTransitionTime":"2025-11-25T07:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.784339 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.784375 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.784388 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.784403 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.784417 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:49Z","lastTransitionTime":"2025-11-25T07:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.887367 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.887749 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.887874 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.888002 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.888134 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:49Z","lastTransitionTime":"2025-11-25T07:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.962018 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:49 crc kubenswrapper[5043]: E1125 07:16:49.962140 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.990336 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.990583 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.990705 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.990795 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:49 crc kubenswrapper[5043]: I1125 07:16:49.990882 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:49Z","lastTransitionTime":"2025-11-25T07:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.094166 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.094213 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.094227 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.094244 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.094256 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:50Z","lastTransitionTime":"2025-11-25T07:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.196524 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.196786 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.196863 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.196937 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.196999 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:50Z","lastTransitionTime":"2025-11-25T07:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.299280 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.299329 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.299341 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.299360 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.299372 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:50Z","lastTransitionTime":"2025-11-25T07:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.402076 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.402125 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.402139 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.402159 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.402172 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:50Z","lastTransitionTime":"2025-11-25T07:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.505510 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.505564 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.505586 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.505648 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.505666 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:50Z","lastTransitionTime":"2025-11-25T07:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.608649 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.608679 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.608688 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.608702 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.608713 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:50Z","lastTransitionTime":"2025-11-25T07:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.710742 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.710803 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.710821 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.710846 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.710864 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:50Z","lastTransitionTime":"2025-11-25T07:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.813906 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.813956 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.813972 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.813995 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.814012 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:50Z","lastTransitionTime":"2025-11-25T07:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.916598 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.916672 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.916684 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.916699 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.916711 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:50Z","lastTransitionTime":"2025-11-25T07:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.962002 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:50 crc kubenswrapper[5043]: E1125 07:16:50.962388 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.962032 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:50 crc kubenswrapper[5043]: E1125 07:16:50.963172 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:50 crc kubenswrapper[5043]: I1125 07:16:50.962017 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:50 crc kubenswrapper[5043]: E1125 07:16:50.963655 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.019320 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.019639 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.019796 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.019915 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.019998 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:51Z","lastTransitionTime":"2025-11-25T07:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.122349 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.122763 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.122872 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.122961 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.123044 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:51Z","lastTransitionTime":"2025-11-25T07:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.225497 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.225563 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.225629 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.225653 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.225669 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:51Z","lastTransitionTime":"2025-11-25T07:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.328289 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.328387 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.328409 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.328435 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.328450 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:51Z","lastTransitionTime":"2025-11-25T07:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.431928 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.432008 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.432042 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.432067 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.432089 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:51Z","lastTransitionTime":"2025-11-25T07:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.535517 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.535577 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.535598 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.535657 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.535680 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:51Z","lastTransitionTime":"2025-11-25T07:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.638357 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.638423 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.638441 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.638468 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.638487 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:51Z","lastTransitionTime":"2025-11-25T07:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.742845 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.742923 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.742947 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.742983 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.743048 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:51Z","lastTransitionTime":"2025-11-25T07:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.846022 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.846088 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.846113 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.846142 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.846167 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:51Z","lastTransitionTime":"2025-11-25T07:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.949456 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.949519 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.949568 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.949591 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.949676 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:51Z","lastTransitionTime":"2025-11-25T07:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:51 crc kubenswrapper[5043]: I1125 07:16:51.962051 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:51 crc kubenswrapper[5043]: E1125 07:16:51.962231 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.053005 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.053933 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.054085 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.054227 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.054372 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:52Z","lastTransitionTime":"2025-11-25T07:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.164692 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.164755 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.164779 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.164810 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.164831 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:52Z","lastTransitionTime":"2025-11-25T07:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.268238 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.268789 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.268943 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.269071 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.269191 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:52Z","lastTransitionTime":"2025-11-25T07:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.372745 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.372804 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.372989 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.373033 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.373056 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:52Z","lastTransitionTime":"2025-11-25T07:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.441015 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5gnzs_6aa0c167-9335-44ce-975c-715ce1f43383/kube-multus/0.log" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.441076 5043 generic.go:334] "Generic (PLEG): container finished" podID="6aa0c167-9335-44ce-975c-715ce1f43383" containerID="c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a" exitCode=1 Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.441117 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5gnzs" event={"ID":"6aa0c167-9335-44ce-975c-715ce1f43383","Type":"ContainerDied","Data":"c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a"} Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.441677 5043 scope.go:117] "RemoveContainer" containerID="c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.464459 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:52Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.475531 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.475583 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.475595 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 
07:16:52.475655 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.475682 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:52Z","lastTransitionTime":"2025-11-25T07:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.490740 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:51Z\\\",\\\"message\\\":\\\"2025-11-25T07:16:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e933b4bb-47fc-4136-b291-1892215953d7\\\\n2025-11-25T07:16:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e933b4bb-47fc-4136-b291-1892215953d7 to /host/opt/cni/bin/\\\\n2025-11-25T07:16:06Z [verbose] multus-daemon started\\\\n2025-11-25T07:16:06Z [verbose] Readiness Indicator file check\\\\n2025-11-25T07:16:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:52Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.506873 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-25T07:16:52Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.526683 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:52Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.546686 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54
b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hos
tIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:52Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.563261 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fb
ead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:52Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.578627 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.578659 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.578668 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.578555 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:52Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.578683 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.578902 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:52Z","lastTransitionTime":"2025-11-25T07:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.594741 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:52Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.608488 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:52Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.626508 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:30Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 07:16:30.855199 6759 obj_retry.go:551] Creating *factory.egressNode crc took: 2.291452ms\\\\nI1125 07:16:30.855237 6759 factory.go:1336] Added *v1.Node event handler 7\\\\nI1125 07:16:30.855283 6759 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1125 07:16:30.855283 6759 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 07:16:30.855300 6759 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 07:16:30.855320 6759 factory.go:656] Stopping watch factory\\\\nI1125 07:16:30.855341 6759 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 07:16:30.855357 6759 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 07:16:30.855674 6759 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1125 07:16:30.855789 6759 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1125 07:16:30.855831 6759 ovnkube.go:599] Stopped ovnkube\\\\nI1125 07:16:30.855858 6759 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 07:16:30.855958 6759 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m5zz6_openshift-ovn-kubernetes(a8785a4c-82ff-4a78-83a0-463e977df530)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a7
9f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:52Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.638764 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed17eb56-5921-4618-8de7-166c01019089\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1448111cb3e3b27389baafd33293fcb690b89e0f54007afba41778c91cb8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2869a4db622ee8d96a52f7c058914b01302bbeac8b81ed67aa9c87f77a7f7d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5604ac4dec090e082d1843bc46f0857aad493c97e1d91208a938e7405333a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:52Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.649755 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:52Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.664779 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:52Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.677185 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:52Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.681504 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.681553 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.681564 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.681578 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.681589 5043 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:52Z","lastTransitionTime":"2025-11-25T07:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.686024 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:52Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.695442 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqj4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e26eab68-d56e-4c83-9888-0a866e549524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:52Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:52 crc 
kubenswrapper[5043]: I1125 07:16:52.705793 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ff5d7-6edd-457b-a3ce-daa78aa19170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf975385c9f583d850c769c4b634f17f0e0b358fee121274a50c149726bf5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://0d44554d44407948ef46ff47a479ac1397fe7161d440f16b47ad78a707d97f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d44554d44407948ef46ff47a479ac1397fe7161d440f16b47ad78a707d97f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:52Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.715660 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:52Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.729802 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce932f2-a1f0-4e68-8116-462d043d6a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40de1da3d89294cff4345cbf5cc2a3a08276c1e1a462c4515c78e9bc3123f277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7ec2e20e49766633390410a5fa037a78b4acba719fb663c16e5f99a68842f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t545b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:52Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.783974 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.784055 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.784070 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.784086 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.784099 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:52Z","lastTransitionTime":"2025-11-25T07:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.886416 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.886467 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.886480 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.886497 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.886512 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:52Z","lastTransitionTime":"2025-11-25T07:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.962590 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.962675 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.962627 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:52 crc kubenswrapper[5043]: E1125 07:16:52.962808 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:52 crc kubenswrapper[5043]: E1125 07:16:52.962888 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:52 crc kubenswrapper[5043]: E1125 07:16:52.962961 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.989094 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.989132 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.989142 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.989157 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:52 crc kubenswrapper[5043]: I1125 07:16:52.989168 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:52Z","lastTransitionTime":"2025-11-25T07:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.091989 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.092040 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.092058 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.092081 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.092098 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:53Z","lastTransitionTime":"2025-11-25T07:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.195508 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.195571 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.195596 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.195661 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.195687 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:53Z","lastTransitionTime":"2025-11-25T07:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.298502 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.298568 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.298590 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.298667 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.298693 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:53Z","lastTransitionTime":"2025-11-25T07:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.401374 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.401438 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.401456 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.401478 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.401494 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:53Z","lastTransitionTime":"2025-11-25T07:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.447778 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5gnzs_6aa0c167-9335-44ce-975c-715ce1f43383/kube-multus/0.log" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.447860 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5gnzs" event={"ID":"6aa0c167-9335-44ce-975c-715ce1f43383","Type":"ContainerStarted","Data":"4066fa7f0a925be9090ea5c1746c5f49e5e16dbfbaf8855136d7417ba73fb59c"} Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.463064 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e26eab68-d56e-4c83-9888-0a866e549524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:53Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:53 crc 
kubenswrapper[5043]: I1125 07:16:53.480341 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ff5d7-6edd-457b-a3ce-daa78aa19170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf975385c9f583d850c769c4b634f17f0e0b358fee121274a50c149726bf5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://0d44554d44407948ef46ff47a479ac1397fe7161d440f16b47ad78a707d97f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d44554d44407948ef46ff47a479ac1397fe7161d440f16b47ad78a707d97f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:53Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.492746 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:53Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.505523 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.505795 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.505912 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.506041 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.506173 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:53Z","lastTransitionTime":"2025-11-25T07:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.510084 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce932f2-a1f0-4e68-8116-462d043d6a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40de1da3d89294cff4345cbf5cc2a3a08276c1e1a462c4515c78e9bc3123f277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7ec2e20e49766633390410a5fa037a78b4acba719fb663c16e5f99a68842f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t545b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:53Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.527648 5043 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:53Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.543407 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4066fa7f0a925be9090ea5c1746c5f49e5e16dbfbaf8855136d7417ba73fb59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:51Z\\\",\\\"message\\\":\\\"2025-11-25T07:16:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e933b4bb-47fc-4136-b291-1892215953d7\\\\n2025-11-25T07:16:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e933b4bb-47fc-4136-b291-1892215953d7 to /host/opt/cni/bin/\\\\n2025-11-25T07:16:06Z [verbose] multus-daemon started\\\\n2025-11-25T07:16:06Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T07:16:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:53Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.561433 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0
d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:53Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.584821 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a
5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:53Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.609685 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.609727 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.609739 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.609758 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.609772 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:53Z","lastTransitionTime":"2025-11-25T07:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.611512 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:53Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.633339 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fb
ead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:53Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.652958 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:53Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.675082 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:53Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.693690 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:53Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.712501 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.712555 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.712575 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.712602 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.712658 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:53Z","lastTransitionTime":"2025-11-25T07:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.728225 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:30Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 07:16:30.855199 6759 obj_retry.go:551] Creating *factory.egressNode crc took: 2.291452ms\\\\nI1125 07:16:30.855237 6759 factory.go:1336] Added *v1.Node event handler 7\\\\nI1125 07:16:30.855283 6759 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1125 07:16:30.855283 6759 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 07:16:30.855300 6759 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 07:16:30.855320 6759 factory.go:656] Stopping watch factory\\\\nI1125 07:16:30.855341 6759 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 07:16:30.855357 6759 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 07:16:30.855674 6759 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1125 07:16:30.855789 6759 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1125 07:16:30.855831 6759 ovnkube.go:599] Stopped ovnkube\\\\nI1125 07:16:30.855858 6759 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 07:16:30.855958 6759 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m5zz6_openshift-ovn-kubernetes(a8785a4c-82ff-4a78-83a0-463e977df530)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a7
9f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:53Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.747179 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed17eb56-5921-4618-8de7-166c01019089\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1448111cb3e3b27389baafd33293fcb690b89e0f54007afba41778c91cb8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2869a4db622ee8d96a52f7c058914b01302bbeac8b81ed67aa9c87f77a7f7d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5604ac4dec090e082d1843bc46f0857aad493c97e1d91208a938e7405333a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:53Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.767734 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:53Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.789853 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:53Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.811328 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:53Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.815699 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.815779 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.815801 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.815827 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.815845 5043 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:53Z","lastTransitionTime":"2025-11-25T07:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.830820 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:53Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.919129 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.919175 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.919192 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.919212 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.919226 5043 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:53Z","lastTransitionTime":"2025-11-25T07:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:53 crc kubenswrapper[5043]: I1125 07:16:53.962703 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:53 crc kubenswrapper[5043]: E1125 07:16:53.962980 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.022791 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.022872 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.022891 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.022916 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.022936 5043 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:54Z","lastTransitionTime":"2025-11-25T07:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.125174 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.125240 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.125259 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.125285 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.125307 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:54Z","lastTransitionTime":"2025-11-25T07:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.229125 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.229190 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.229210 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.229240 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.229258 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:54Z","lastTransitionTime":"2025-11-25T07:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.332271 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.332332 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.332349 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.332373 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.332391 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:54Z","lastTransitionTime":"2025-11-25T07:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.417254 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.417330 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.417349 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.417374 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.417392 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:54Z","lastTransitionTime":"2025-11-25T07:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:54 crc kubenswrapper[5043]: E1125 07:16:54.442465 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:54Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.448004 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.448069 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.448097 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.448125 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.448144 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:54Z","lastTransitionTime":"2025-11-25T07:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:54 crc kubenswrapper[5043]: E1125 07:16:54.468241 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:54Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.472968 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.473035 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.473061 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.473091 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.473116 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:54Z","lastTransitionTime":"2025-11-25T07:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:54 crc kubenswrapper[5043]: E1125 07:16:54.493764 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:54Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.498694 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.498744 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.498760 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.498787 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.498811 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:54Z","lastTransitionTime":"2025-11-25T07:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:54 crc kubenswrapper[5043]: E1125 07:16:54.519029 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:54Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.524160 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.524229 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.524255 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.524280 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.524300 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:54Z","lastTransitionTime":"2025-11-25T07:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:54 crc kubenswrapper[5043]: E1125 07:16:54.547327 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:54Z is after 2025-08-24T17:21:41Z"
Nov 25 07:16:54 crc kubenswrapper[5043]: E1125 07:16:54.547555 5043 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.549647 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.549710 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.549734 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.549763 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.549786 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:54Z","lastTransitionTime":"2025-11-25T07:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.657708 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.657787 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.657823 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.657852 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.657874 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:54Z","lastTransitionTime":"2025-11-25T07:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.761927 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.762029 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.762047 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.762071 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.762090 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:54Z","lastTransitionTime":"2025-11-25T07:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.864494 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.864585 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.864656 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.864694 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.864712 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:54Z","lastTransitionTime":"2025-11-25T07:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.961981 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.962064 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.962014 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 07:16:54 crc kubenswrapper[5043]: E1125 07:16:54.962185 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 07:16:54 crc kubenswrapper[5043]: E1125 07:16:54.962380 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 07:16:54 crc kubenswrapper[5043]: E1125 07:16:54.962499 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.967309 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.967449 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.967464 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.967482 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:54 crc kubenswrapper[5043]: I1125 07:16:54.967494 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:54Z","lastTransitionTime":"2025-11-25T07:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.071041 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.071112 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.071129 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.071156 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.071174 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:55Z","lastTransitionTime":"2025-11-25T07:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.174116 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.174187 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.174211 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.174241 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.174265 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:55Z","lastTransitionTime":"2025-11-25T07:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.277892 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.277948 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.277965 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.277987 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.278003 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:55Z","lastTransitionTime":"2025-11-25T07:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.381024 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.381092 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.381109 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.381134 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.381151 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:55Z","lastTransitionTime":"2025-11-25T07:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.484142 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.484191 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.484201 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.484219 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.484230 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:55Z","lastTransitionTime":"2025-11-25T07:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.587367 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.587431 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.587448 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.587471 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.587493 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:55Z","lastTransitionTime":"2025-11-25T07:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.690940 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.690996 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.691017 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.691039 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.691055 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:55Z","lastTransitionTime":"2025-11-25T07:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.794021 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.794081 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.794099 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.794123 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.794154 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:55Z","lastTransitionTime":"2025-11-25T07:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.896943 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.897088 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.897110 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.897173 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.897194 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:55Z","lastTransitionTime":"2025-11-25T07:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:16:55 crc kubenswrapper[5043]: I1125 07:16:55.962548 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m"
Nov 25 07:16:55 crc kubenswrapper[5043]: E1125 07:16:55.962764 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524"
Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.000235 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.000290 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.000306 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.000329 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.000345 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:56Z","lastTransitionTime":"2025-11-25T07:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.102596 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.102658 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.102669 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.102682 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.102694 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:56Z","lastTransitionTime":"2025-11-25T07:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.205716 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.205790 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.205815 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.205844 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.205869 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:56Z","lastTransitionTime":"2025-11-25T07:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.309273 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.309335 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.309349 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.309371 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.309387 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:56Z","lastTransitionTime":"2025-11-25T07:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.412436 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.412490 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.412506 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.412530 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.412548 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:56Z","lastTransitionTime":"2025-11-25T07:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.515717 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.515792 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.515813 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.515830 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.515843 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:56Z","lastTransitionTime":"2025-11-25T07:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.619065 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.619139 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.619161 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.619188 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.619211 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:56Z","lastTransitionTime":"2025-11-25T07:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.722339 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.722393 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.722413 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.722439 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.722456 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:56Z","lastTransitionTime":"2025-11-25T07:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.825683 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.825748 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.825775 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.825807 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.825834 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:56Z","lastTransitionTime":"2025-11-25T07:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.928382 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.928455 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.928477 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.928531 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.928554 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:56Z","lastTransitionTime":"2025-11-25T07:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.961875 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.961972 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:56 crc kubenswrapper[5043]: E1125 07:16:56.962110 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.962198 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:56 crc kubenswrapper[5043]: E1125 07:16:56.962397 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:56 crc kubenswrapper[5043]: E1125 07:16:56.962517 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.982141 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce932f2-a1f0-4e68-8116-462d043d6a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40de1da3d89294cff4345cbf5cc2a3a08276c1e1a462c4515c78e9bc3123f277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-
plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7ec2e20e49766633390410a5fa037a78b4acba719fb663c16e5f99a68842f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t545b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:56Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:56 crc kubenswrapper[5043]: I1125 07:16:56.997525 5043 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/network-metrics-daemon-xqj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e26eab68-d56e-4c83-9888-0a866e549524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:56Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:57 crc 
kubenswrapper[5043]: I1125 07:16:57.020011 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ff5d7-6edd-457b-a3ce-daa78aa19170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf975385c9f583d850c769c4b634f17f0e0b358fee121274a50c149726bf5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://0d44554d44407948ef46ff47a479ac1397fe7161d440f16b47ad78a707d97f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d44554d44407948ef46ff47a479ac1397fe7161d440f16b47ad78a707d97f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:57Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.032033 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.032111 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.032124 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 
07:16:57.032141 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.032178 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:57Z","lastTransitionTime":"2025-11-25T07:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.038728 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:57Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.058814 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:57Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.085013 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:57Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.101756 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4066fa7f0a925be9090ea5c1746c5f49e5e16dbfbaf8855136d7417ba73fb59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:51Z\\\",\\\"message\\\":\\\"2025-11-25T07:16:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e933b4bb-47fc-4136-b291-1892215953d7\\\\n2025-11-25T07:16:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e933b4bb-47fc-4136-b291-1892215953d7 to /host/opt/cni/bin/\\\\n2025-11-25T07:16:06Z [verbose] multus-daemon started\\\\n2025-11-25T07:16:06Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T07:16:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:57Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.114477 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0
d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:57Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.130476 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a
5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:57Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.135005 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.135036 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.135046 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.135061 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.135072 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:57Z","lastTransitionTime":"2025-11-25T07:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.152598 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:57Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.163952 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fb
ead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:57Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.175633 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:57Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.185369 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:57Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.194151 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:16:57Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.219700 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:30Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 07:16:30.855199 6759 obj_retry.go:551] Creating *factory.egressNode crc took: 2.291452ms\\\\nI1125 07:16:30.855237 6759 factory.go:1336] Added *v1.Node event handler 7\\\\nI1125 07:16:30.855283 6759 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1125 07:16:30.855283 6759 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 07:16:30.855300 6759 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 07:16:30.855320 6759 factory.go:656] Stopping watch factory\\\\nI1125 07:16:30.855341 6759 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 07:16:30.855357 6759 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 07:16:30.855674 6759 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1125 07:16:30.855789 6759 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1125 07:16:30.855831 6759 ovnkube.go:599] Stopped ovnkube\\\\nI1125 07:16:30.855858 6759 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 07:16:30.855958 6759 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m5zz6_openshift-ovn-kubernetes(a8785a4c-82ff-4a78-83a0-463e977df530)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a7
9f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:57Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.229473 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed17eb56-5921-4618-8de7-166c01019089\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1448111cb3e3b27389baafd33293fcb690b89e0f54007afba41778c91cb8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2869a4db622ee8d96a52f7c058914b01302bbeac8b81ed67aa9c87f77a7f7d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5604ac4dec090e082d1843bc46f0857aad493c97e1d91208a938e7405333a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:57Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.236808 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.236838 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.236856 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.236873 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.236883 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:57Z","lastTransitionTime":"2025-11-25T07:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.240017 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:57Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.251562 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:57Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.259816 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:16:57Z is after 2025-08-24T17:21:41Z" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.339034 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.339072 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.339080 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.339092 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.339101 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:57Z","lastTransitionTime":"2025-11-25T07:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.442384 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.442450 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.442469 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.442493 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.442510 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:57Z","lastTransitionTime":"2025-11-25T07:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.544674 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.545005 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.545184 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.545352 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.545529 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:57Z","lastTransitionTime":"2025-11-25T07:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.648791 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.648825 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.648834 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.648846 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.648854 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:57Z","lastTransitionTime":"2025-11-25T07:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.751795 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.752126 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.752254 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.752368 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.752489 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:57Z","lastTransitionTime":"2025-11-25T07:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.854822 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.854872 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.854887 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.854935 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.854950 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:57Z","lastTransitionTime":"2025-11-25T07:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.957812 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.957857 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.957869 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.957888 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.957902 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:57Z","lastTransitionTime":"2025-11-25T07:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:57 crc kubenswrapper[5043]: I1125 07:16:57.962354 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:57 crc kubenswrapper[5043]: E1125 07:16:57.962495 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.060091 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.060151 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.060160 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.060176 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.060184 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:58Z","lastTransitionTime":"2025-11-25T07:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.163709 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.163757 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.163767 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.163781 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.163795 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:58Z","lastTransitionTime":"2025-11-25T07:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.267462 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.267509 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.267520 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.267536 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.267548 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:58Z","lastTransitionTime":"2025-11-25T07:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.371130 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.371223 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.371243 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.371847 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.371923 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:58Z","lastTransitionTime":"2025-11-25T07:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.474476 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.474520 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.474535 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.474556 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.474572 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:58Z","lastTransitionTime":"2025-11-25T07:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.578825 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.578892 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.578911 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.578934 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.578950 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:58Z","lastTransitionTime":"2025-11-25T07:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.682475 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.682543 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.682561 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.682585 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.682651 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:58Z","lastTransitionTime":"2025-11-25T07:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.785739 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.785796 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.785815 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.785838 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.785855 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:58Z","lastTransitionTime":"2025-11-25T07:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.889148 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.889293 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.889319 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.889343 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.889364 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:58Z","lastTransitionTime":"2025-11-25T07:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.962347 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.962439 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:16:58 crc kubenswrapper[5043]: E1125 07:16:58.962515 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.962532 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:16:58 crc kubenswrapper[5043]: E1125 07:16:58.962686 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:16:58 crc kubenswrapper[5043]: E1125 07:16:58.962809 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.991426 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.991453 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.991465 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.991480 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:58 crc kubenswrapper[5043]: I1125 07:16:58.991492 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:58Z","lastTransitionTime":"2025-11-25T07:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.094010 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.094045 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.094056 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.094070 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.094082 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:59Z","lastTransitionTime":"2025-11-25T07:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.196866 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.197103 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.197120 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.197145 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.197174 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:59Z","lastTransitionTime":"2025-11-25T07:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.300029 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.300109 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.300132 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.300161 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.300187 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:59Z","lastTransitionTime":"2025-11-25T07:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.404378 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.404448 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.404465 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.404491 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.404510 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:59Z","lastTransitionTime":"2025-11-25T07:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.507956 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.508025 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.508041 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.508065 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.508082 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:59Z","lastTransitionTime":"2025-11-25T07:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.610895 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.610935 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.610948 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.610969 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.610981 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:59Z","lastTransitionTime":"2025-11-25T07:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.714294 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.714355 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.714374 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.714397 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.714414 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:59Z","lastTransitionTime":"2025-11-25T07:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.816111 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.816343 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.816421 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.816483 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.816554 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:59Z","lastTransitionTime":"2025-11-25T07:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.919929 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.920328 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.920464 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.920654 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.920851 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:16:59Z","lastTransitionTime":"2025-11-25T07:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:16:59 crc kubenswrapper[5043]: I1125 07:16:59.961678 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:16:59 crc kubenswrapper[5043]: E1125 07:16:59.962021 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.024996 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.025063 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.025083 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.025107 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.025124 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:00Z","lastTransitionTime":"2025-11-25T07:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.128332 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.128392 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.128414 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.128446 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.128470 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:00Z","lastTransitionTime":"2025-11-25T07:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.231704 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.231949 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.232039 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.232144 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.232287 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:00Z","lastTransitionTime":"2025-11-25T07:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.335681 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.335761 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.335785 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.335820 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.335843 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:00Z","lastTransitionTime":"2025-11-25T07:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.441479 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.441519 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.441535 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.441557 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.441572 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:00Z","lastTransitionTime":"2025-11-25T07:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.545669 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.545738 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.545756 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.545791 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.545810 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:00Z","lastTransitionTime":"2025-11-25T07:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.649123 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.649222 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.649249 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.649285 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.649315 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:00Z","lastTransitionTime":"2025-11-25T07:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.752715 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.752763 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.752771 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.752787 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.752796 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:00Z","lastTransitionTime":"2025-11-25T07:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.805183 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:00 crc kubenswrapper[5043]: E1125 07:17:00.805343 5043 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 07:17:00 crc kubenswrapper[5043]: E1125 07:17:00.805438 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 07:18:04.805414431 +0000 UTC m=+148.973610162 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.855551 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.855708 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.855735 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.855769 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.855791 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:00Z","lastTransitionTime":"2025-11-25T07:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.906585 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.906783 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:00 crc kubenswrapper[5043]: E1125 07:17:00.906857 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:04.906810672 +0000 UTC m=+149.075006433 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.907036 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:00 crc kubenswrapper[5043]: E1125 07:17:00.907066 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 07:17:00 crc kubenswrapper[5043]: E1125 07:17:00.907126 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 07:17:00 crc kubenswrapper[5043]: E1125 07:17:00.907143 5043 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:17:00 crc kubenswrapper[5043]: E1125 07:17:00.907198 5043 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 07:17:00 crc 
kubenswrapper[5043]: I1125 07:17:00.907094 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:00 crc kubenswrapper[5043]: E1125 07:17:00.907236 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 07:18:04.907206932 +0000 UTC m=+149.075402843 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:17:00 crc kubenswrapper[5043]: E1125 07:17:00.907372 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 07:18:04.907342595 +0000 UTC m=+149.075538326 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 07:17:00 crc kubenswrapper[5043]: E1125 07:17:00.907465 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 07:17:00 crc kubenswrapper[5043]: E1125 07:17:00.907507 5043 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 07:17:00 crc kubenswrapper[5043]: E1125 07:17:00.907588 5043 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:17:00 crc kubenswrapper[5043]: E1125 07:17:00.907757 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 07:18:04.907681584 +0000 UTC m=+149.075877515 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.962322 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.962322 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.962514 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:00 crc kubenswrapper[5043]: E1125 07:17:00.962581 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:00 crc kubenswrapper[5043]: E1125 07:17:00.962799 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:00 crc kubenswrapper[5043]: E1125 07:17:00.963011 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.963272 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.963814 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.964019 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.964226 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:00 crc kubenswrapper[5043]: I1125 07:17:00.964380 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:00Z","lastTransitionTime":"2025-11-25T07:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.067912 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.067971 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.067992 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.068014 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.068030 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:01Z","lastTransitionTime":"2025-11-25T07:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.171065 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.171118 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.171133 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.171152 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.171165 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:01Z","lastTransitionTime":"2025-11-25T07:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.276815 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.277117 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.277199 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.277283 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.277387 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:01Z","lastTransitionTime":"2025-11-25T07:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.380218 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.380901 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.381007 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.381095 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.381179 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:01Z","lastTransitionTime":"2025-11-25T07:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.483691 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.483966 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.484056 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.484149 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.484247 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:01Z","lastTransitionTime":"2025-11-25T07:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.586511 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.586556 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.586565 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.586580 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.586592 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:01Z","lastTransitionTime":"2025-11-25T07:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.689699 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.689758 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.689776 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.689799 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.689816 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:01Z","lastTransitionTime":"2025-11-25T07:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.793185 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.793261 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.793279 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.793306 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.793325 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:01Z","lastTransitionTime":"2025-11-25T07:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.896256 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.896296 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.896308 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.896325 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.896336 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:01Z","lastTransitionTime":"2025-11-25T07:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.962396 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:01 crc kubenswrapper[5043]: E1125 07:17:01.962947 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.964329 5043 scope.go:117] "RemoveContainer" containerID="7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.998706 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.998739 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.998750 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.998767 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:01 crc kubenswrapper[5043]: I1125 07:17:01.998777 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:01Z","lastTransitionTime":"2025-11-25T07:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.102252 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.102705 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.102715 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.102731 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.102744 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:02Z","lastTransitionTime":"2025-11-25T07:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.206152 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.206230 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.206252 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.206283 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.206305 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:02Z","lastTransitionTime":"2025-11-25T07:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.310082 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.310118 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.310129 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.310145 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.310156 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:02Z","lastTransitionTime":"2025-11-25T07:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.412806 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.412851 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.412866 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.412886 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.412899 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:02Z","lastTransitionTime":"2025-11-25T07:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.483393 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5zz6_a8785a4c-82ff-4a78-83a0-463e977df530/ovnkube-controller/2.log" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.486510 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerStarted","Data":"37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63"} Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.487199 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.504211 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed17eb56-5921-4618-8de7-166c01019089\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1448111cb3e3b27389baafd33293fcb690b89e0f54007afba41778c91cb8cf\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2869a4db622ee8d96a52f7c058914b01302bbeac8b81ed67aa9c87f77a7f7d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5604ac4dec090e082d1843bc46f0857aad493c97e1d91208a938e7405333a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.515798 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:02 crc 
kubenswrapper[5043]: I1125 07:17:02.515849 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.515863 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.515885 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.515900 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:02Z","lastTransitionTime":"2025-11-25T07:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.520790 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.534201 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.553194 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.571727 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:17:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.590791 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:30Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 07:16:30.855199 6759 obj_retry.go:551] Creating *factory.egressNode crc took: 2.291452ms\\\\nI1125 07:16:30.855237 6759 factory.go:1336] Added *v1.Node event handler 7\\\\nI1125 07:16:30.855283 6759 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1125 07:16:30.855283 6759 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 07:16:30.855300 6759 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 07:16:30.855320 6759 factory.go:656] Stopping watch factory\\\\nI1125 07:16:30.855341 6759 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 07:16:30.855357 6759 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 07:16:30.855674 6759 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1125 07:16:30.855789 6759 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1125 07:16:30.855831 6759 ovnkube.go:599] Stopped ovnkube\\\\nI1125 07:16:30.855858 6759 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 07:16:30.855958 6759 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.601205 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.609931 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.618529 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.618557 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.618568 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.618584 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.618594 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:02Z","lastTransitionTime":"2025-11-25T07:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.620541 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ff5d7-6edd-457b-a3ce-daa78aa19170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf975385c9f583d850c769c4b634f17f0e0b358fee121274a50c149726bf5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d44554d44407948ef46ff47a479ac1397fe7161d440f16b47ad78a707d97f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d44554d44407948ef46ff47a479ac1397fe7161d440f16b47ad78a707d97f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.639452 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.653318 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce932f2-a1f0-4e68-8116-462d043d6a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40de1da3d89294cff4345cbf5cc2a3a08276c1e1a462c4515c78e9bc3123f277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7ec2e20e49766633390410a5fa037a78b4acba719fb663c16e5f99a68842f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t545b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.665248 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e26eab68-d56e-4c83-9888-0a866e549524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:02 crc 
kubenswrapper[5043]: I1125 07:17:02.683397 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50
fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.707244 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/
log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a564
6fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.718581 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fb
ead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.720379 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.720433 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.720446 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.720463 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.720475 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:02Z","lastTransitionTime":"2025-11-25T07:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.736132 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc07289
3abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.751764 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.770105 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4066fa7f0a925be9090ea5c1746c5f49e5e16dbfbaf8855136d7417ba73fb59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:51Z\\\",\\\"message\\\":\\\"2025-11-25T07:16:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e933b4bb-47fc-4136-b291-1892215953d7\\\\n2025-11-25T07:16:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e933b4bb-47fc-4136-b291-1892215953d7 to /host/opt/cni/bin/\\\\n2025-11-25T07:16:06Z [verbose] multus-daemon started\\\\n2025-11-25T07:16:06Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T07:16:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.783274 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0
d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:02Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.822947 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.822993 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.823006 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:02 crc 
kubenswrapper[5043]: I1125 07:17:02.823025 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.823038 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:02Z","lastTransitionTime":"2025-11-25T07:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.926238 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.926280 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.926293 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.926309 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.926321 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:02Z","lastTransitionTime":"2025-11-25T07:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.962162 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.962259 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:02 crc kubenswrapper[5043]: E1125 07:17:02.962511 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:02 crc kubenswrapper[5043]: I1125 07:17:02.962643 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:02 crc kubenswrapper[5043]: E1125 07:17:02.962722 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:02 crc kubenswrapper[5043]: E1125 07:17:02.962875 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.028810 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.028860 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.028876 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.028895 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.028911 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:03Z","lastTransitionTime":"2025-11-25T07:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.131473 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.131527 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.131535 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.131549 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.131558 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:03Z","lastTransitionTime":"2025-11-25T07:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.234202 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.234244 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.234255 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.234273 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.234284 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:03Z","lastTransitionTime":"2025-11-25T07:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.336071 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.336167 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.336179 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.336194 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.336204 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:03Z","lastTransitionTime":"2025-11-25T07:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.439105 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.439166 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.439188 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.439218 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.439242 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:03Z","lastTransitionTime":"2025-11-25T07:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.492694 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5zz6_a8785a4c-82ff-4a78-83a0-463e977df530/ovnkube-controller/3.log" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.493787 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5zz6_a8785a4c-82ff-4a78-83a0-463e977df530/ovnkube-controller/2.log" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.497350 5043 generic.go:334] "Generic (PLEG): container finished" podID="a8785a4c-82ff-4a78-83a0-463e977df530" containerID="37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63" exitCode=1 Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.497417 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerDied","Data":"37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63"} Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.497511 5043 scope.go:117] "RemoveContainer" containerID="7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.498222 5043 scope.go:117] "RemoveContainer" containerID="37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63" Nov 25 07:17:03 crc kubenswrapper[5043]: E1125 07:17:03.498412 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-m5zz6_openshift-ovn-kubernetes(a8785a4c-82ff-4a78-83a0-463e977df530)\"" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.536118 5043 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1a
a922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec6
93b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.542530 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.542568 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.542580 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.542598 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.542631 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:03Z","lastTransitionTime":"2025-11-25T07:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.554058 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.567008 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.584407 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.599118 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4066fa7f0a925be9090ea5c1746c5f49e5e16dbfbaf8855136d7417ba73fb59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:51Z\\\",\\\"message\\\":\\\"2025-11-25T07:16:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e933b4bb-47fc-4136-b291-1892215953d7\\\\n2025-11-25T07:16:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e933b4bb-47fc-4136-b291-1892215953d7 to /host/opt/cni/bin/\\\\n2025-11-25T07:16:06Z [verbose] multus-daemon started\\\\n2025-11-25T07:16:06Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T07:16:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.611031 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0
d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.623907 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a
5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.637933 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed17eb56-5921-4618-8de7-166c01019089\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1448111cb3e3b27389baafd33293fcb690b89e0f54007afba41778c91cb8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2869a4db622ee8d96a52f7c058914b01302bbeac8b81ed67aa9c87f77a7f7d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5604ac4dec090e082d1843bc46f0857aad493c97e1d91208a938e7405333a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b
898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.644956 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.644994 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.645008 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.645029 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:03 crc 
kubenswrapper[5043]: I1125 07:17:03.645043 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:03Z","lastTransitionTime":"2025-11-25T07:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.655463 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.670285 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.683004 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.693985 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:17:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.709388 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7471bf5175965f44ddc7daee05fb76f8bf12a9b77d5ff4c02b6243073f9e9dd7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:30Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 07:16:30.855199 6759 obj_retry.go:551] Creating *factory.egressNode crc took: 2.291452ms\\\\nI1125 07:16:30.855237 6759 factory.go:1336] Added *v1.Node event handler 7\\\\nI1125 07:16:30.855283 6759 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1125 07:16:30.855283 6759 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 07:16:30.855300 6759 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 07:16:30.855320 6759 factory.go:656] Stopping watch factory\\\\nI1125 07:16:30.855341 6759 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 07:16:30.855357 6759 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 07:16:30.855674 6759 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1125 07:16:30.855789 6759 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1125 07:16:30.855831 6759 ovnkube.go:599] Stopped ovnkube\\\\nI1125 07:16:30.855858 6759 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 07:16:30.855958 6759 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:17:03Z\\\",\\\"message\\\":\\\"ct:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1125 07:17:02.968098 7157 ovnkube.go:599] 
Stopped ovnkube\\\\nI1125 07:17:02.967944 7157 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 07:17:02.968197 7157 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 07:17:02.968285 7157 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:17:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09
a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.719591 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.735881 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.746969 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ff5d7-6edd-457b-a3ce-daa78aa19170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf975385c9f583d850c769c4b634f17f0e0b358fee121274a50c149726bf5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d44554d44407948ef46ff47a479ac1397fe7161d440f16b47ad78a707d97f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d44554d44407948ef46ff47a479ac1397fe7161d440f16b47ad78a707d97f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:03 crc 
kubenswrapper[5043]: I1125 07:17:03.748053 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.748114 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.748131 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.748155 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.748170 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:03Z","lastTransitionTime":"2025-11-25T07:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.758456 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.768200 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce932f2-a1f0-4e68-8116-462d043d6a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40de1da3d89294cff4345cbf5cc2a3a08276c1e1a462c4515c78e9bc3123f277\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7ec2e20e49766633390410a5fa037a78b4acba719fb663c16e5f99a68842f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t545b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.776524 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e26eab68-d56e-4c83-9888-0a866e549524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:03Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:03 crc 
kubenswrapper[5043]: I1125 07:17:03.856565 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.856593 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.856631 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.856653 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.856664 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:03Z","lastTransitionTime":"2025-11-25T07:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.959819 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.959887 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.959903 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.959930 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.959948 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:03Z","lastTransitionTime":"2025-11-25T07:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:03 crc kubenswrapper[5043]: I1125 07:17:03.962387 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:03 crc kubenswrapper[5043]: E1125 07:17:03.962638 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.062359 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.062481 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.062508 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.062535 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.062556 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:04Z","lastTransitionTime":"2025-11-25T07:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.164888 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.164929 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.164940 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.164955 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.164966 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:04Z","lastTransitionTime":"2025-11-25T07:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.267042 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.267085 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.267094 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.267109 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.267118 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:04Z","lastTransitionTime":"2025-11-25T07:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.369725 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.369776 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.369793 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.369814 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.369831 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:04Z","lastTransitionTime":"2025-11-25T07:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.472781 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.472855 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.472876 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.472904 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.472923 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:04Z","lastTransitionTime":"2025-11-25T07:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.505108 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5zz6_a8785a4c-82ff-4a78-83a0-463e977df530/ovnkube-controller/3.log" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.510097 5043 scope.go:117] "RemoveContainer" containerID="37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63" Nov 25 07:17:04 crc kubenswrapper[5043]: E1125 07:17:04.510554 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-m5zz6_openshift-ovn-kubernetes(a8785a4c-82ff-4a78-83a0-463e977df530)\"" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.526776 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.538979 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.552957 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ff5d7-6edd-457b-a3ce-daa78aa19170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf975385c9f583d850c769c4b634f17f0e0b358fee121274a50c149726bf5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d44554d44407948ef46ff47a479ac1397fe7161d440f16b47ad78a707d97f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d44554d44407948ef46ff47a479ac1397fe7161d440f16b47ad78a707d97f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:04 crc 
kubenswrapper[5043]: I1125 07:17:04.568997 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.578975 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.579015 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.579028 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.579045 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.579057 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:04Z","lastTransitionTime":"2025-11-25T07:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.584405 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce932f2-a1f0-4e68-8116-462d043d6a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40de1da3d89294cff4345cbf5cc2a3a08276c1e1a462c4515c78e9bc3123f277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7ec2e20e49766633390410a5fa037a78b4acba719fb663c16e5f99a68842f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t545b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.599285 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqj4m" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e26eab68-d56e-4c83-9888-0a866e549524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:04 crc 
kubenswrapper[5043]: I1125 07:17:04.623406 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.637658 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.652029 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.669055 5043 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.681982 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.682018 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.682028 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.682043 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.682055 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:04Z","lastTransitionTime":"2025-11-25T07:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.688277 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4066fa7f0a925be9090ea5c1746c5f49e5e16dbfbaf8855136d7417ba73fb59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:51Z\\\",\\\"message\\\":\\\"2025-11-25T07:16:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ 
to /host/opt/cni/bin/upgrade_e933b4bb-47fc-4136-b291-1892215953d7\\\\n2025-11-25T07:16:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e933b4bb-47fc-4136-b291-1892215953d7 to /host/opt/cni/bin/\\\\n2025-11-25T07:16:06Z [verbose] multus-daemon started\\\\n2025-11-25T07:16:06Z [verbose] Readiness Indicator file check\\\\n2025-11-25T07:16:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/mult
us.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.701640 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0
d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.720695 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a
5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.736139 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed17eb56-5921-4618-8de7-166c01019089\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1448111cb3e3b27389baafd33293fcb690b89e0f54007afba41778c91cb8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2869a4db622ee8d96a52f7c058914b01302bbeac8b81ed67aa9c87f77a7f7d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5604ac4dec090e082d1843bc46f0857aad493c97e1d91208a938e7405333a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b
898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.755902 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.772083 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.785074 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.785136 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.785155 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.785181 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.785200 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:04Z","lastTransitionTime":"2025-11-25T07:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.791301 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.807705 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:17:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.838885 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:17:03Z\\\",\\\"message\\\":\\\"ct:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, 
Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1125 07:17:02.968098 7157 ovnkube.go:599] Stopped ovnkube\\\\nI1125 07:17:02.967944 7157 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 07:17:02.968197 7157 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 07:17:02.968285 7157 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:17:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m5zz6_openshift-ovn-kubernetes(a8785a4c-82ff-4a78-83a0-463e977df530)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a7
9f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.888371 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.888399 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.888411 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.888429 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.888439 5043 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:04Z","lastTransitionTime":"2025-11-25T07:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.892268 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.892342 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.892365 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.892396 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.892420 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:04Z","lastTransitionTime":"2025-11-25T07:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:04 crc kubenswrapper[5043]: E1125 07:17:04.910880 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.916691 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.916754 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.916770 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.916795 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.916810 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:04Z","lastTransitionTime":"2025-11-25T07:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:04 crc kubenswrapper[5043]: E1125 07:17:04.934191 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.945426 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.945505 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.945533 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.945564 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.945588 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:04Z","lastTransitionTime":"2025-11-25T07:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.961942 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.961932 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.962086 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:04 crc kubenswrapper[5043]: E1125 07:17:04.963050 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:04 crc kubenswrapper[5043]: E1125 07:17:04.963110 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:04 crc kubenswrapper[5043]: E1125 07:17:04.962883 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:04 crc kubenswrapper[5043]: E1125 07:17:04.963058 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.967187 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.967219 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.967231 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.967251 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.967264 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:04Z","lastTransitionTime":"2025-11-25T07:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:04 crc kubenswrapper[5043]: E1125 07:17:04.982995 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:04Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.987217 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.987258 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.987268 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.987285 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:04 crc kubenswrapper[5043]: I1125 07:17:04.987297 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:04Z","lastTransitionTime":"2025-11-25T07:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:05 crc kubenswrapper[5043]: E1125 07:17:05.003334 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:05Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:05 crc kubenswrapper[5043]: E1125 07:17:05.003446 5043 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.004878 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.005070 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.005202 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.005352 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.005473 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:05Z","lastTransitionTime":"2025-11-25T07:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.108413 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.108459 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.108470 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.108487 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.108499 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:05Z","lastTransitionTime":"2025-11-25T07:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.211501 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.211537 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.211548 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.211565 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.211575 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:05Z","lastTransitionTime":"2025-11-25T07:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.313740 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.313791 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.313801 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.313819 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.313830 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:05Z","lastTransitionTime":"2025-11-25T07:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.417039 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.417081 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.417089 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.417103 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.417114 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:05Z","lastTransitionTime":"2025-11-25T07:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.519761 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.519833 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.519855 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.519886 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.519910 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:05Z","lastTransitionTime":"2025-11-25T07:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.622978 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.623055 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.623075 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.623156 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.623220 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:05Z","lastTransitionTime":"2025-11-25T07:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.725967 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.726004 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.726014 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.726030 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.726040 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:05Z","lastTransitionTime":"2025-11-25T07:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.828272 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.828311 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.828323 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.828340 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.828351 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:05Z","lastTransitionTime":"2025-11-25T07:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.930575 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.930650 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.930666 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.930683 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.930696 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:05Z","lastTransitionTime":"2025-11-25T07:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:05 crc kubenswrapper[5043]: I1125 07:17:05.962779 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:05 crc kubenswrapper[5043]: E1125 07:17:05.963000 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.034359 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.034400 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.034410 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.034426 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.034437 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:06Z","lastTransitionTime":"2025-11-25T07:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.137660 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.137734 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.137759 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.137789 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.137812 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:06Z","lastTransitionTime":"2025-11-25T07:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.241202 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.241277 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.241323 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.241350 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.241370 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:06Z","lastTransitionTime":"2025-11-25T07:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.345078 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.345147 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.345164 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.345188 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.345206 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:06Z","lastTransitionTime":"2025-11-25T07:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.449329 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.449374 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.449388 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.449408 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.449421 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:06Z","lastTransitionTime":"2025-11-25T07:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.552553 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.552717 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.552744 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.552774 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.552808 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:06Z","lastTransitionTime":"2025-11-25T07:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.656479 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.656549 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.656572 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.656636 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.656662 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:06Z","lastTransitionTime":"2025-11-25T07:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.759772 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.759841 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.759858 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.759891 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.759909 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:06Z","lastTransitionTime":"2025-11-25T07:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.863064 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.863138 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.863163 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.863191 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.863211 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:06Z","lastTransitionTime":"2025-11-25T07:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.962177 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.962302 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.962372 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:06 crc kubenswrapper[5043]: E1125 07:17:06.963080 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:06 crc kubenswrapper[5043]: E1125 07:17:06.963400 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:06 crc kubenswrapper[5043]: E1125 07:17:06.963497 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.966255 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.966299 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.966313 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.966332 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.966375 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:06Z","lastTransitionTime":"2025-11-25T07:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.977458 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ff5d7-6edd-457b-a3ce-daa78aa19170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf975385c9f583d850c769c4b634f17f0e0b358fee121274a50c149726bf5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d44554d44407948ef46ff47a479ac1397fe7161d440f16b47ad78a707d97f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d44554d44407948ef46ff47a479ac1397fe7161d440f16b47ad78a707d97f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:06Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:06 crc kubenswrapper[5043]: I1125 07:17:06.993288 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:06Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.012149 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce932f2-a1f0-4e68-8116-462d043d6a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40de1da3d89294cff4345cbf5cc2a3a08276c1e1a462c4515c78e9bc3123f277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7ec2e20e49766633390410a5fa037a78b4acba719fb663c16e5f99a68842f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t545b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.029273 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e26eab68-d56e-4c83-9888-0a866e549524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:07 crc 
kubenswrapper[5043]: I1125 07:17:07.047204 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.068899 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.068956 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.068978 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.069007 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.069028 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:07Z","lastTransitionTime":"2025-11-25T07:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.071318 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a
5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.102860 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd3935
3da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.123464 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fb
ead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.146247 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.162977 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.171380 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.171422 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.171430 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 
07:17:07.171444 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.171453 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:07Z","lastTransitionTime":"2025-11-25T07:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.181401 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4066fa7f0a925be9090ea5c1746c5f49e5e16dbfbaf8855136d7417ba73fb59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:51Z\\\",\\\"message\\\":\\\"2025-11-25T07:16:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e933b4bb-47fc-4136-b291-1892215953d7\\\\n2025-11-25T07:16:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e933b4bb-47fc-4136-b291-1892215953d7 to /host/opt/cni/bin/\\\\n2025-11-25T07:16:06Z [verbose] multus-daemon started\\\\n2025-11-25T07:16:06Z [verbose] Readiness Indicator file check\\\\n2025-11-25T07:16:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.210251 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:17:03Z\\\",\\\"message\\\":\\\"ct:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, 
Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1125 07:17:02.968098 7157 ovnkube.go:599] Stopped ovnkube\\\\nI1125 07:17:02.967944 7157 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 07:17:02.968197 7157 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 07:17:02.968285 7157 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:17:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m5zz6_openshift-ovn-kubernetes(a8785a4c-82ff-4a78-83a0-463e977df530)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a7
9f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.226191 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed17eb56-5921-4618-8de7-166c01019089\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1448111cb3e3b27389baafd33293fcb690b89e0f54007afba41778c91cb8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2869a4db622ee8d96a52f7c058914b01302bbeac8b81ed67aa9c87f77a7f7d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5604ac4dec090e082d1843bc46f0857aad493c97e1d91208a938e7405333a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.239946 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.261646 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.273744 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.273794 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.273809 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.273832 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.273849 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:07Z","lastTransitionTime":"2025-11-25T07:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.281570 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.297902 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:17:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.315801 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.326542 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:07Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.377008 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.377103 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.377117 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.377133 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.377144 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:07Z","lastTransitionTime":"2025-11-25T07:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.479621 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.479675 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.479693 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.479715 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.479730 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:07Z","lastTransitionTime":"2025-11-25T07:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.582098 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.582155 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.582165 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.582177 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.582185 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:07Z","lastTransitionTime":"2025-11-25T07:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.688446 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.688873 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.689384 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.689424 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.689448 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:07Z","lastTransitionTime":"2025-11-25T07:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.792795 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.792832 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.792841 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.792856 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.792865 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:07Z","lastTransitionTime":"2025-11-25T07:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.896387 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.896750 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.896766 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.896785 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.896797 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:07Z","lastTransitionTime":"2025-11-25T07:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.962651 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:07 crc kubenswrapper[5043]: E1125 07:17:07.962865 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.999161 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.999192 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.999202 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.999217 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:07 crc kubenswrapper[5043]: I1125 07:17:07.999228 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:07Z","lastTransitionTime":"2025-11-25T07:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.102274 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.102335 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.102348 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.102367 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.102380 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:08Z","lastTransitionTime":"2025-11-25T07:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.205157 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.205198 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.205210 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.205228 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.205241 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:08Z","lastTransitionTime":"2025-11-25T07:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.308018 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.308052 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.308076 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.308096 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.308109 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:08Z","lastTransitionTime":"2025-11-25T07:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.411018 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.411078 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.411113 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.411146 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.411167 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:08Z","lastTransitionTime":"2025-11-25T07:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.514172 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.514237 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.514247 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.514263 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.514274 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:08Z","lastTransitionTime":"2025-11-25T07:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.617292 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.617344 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.617356 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.617375 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.617386 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:08Z","lastTransitionTime":"2025-11-25T07:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.720139 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.720199 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.720218 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.720244 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.720261 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:08Z","lastTransitionTime":"2025-11-25T07:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.822777 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.822841 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.822863 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.822932 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.822961 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:08Z","lastTransitionTime":"2025-11-25T07:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.927315 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.927397 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.927421 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.927451 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.927470 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:08Z","lastTransitionTime":"2025-11-25T07:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.962480 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.962575 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:08 crc kubenswrapper[5043]: I1125 07:17:08.962631 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:08 crc kubenswrapper[5043]: E1125 07:17:08.962794 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:08 crc kubenswrapper[5043]: E1125 07:17:08.962918 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:08 crc kubenswrapper[5043]: E1125 07:17:08.963038 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.030109 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.030185 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.030204 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.030229 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.030250 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:09Z","lastTransitionTime":"2025-11-25T07:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.131991 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.132065 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.132085 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.132103 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.132117 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:09Z","lastTransitionTime":"2025-11-25T07:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.235567 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.235626 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.235637 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.235653 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.235663 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:09Z","lastTransitionTime":"2025-11-25T07:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.338510 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.338654 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.338678 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.338707 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.338733 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:09Z","lastTransitionTime":"2025-11-25T07:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.441314 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.441382 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.441393 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.441440 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.441453 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:09Z","lastTransitionTime":"2025-11-25T07:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.543664 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.543732 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.543749 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.543772 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.543788 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:09Z","lastTransitionTime":"2025-11-25T07:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.646645 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.646732 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.646773 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.646808 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.646830 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:09Z","lastTransitionTime":"2025-11-25T07:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.750579 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.750644 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.750654 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.750713 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.750727 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:09Z","lastTransitionTime":"2025-11-25T07:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.854202 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.854275 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.854308 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.854336 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.854353 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:09Z","lastTransitionTime":"2025-11-25T07:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.957876 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.957921 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.957935 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.957962 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.957972 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:09Z","lastTransitionTime":"2025-11-25T07:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:09 crc kubenswrapper[5043]: I1125 07:17:09.962094 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:09 crc kubenswrapper[5043]: E1125 07:17:09.962218 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.060776 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.060810 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.060820 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.060837 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.060848 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:10Z","lastTransitionTime":"2025-11-25T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.163901 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.163960 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.163976 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.164001 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.164017 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:10Z","lastTransitionTime":"2025-11-25T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.267772 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.267841 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.267858 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.267881 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.267899 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:10Z","lastTransitionTime":"2025-11-25T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.370981 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.371109 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.371130 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.371160 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.371183 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:10Z","lastTransitionTime":"2025-11-25T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.474526 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.474573 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.474583 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.474598 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.474625 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:10Z","lastTransitionTime":"2025-11-25T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.577203 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.577239 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.577251 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.577267 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.577277 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:10Z","lastTransitionTime":"2025-11-25T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.680089 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.680155 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.680176 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.680203 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.680224 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:10Z","lastTransitionTime":"2025-11-25T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.787177 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.787252 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.787275 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.787309 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.787343 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:10Z","lastTransitionTime":"2025-11-25T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.890834 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.890886 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.890903 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.890924 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.890940 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:10Z","lastTransitionTime":"2025-11-25T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.962579 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.962652 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.962599 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:10 crc kubenswrapper[5043]: E1125 07:17:10.962780 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:10 crc kubenswrapper[5043]: E1125 07:17:10.962846 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:10 crc kubenswrapper[5043]: E1125 07:17:10.962911 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.994302 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.994343 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.994354 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.994370 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:10 crc kubenswrapper[5043]: I1125 07:17:10.994381 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:10Z","lastTransitionTime":"2025-11-25T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.097464 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.097533 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.097556 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.097585 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.097649 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:11Z","lastTransitionTime":"2025-11-25T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.200133 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.200205 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.200227 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.200255 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.200277 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:11Z","lastTransitionTime":"2025-11-25T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.304438 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.304495 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.304511 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.304535 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.304559 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:11Z","lastTransitionTime":"2025-11-25T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.407500 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.407573 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.407597 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.407694 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.407720 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:11Z","lastTransitionTime":"2025-11-25T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.510646 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.510716 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.510730 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.510758 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.510770 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:11Z","lastTransitionTime":"2025-11-25T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.613266 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.613313 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.613324 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.613341 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.613352 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:11Z","lastTransitionTime":"2025-11-25T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.716424 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.716486 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.716503 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.716528 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.716545 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:11Z","lastTransitionTime":"2025-11-25T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.819537 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.819593 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.819642 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.819666 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.819685 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:11Z","lastTransitionTime":"2025-11-25T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.923147 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.923208 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.923220 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.923243 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.923256 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:11Z","lastTransitionTime":"2025-11-25T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:11 crc kubenswrapper[5043]: I1125 07:17:11.962463 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:11 crc kubenswrapper[5043]: E1125 07:17:11.962676 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.025553 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.025631 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.025645 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.025661 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.025676 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:12Z","lastTransitionTime":"2025-11-25T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.129317 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.129386 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.129406 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.129431 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.129449 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:12Z","lastTransitionTime":"2025-11-25T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.231888 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.231925 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.231935 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.231949 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.231959 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:12Z","lastTransitionTime":"2025-11-25T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.334140 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.334175 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.334186 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.334203 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.334216 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:12Z","lastTransitionTime":"2025-11-25T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.436714 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.436759 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.436770 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.436788 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.436799 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:12Z","lastTransitionTime":"2025-11-25T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.540424 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.540499 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.540523 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.540553 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.540573 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:12Z","lastTransitionTime":"2025-11-25T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.644103 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.644157 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.644174 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.644193 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.644207 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:12Z","lastTransitionTime":"2025-11-25T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.747194 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.747242 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.747255 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.747270 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.747280 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:12Z","lastTransitionTime":"2025-11-25T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.850583 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.850657 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.850669 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.850695 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.850710 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:12Z","lastTransitionTime":"2025-11-25T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.953452 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.953851 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.954090 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.954301 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.954510 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:12Z","lastTransitionTime":"2025-11-25T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.962251 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:12 crc kubenswrapper[5043]: E1125 07:17:12.962340 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.962373 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:12 crc kubenswrapper[5043]: E1125 07:17:12.962414 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:12 crc kubenswrapper[5043]: I1125 07:17:12.962437 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:12 crc kubenswrapper[5043]: E1125 07:17:12.962470 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.057952 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.057988 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.058004 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.058016 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.058026 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:13Z","lastTransitionTime":"2025-11-25T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.160332 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.160368 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.160379 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.160396 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.160407 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:13Z","lastTransitionTime":"2025-11-25T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.263422 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.263509 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.263534 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.263569 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.263592 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:13Z","lastTransitionTime":"2025-11-25T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.366874 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.367194 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.367374 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.367572 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.367850 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:13Z","lastTransitionTime":"2025-11-25T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.472474 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.472522 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.472539 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.472562 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.472578 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:13Z","lastTransitionTime":"2025-11-25T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.575736 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.575825 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.575866 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.575898 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.575921 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:13Z","lastTransitionTime":"2025-11-25T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.678530 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.678582 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.678590 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.678621 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.678631 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:13Z","lastTransitionTime":"2025-11-25T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.781713 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.781775 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.781792 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.781810 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.781823 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:13Z","lastTransitionTime":"2025-11-25T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.883766 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.883795 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.883804 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.883817 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.883825 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:13Z","lastTransitionTime":"2025-11-25T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.961889 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:13 crc kubenswrapper[5043]: E1125 07:17:13.962045 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.986480 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.986522 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.986536 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.986555 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:13 crc kubenswrapper[5043]: I1125 07:17:13.986570 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:13Z","lastTransitionTime":"2025-11-25T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.089259 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.089315 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.089332 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.089362 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.089380 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:14Z","lastTransitionTime":"2025-11-25T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.192645 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.192703 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.192724 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.192748 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.192765 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:14Z","lastTransitionTime":"2025-11-25T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.297123 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.297463 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.297652 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.297826 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.297960 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:14Z","lastTransitionTime":"2025-11-25T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.401658 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.401914 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.402014 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.402091 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.402152 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:14Z","lastTransitionTime":"2025-11-25T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.504473 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.504516 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.504529 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.504546 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.504586 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:14Z","lastTransitionTime":"2025-11-25T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.607489 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.607532 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.607547 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.607563 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.607575 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:14Z","lastTransitionTime":"2025-11-25T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.710354 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.710390 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.710398 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.710412 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.710419 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:14Z","lastTransitionTime":"2025-11-25T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.813050 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.813108 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.813125 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.813146 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.813161 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:14Z","lastTransitionTime":"2025-11-25T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.915066 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.915124 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.915139 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.915160 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.915174 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:14Z","lastTransitionTime":"2025-11-25T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.962688 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.962719 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:14 crc kubenswrapper[5043]: E1125 07:17:14.962813 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:14 crc kubenswrapper[5043]: I1125 07:17:14.962958 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:14 crc kubenswrapper[5043]: E1125 07:17:14.963008 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:14 crc kubenswrapper[5043]: E1125 07:17:14.963111 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.017900 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.017944 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.017952 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.017966 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.017975 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:15Z","lastTransitionTime":"2025-11-25T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.120381 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.120446 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.120457 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.120476 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.120488 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:15Z","lastTransitionTime":"2025-11-25T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.159146 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.159286 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.159307 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.159334 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.159386 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:15Z","lastTransitionTime":"2025-11-25T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:15 crc kubenswrapper[5043]: E1125 07:17:15.173436 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.177066 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.177137 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.177163 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.177192 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.177211 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:15Z","lastTransitionTime":"2025-11-25T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:15 crc kubenswrapper[5043]: E1125 07:17:15.192148 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.196512 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.196551 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.196564 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.196585 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.196598 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:15Z","lastTransitionTime":"2025-11-25T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:15 crc kubenswrapper[5043]: E1125 07:17:15.211054 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.215890 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.215959 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.215975 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.215994 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.216008 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:15Z","lastTransitionTime":"2025-11-25T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:15 crc kubenswrapper[5043]: E1125 07:17:15.233444 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.237707 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.237744 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.237755 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.237770 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.237782 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:15Z","lastTransitionTime":"2025-11-25T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:15 crc kubenswrapper[5043]: E1125 07:17:15.253745 5043 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T07:17:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7373f16d-4ee4-443d-bb12-9af926cc5ac2\\\",\\\"systemUUID\\\":\\\"3726d918-ef60-45bd-8631-a23c2ab917f8\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:15Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:15 crc kubenswrapper[5043]: E1125 07:17:15.253884 5043 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.255453 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.255504 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.255517 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.255535 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.255546 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:15Z","lastTransitionTime":"2025-11-25T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.362624 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.362668 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.362681 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.362700 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.362711 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:15Z","lastTransitionTime":"2025-11-25T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.464694 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.464741 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.464755 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.464776 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.464793 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:15Z","lastTransitionTime":"2025-11-25T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.567129 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.567182 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.567198 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.567220 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.567243 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:15Z","lastTransitionTime":"2025-11-25T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.669970 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.670017 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.670029 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.670047 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.670059 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:15Z","lastTransitionTime":"2025-11-25T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.773248 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.773369 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.773391 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.773423 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.773445 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:15Z","lastTransitionTime":"2025-11-25T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.877376 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.877444 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.877464 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.877493 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.877513 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:15Z","lastTransitionTime":"2025-11-25T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.962317 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:15 crc kubenswrapper[5043]: E1125 07:17:15.962764 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.963178 5043 scope.go:117] "RemoveContainer" containerID="37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63" Nov 25 07:17:15 crc kubenswrapper[5043]: E1125 07:17:15.963415 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-m5zz6_openshift-ovn-kubernetes(a8785a4c-82ff-4a78-83a0-463e977df530)\"" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.980700 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.980741 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.980752 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.980767 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:15 crc kubenswrapper[5043]: I1125 07:17:15.980777 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:15Z","lastTransitionTime":"2025-11-25T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.083684 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.083745 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.083761 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.083785 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.083802 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:16Z","lastTransitionTime":"2025-11-25T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.186446 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.186489 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.186514 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.186538 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.186556 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:16Z","lastTransitionTime":"2025-11-25T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.289714 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.289754 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.289764 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.289782 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.289793 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:16Z","lastTransitionTime":"2025-11-25T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.392142 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.392200 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.392212 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.392230 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.392243 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:16Z","lastTransitionTime":"2025-11-25T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.495395 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.495460 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.495482 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.495529 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.495555 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:16Z","lastTransitionTime":"2025-11-25T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.599088 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.599154 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.599173 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.599201 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.599219 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:16Z","lastTransitionTime":"2025-11-25T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.702647 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.702760 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.702785 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.702811 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.702832 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:16Z","lastTransitionTime":"2025-11-25T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.806062 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.806121 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.806133 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.806151 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.806166 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:16Z","lastTransitionTime":"2025-11-25T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.909491 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.909536 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.909546 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.909566 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.909576 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:16Z","lastTransitionTime":"2025-11-25T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.961947 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.961957 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:16 crc kubenswrapper[5043]: E1125 07:17:16.962396 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.962779 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:16 crc kubenswrapper[5043]: E1125 07:17:16.962901 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:16 crc kubenswrapper[5043]: E1125 07:17:16.963045 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:16 crc kubenswrapper[5043]: I1125 07:17:16.984505 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:16Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.000102 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-btxpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da1f87c-5e0d-4f95-8cf0-b59a7c2273ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f678e1e553d8009b964ec86e1f720b8feefb0ecad0cb5d027bbb23f0495d240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jwqwt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-btxpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:16Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.012812 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.012852 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.012862 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.012893 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.012903 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:17Z","lastTransitionTime":"2025-11-25T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.020936 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e94ff5d7-6edd-457b-a3ce-daa78aa19170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://faaf975385c9f583d850c769c4b634f17f0e0b358fee121274a50c149726bf5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d44554d44407948ef46ff47a479ac1397fe7161d440f16b47ad78a707d97f58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d44554d44407948ef46ff47a479ac1397fe7161d440f16b47ad78a707d97f58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.038093 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxj72" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07b25380-d8e4-4e3a-9f4c-01754e8b72f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c62ef322e0bb72dccb28149335a3bc7a3a050263ffade64a006bc087579b11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnwg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxj72\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.054908 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ce932f2-a1f0-4e68-8116-462d043d6a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40de1da3d89294cff4345cbf5cc2a3a08276c1e1a462c4515c78e9bc3123f277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7ec2e20e49766633390410a5fa037a78b4acba719fb663c16e5f99a68842f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqj7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-t545b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.069570 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xqj4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e26eab68-d56e-4c83-9888-0a866e549524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbw7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xqj4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:17 crc 
kubenswrapper[5043]: I1125 07:17:17.092691 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0570cf4f-193c-46d2-9111-c8e894566f9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87cd284e540fd69567151f87c10d56e36db0418d35b0e76275ee4c89cd5ef0e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://64ade3da94b38be558719f0f64064ffdcc89644412681389e6a1aa922256ee5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc65f28237ba1b07f7fd63fc2d636485a552796784da33fc06f780c41601fbcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://07c4cfbfea3dd39353da0162d738358fc0434b9d0f5fdc5abb47ae1202d7c243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7adb8fff5b8ee7969f5875d8bf2fc7645224a5c5cc18d39357136747ce0dc418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cebdaf0d8df87a5f942e6ce70eec693b101675639bd682b12e988568921e7cc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d12141ab2bfb05589d1eee715dbb92566225848ddc407bf9d61614ecaf6767c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a27a83fa937dacc18f3a277f83801c3a15c92a903df21f270ec5ac06797febf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.112746 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26bc8613-79e6-42d4-b2ae-fe1c78a750fe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"message\\\":\\\"le observer\\\\nW1125 07:15:56.767480 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 07:15:56.767581 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 07:15:56.768395 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3386916585/tls.crt::/tmp/serving-cert-3386916585/tls.key\\\\\\\"\\\\nI1125 07:15:57.175007 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 07:15:57.181480 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 07:15:57.181501 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 07:15:57.181519 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI1125 07:15:57.181523 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 07:15:57.187396 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 07:15:57.187426 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187431 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 07:15:57.187436 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 07:15:57.187438 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 07:15:57.187441 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 07:15:57.187444 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 07:15:57.187582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 07:15:57.189193 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.116240 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.116291 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.116304 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.116326 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.116339 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:17Z","lastTransitionTime":"2025-11-25T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.131503 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5e60da4-d69e-4965-a7de-fc418d8c7e35\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://459e630834d6eb07547fb2984f5afea4c8217fcf87707512b5068a76f177529b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d9cc07289
3abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc465e0d8039aa52f8b2d6024f50a114bdc18cb1ffdf0889d0bdcc26e734059c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edae1215e66e3b84f899894b96ee37924b624bf1ef19662aeebec70dfa085c9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.147035 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.176214 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5gnzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aa0c167-9335-44ce-975c-715ce1f43383\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4066fa7f0a925be9090ea5c1746c5f49e5e16dbfbaf8855136d7417ba73fb59c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:16:51Z\\\",\\\"message\\\":\\\"2025-11-25T07:16:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e933b4bb-47fc-4136-b291-1892215953d7\\\\n2025-11-25T07:16:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e933b4bb-47fc-4136-b291-1892215953d7 to /host/opt/cni/bin/\\\\n2025-11-25T07:16:06Z [verbose] multus-daemon started\\\\n2025-11-25T07:16:06Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T07:16:51Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zn4bw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5gnzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.191441 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"707b7a7f-020e-4719-9db9-7d1f3294b25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84bc3250639221b47ae71ae76fc13948c46c23aec40830453447972808066df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0
d43732e3dd92245a58bc4fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-68xp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jzwnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.208709 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01b1c815-0612-4834-85a8-4662893adcc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90d2930b8d1151527586f39a50b0a6d152fb745e1c840b4c03f7f24e0cad4f24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e7621d9164c52c15ef41ed35f42b4ff697ff8823e4bb52765b9d5b0ec20f22e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaa3ad9c132a13f686c23c228a50fb24b199dafebc9b215c53ba2c91e651582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b84f6068b14a84a569edff07a89bfda2aeeae76dc90d1c5c35f18b9cba96e54e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf43a
5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf43a5343cecc588ed513177121e8fd454f8ee744c3f25ae5b71d325224d7f9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88fc153fe8db6e17593836a863d845c047a612a7f7502a84718d019ad7e1680c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:11Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9facf873ab27edefea39cd30abd54a16fedfe299cc205a9e1ce27fa3fb21009b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2q4w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pbsfz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.219263 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.219347 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.219373 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.219407 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.219432 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:17Z","lastTransitionTime":"2025-11-25T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.221904 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed17eb56-5921-4618-8de7-166c01019089\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb1448111cb3e3b27389baafd33293fcb690b89e0f54007afba41778c91cb8cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2869a4db622ee8d96a52f7c058914
b01302bbeac8b81ed67aa9c87f77a7f7d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5604ac4dec090e082d1843bc46f0857aad493c97e1d91208a938e7405333a67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f14eee00c430ac65346b6ee7b898a68ea580bd40bab0274f21e518426e05ad96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:15:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:15:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:15:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.236346 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:56Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.254562 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbfdbaaddbcd26e54791b948d4ae17c8cf0ae1f11f909588538f99f84008fd71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.268007 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:15:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fe07f3cf32b5beb270984eb967f10c81e40fb3af84f0074bb1f422fc595d13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://070a45e2ce84e82106f87e286b148b68a667a5a488628a5bc859721bef84aaa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:15:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.278635 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a582a34a04bdf8c2a52915b0dd37ca20318575f642214076babf85dcd43d6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T07:17:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.299818 5043 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8785a4c-82ff-4a78-83a0-463e977df530\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T07:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:06Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T07:17:03Z\\\",\\\"message\\\":\\\"ct:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, 
Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1125 07:17:02.968098 7157 ovnkube.go:599] Stopped ovnkube\\\\nI1125 07:17:02.967944 7157 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1125 07:17:02.968197 7157 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 07:17:02.968285 7157 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T07:17:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-m5zz6_openshift-ovn-kubernetes(a8785a4c-82ff-4a78-83a0-463e977df530)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T07:16:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bda42de7517efe6a7
9f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T07:16:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T07:16:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrzpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T07:16:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-m5zz6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T07:17:17Z is after 2025-08-24T17:21:41Z" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.322672 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.322717 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.322726 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.322741 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.322751 5043 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:17Z","lastTransitionTime":"2025-11-25T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.426522 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.426593 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.426660 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.426686 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.426703 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:17Z","lastTransitionTime":"2025-11-25T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.530257 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.530316 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.530328 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.530346 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.530359 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:17Z","lastTransitionTime":"2025-11-25T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.633255 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.633329 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.633357 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.633390 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.633414 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:17Z","lastTransitionTime":"2025-11-25T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.736792 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.736831 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.736838 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.736852 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.736862 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:17Z","lastTransitionTime":"2025-11-25T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.840872 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.840934 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.840956 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.840986 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.841008 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:17Z","lastTransitionTime":"2025-11-25T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.944722 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.944787 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.944804 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.944829 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.944850 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:17Z","lastTransitionTime":"2025-11-25T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:17 crc kubenswrapper[5043]: I1125 07:17:17.962519 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:17 crc kubenswrapper[5043]: E1125 07:17:17.962778 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.047971 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.048068 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.048097 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.048128 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.048151 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:18Z","lastTransitionTime":"2025-11-25T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.151591 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.151680 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.151699 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.151723 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.151744 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:18Z","lastTransitionTime":"2025-11-25T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.255475 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.255540 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.255557 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.255584 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.255625 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:18Z","lastTransitionTime":"2025-11-25T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.357706 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.357769 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.357786 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.357812 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.357831 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:18Z","lastTransitionTime":"2025-11-25T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.461858 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.461933 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.461944 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.461965 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.461979 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:18Z","lastTransitionTime":"2025-11-25T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.564965 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.565035 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.565053 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.565079 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.565097 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:18Z","lastTransitionTime":"2025-11-25T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.668082 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.668178 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.668221 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.668253 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.668276 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:18Z","lastTransitionTime":"2025-11-25T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.771652 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.771728 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.771750 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.771779 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.771800 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:18Z","lastTransitionTime":"2025-11-25T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.874844 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.874915 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.874949 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.874978 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.875003 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:18Z","lastTransitionTime":"2025-11-25T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.962767 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.963057 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:18 crc kubenswrapper[5043]: E1125 07:17:18.963145 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.963230 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:18 crc kubenswrapper[5043]: E1125 07:17:18.963311 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:18 crc kubenswrapper[5043]: E1125 07:17:18.963402 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.977224 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.977276 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.977291 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.977310 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:18 crc kubenswrapper[5043]: I1125 07:17:18.977326 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:18Z","lastTransitionTime":"2025-11-25T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.080981 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.081067 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.081091 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.081136 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.081160 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:19Z","lastTransitionTime":"2025-11-25T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.185281 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.185339 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.185357 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.185380 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.185398 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:19Z","lastTransitionTime":"2025-11-25T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.288560 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.288684 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.288712 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.288739 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.288758 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:19Z","lastTransitionTime":"2025-11-25T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.392553 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.392662 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.392682 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.392734 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.392755 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:19Z","lastTransitionTime":"2025-11-25T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.495300 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.495361 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.495399 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.495429 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.495451 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:19Z","lastTransitionTime":"2025-11-25T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.597914 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.597970 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.597993 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.598021 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.598041 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:19Z","lastTransitionTime":"2025-11-25T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.701190 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.701251 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.701270 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.701294 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.701311 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:19Z","lastTransitionTime":"2025-11-25T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.804500 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.804563 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.804586 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.804651 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.804679 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:19Z","lastTransitionTime":"2025-11-25T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.907549 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.907694 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.907728 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.907761 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.907781 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:19Z","lastTransitionTime":"2025-11-25T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:19 crc kubenswrapper[5043]: I1125 07:17:19.961858 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:19 crc kubenswrapper[5043]: E1125 07:17:19.962064 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.011105 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.011150 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.011161 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.011177 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.011187 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:20Z","lastTransitionTime":"2025-11-25T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.113650 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.113719 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.113738 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.113769 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.113792 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:20Z","lastTransitionTime":"2025-11-25T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.217276 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.217337 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.217359 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.217384 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.217403 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:20Z","lastTransitionTime":"2025-11-25T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.320206 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.320282 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.320347 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.320370 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.320390 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:20Z","lastTransitionTime":"2025-11-25T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.424243 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.424315 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.424340 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.424375 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.424400 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:20Z","lastTransitionTime":"2025-11-25T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.527064 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.527125 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.527142 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.527169 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.527188 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:20Z","lastTransitionTime":"2025-11-25T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.630031 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.630167 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.630188 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.630214 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.630234 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:20Z","lastTransitionTime":"2025-11-25T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.734198 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.734274 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.734291 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.734321 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.734343 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:20Z","lastTransitionTime":"2025-11-25T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.837083 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.837119 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.837128 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.837144 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.837154 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:20Z","lastTransitionTime":"2025-11-25T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.940487 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.940536 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.940547 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.940565 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.940577 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:20Z","lastTransitionTime":"2025-11-25T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.962290 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.962329 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:20 crc kubenswrapper[5043]: E1125 07:17:20.962549 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:20 crc kubenswrapper[5043]: I1125 07:17:20.962350 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:20 crc kubenswrapper[5043]: E1125 07:17:20.962642 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:20 crc kubenswrapper[5043]: E1125 07:17:20.962764 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.043584 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.043732 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.043754 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.043785 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.043849 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:21Z","lastTransitionTime":"2025-11-25T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.145839 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.145894 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.145905 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.145922 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.145934 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:21Z","lastTransitionTime":"2025-11-25T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.249047 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.249111 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.249130 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.249153 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.249171 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:21Z","lastTransitionTime":"2025-11-25T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.352735 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.352809 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.352825 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.352866 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.352881 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:21Z","lastTransitionTime":"2025-11-25T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.455834 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.455914 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.455936 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.455969 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.456014 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:21Z","lastTransitionTime":"2025-11-25T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.559429 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.559484 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.559500 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.559524 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.559543 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:21Z","lastTransitionTime":"2025-11-25T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.636537 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs\") pod \"network-metrics-daemon-xqj4m\" (UID: \"e26eab68-d56e-4c83-9888-0a866e549524\") " pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:21 crc kubenswrapper[5043]: E1125 07:17:21.636942 5043 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 07:17:21 crc kubenswrapper[5043]: E1125 07:17:21.637170 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs podName:e26eab68-d56e-4c83-9888-0a866e549524 nodeName:}" failed. No retries permitted until 2025-11-25 07:18:25.63702996 +0000 UTC m=+169.805225721 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs") pod "network-metrics-daemon-xqj4m" (UID: "e26eab68-d56e-4c83-9888-0a866e549524") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.662218 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.662264 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.662276 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.662295 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.662305 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:21Z","lastTransitionTime":"2025-11-25T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.765348 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.765398 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.765420 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.765444 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.765461 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:21Z","lastTransitionTime":"2025-11-25T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.868912 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.869303 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.869368 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.869409 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.869453 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:21Z","lastTransitionTime":"2025-11-25T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.962725 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:21 crc kubenswrapper[5043]: E1125 07:17:21.962979 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.974588 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.974676 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.974691 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.974746 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:21 crc kubenswrapper[5043]: I1125 07:17:21.974762 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:21Z","lastTransitionTime":"2025-11-25T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.077436 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.077478 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.077493 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.077513 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.077527 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:22Z","lastTransitionTime":"2025-11-25T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.179949 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.180014 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.180037 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.180069 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.180092 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:22Z","lastTransitionTime":"2025-11-25T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.282730 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.282769 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.282779 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.282793 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.282832 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:22Z","lastTransitionTime":"2025-11-25T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.385588 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.385718 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.385743 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.385776 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.385800 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:22Z","lastTransitionTime":"2025-11-25T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.488978 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.489033 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.489047 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.489066 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.489078 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:22Z","lastTransitionTime":"2025-11-25T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.591736 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.591822 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.591842 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.591872 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.591894 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:22Z","lastTransitionTime":"2025-11-25T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.695571 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.695708 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.695737 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.695773 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.695797 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:22Z","lastTransitionTime":"2025-11-25T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.799430 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.799501 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.799517 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.799542 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.799560 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:22Z","lastTransitionTime":"2025-11-25T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.902551 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.902631 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.902651 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.902674 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.902692 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:22Z","lastTransitionTime":"2025-11-25T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.963371 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.963441 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 07:17:22 crc kubenswrapper[5043]: I1125 07:17:22.963371 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 07:17:22 crc kubenswrapper[5043]: E1125 07:17:22.963538 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 07:17:22 crc kubenswrapper[5043]: E1125 07:17:22.963681 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 07:17:22 crc kubenswrapper[5043]: E1125 07:17:22.963788 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.005324 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.005392 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.005415 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.005444 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.005467 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:23Z","lastTransitionTime":"2025-11-25T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.107185 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.107232 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.107243 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.107261 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.107276 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:23Z","lastTransitionTime":"2025-11-25T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.210176 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.210211 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.210220 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.210264 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.210274 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:23Z","lastTransitionTime":"2025-11-25T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.312170 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.312246 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.312261 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.312277 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.312287 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:23Z","lastTransitionTime":"2025-11-25T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.415133 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.415183 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.415199 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.415222 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.415238 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:23Z","lastTransitionTime":"2025-11-25T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.517818 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.517857 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.517868 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.517882 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.517892 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:23Z","lastTransitionTime":"2025-11-25T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.620968 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.621031 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.621049 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.621071 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.621088 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:23Z","lastTransitionTime":"2025-11-25T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.725165 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.725225 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.725252 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.725286 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.725312 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:23Z","lastTransitionTime":"2025-11-25T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.828320 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.828362 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.828374 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.828391 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.828405 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:23Z","lastTransitionTime":"2025-11-25T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.932056 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.932130 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.932153 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.932180 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.932200 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:23Z","lastTransitionTime":"2025-11-25T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:23 crc kubenswrapper[5043]: I1125 07:17:23.961686 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m"
Nov 25 07:17:23 crc kubenswrapper[5043]: E1125 07:17:23.961944 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.035026 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.035114 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.035129 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.035145 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.035157 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:24Z","lastTransitionTime":"2025-11-25T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.138107 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.138143 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.138152 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.138167 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.138176 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:24Z","lastTransitionTime":"2025-11-25T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.240693 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.240743 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.240754 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.240772 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.240784 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:24Z","lastTransitionTime":"2025-11-25T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.342699 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.342775 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.342799 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.342824 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.342841 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:24Z","lastTransitionTime":"2025-11-25T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.451358 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.451431 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.451451 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.451479 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.451507 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:24Z","lastTransitionTime":"2025-11-25T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.554825 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.554870 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.554878 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.554893 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.554904 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:24Z","lastTransitionTime":"2025-11-25T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.658036 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.658078 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.658090 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.658106 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.658118 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:24Z","lastTransitionTime":"2025-11-25T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.760596 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.760644 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.760654 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.760669 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.760682 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:24Z","lastTransitionTime":"2025-11-25T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.863519 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.863988 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.864151 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.864297 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.864422 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:24Z","lastTransitionTime":"2025-11-25T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.962483 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.962498 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.962687 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 07:17:24 crc kubenswrapper[5043]: E1125 07:17:24.962856 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 07:17:24 crc kubenswrapper[5043]: E1125 07:17:24.963045 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 07:17:24 crc kubenswrapper[5043]: E1125 07:17:24.963150 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.967796 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.967957 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.967987 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.968017 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:24 crc kubenswrapper[5043]: I1125 07:17:24.968039 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:24Z","lastTransitionTime":"2025-11-25T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.070552 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.070652 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.070700 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.070731 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.070756 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:25Z","lastTransitionTime":"2025-11-25T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.173988 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.174082 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.174118 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.174156 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.174179 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:25Z","lastTransitionTime":"2025-11-25T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.276236 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.276283 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.276297 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.276315 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.276329 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:25Z","lastTransitionTime":"2025-11-25T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.379482 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.379782 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.379803 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.379830 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.379846 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:25Z","lastTransitionTime":"2025-11-25T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.423672 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.423809 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.423838 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.423912 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.423939 5043 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T07:17:25Z","lastTransitionTime":"2025-11-25T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.496918 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-qc966"] Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.497392 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qc966" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.501257 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.501637 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.501833 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.502307 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.519161 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=39.519143136 podStartE2EDuration="39.519143136s" podCreationTimestamp="2025-11-25 07:16:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:17:25.518347014 +0000 UTC m=+109.686542745" watchObservedRunningTime="2025-11-25 07:17:25.519143136 +0000 UTC m=+109.687338867" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.544848 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-fxj72" podStartSLOduration=83.544815339 podStartE2EDuration="1m23.544815339s" podCreationTimestamp="2025-11-25 07:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:17:25.52930546 +0000 UTC m=+109.697501201" watchObservedRunningTime="2025-11-25 07:17:25.544815339 +0000 UTC 
m=+109.713011100" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.568719 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3da65c1f-c011-413c-bd50-6cb84e72c7cf-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qc966\" (UID: \"3da65c1f-c011-413c-bd50-6cb84e72c7cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qc966" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.568820 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3da65c1f-c011-413c-bd50-6cb84e72c7cf-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qc966\" (UID: \"3da65c1f-c011-413c-bd50-6cb84e72c7cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qc966" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.568848 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3da65c1f-c011-413c-bd50-6cb84e72c7cf-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qc966\" (UID: \"3da65c1f-c011-413c-bd50-6cb84e72c7cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qc966" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.568870 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3da65c1f-c011-413c-bd50-6cb84e72c7cf-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qc966\" (UID: \"3da65c1f-c011-413c-bd50-6cb84e72c7cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qc966" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.569165 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/3da65c1f-c011-413c-bd50-6cb84e72c7cf-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qc966\" (UID: \"3da65c1f-c011-413c-bd50-6cb84e72c7cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qc966" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.571587 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-t545b" podStartSLOduration=82.571561182 podStartE2EDuration="1m22.571561182s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:17:25.545029655 +0000 UTC m=+109.713225376" watchObservedRunningTime="2025-11-25 07:17:25.571561182 +0000 UTC m=+109.739756913" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.641483 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=88.641466471 podStartE2EDuration="1m28.641466471s" podCreationTimestamp="2025-11-25 07:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:17:25.625538771 +0000 UTC m=+109.793734502" watchObservedRunningTime="2025-11-25 07:17:25.641466471 +0000 UTC m=+109.809662202" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.641741 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=87.641735669 podStartE2EDuration="1m27.641735669s" podCreationTimestamp="2025-11-25 07:15:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:17:25.641655007 +0000 UTC m=+109.809850748" watchObservedRunningTime="2025-11-25 07:17:25.641735669 +0000 UTC 
m=+109.809931400" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.658900 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=88.658871632 podStartE2EDuration="1m28.658871632s" podCreationTimestamp="2025-11-25 07:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:17:25.65808881 +0000 UTC m=+109.826284541" watchObservedRunningTime="2025-11-25 07:17:25.658871632 +0000 UTC m=+109.827067393" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.670102 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3da65c1f-c011-413c-bd50-6cb84e72c7cf-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qc966\" (UID: \"3da65c1f-c011-413c-bd50-6cb84e72c7cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qc966" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.670157 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3da65c1f-c011-413c-bd50-6cb84e72c7cf-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qc966\" (UID: \"3da65c1f-c011-413c-bd50-6cb84e72c7cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qc966" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.670193 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3da65c1f-c011-413c-bd50-6cb84e72c7cf-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qc966\" (UID: \"3da65c1f-c011-413c-bd50-6cb84e72c7cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qc966" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.670231 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3da65c1f-c011-413c-bd50-6cb84e72c7cf-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qc966\" (UID: \"3da65c1f-c011-413c-bd50-6cb84e72c7cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qc966" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.670270 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3da65c1f-c011-413c-bd50-6cb84e72c7cf-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qc966\" (UID: \"3da65c1f-c011-413c-bd50-6cb84e72c7cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qc966" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.670321 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3da65c1f-c011-413c-bd50-6cb84e72c7cf-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qc966\" (UID: \"3da65c1f-c011-413c-bd50-6cb84e72c7cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qc966" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.670419 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3da65c1f-c011-413c-bd50-6cb84e72c7cf-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qc966\" (UID: \"3da65c1f-c011-413c-bd50-6cb84e72c7cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qc966" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.671132 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3da65c1f-c011-413c-bd50-6cb84e72c7cf-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qc966\" (UID: \"3da65c1f-c011-413c-bd50-6cb84e72c7cf\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qc966" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.678325 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3da65c1f-c011-413c-bd50-6cb84e72c7cf-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qc966\" (UID: \"3da65c1f-c011-413c-bd50-6cb84e72c7cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qc966" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.691629 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-5gnzs" podStartSLOduration=82.691596785 podStartE2EDuration="1m22.691596785s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:17:25.691066481 +0000 UTC m=+109.859262212" watchObservedRunningTime="2025-11-25 07:17:25.691596785 +0000 UTC m=+109.859792516" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.691966 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3da65c1f-c011-413c-bd50-6cb84e72c7cf-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qc966\" (UID: \"3da65c1f-c011-413c-bd50-6cb84e72c7cf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qc966" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.723469 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podStartSLOduration=83.723451786 podStartE2EDuration="1m23.723451786s" podCreationTimestamp="2025-11-25 07:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:17:25.707798983 +0000 UTC m=+109.875994724" 
watchObservedRunningTime="2025-11-25 07:17:25.723451786 +0000 UTC m=+109.891647507" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.723584 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-pbsfz" podStartSLOduration=82.723579029 podStartE2EDuration="1m22.723579029s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:17:25.723109327 +0000 UTC m=+109.891305058" watchObservedRunningTime="2025-11-25 07:17:25.723579029 +0000 UTC m=+109.891774750" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.734867 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=60.734838904 podStartE2EDuration="1m0.734838904s" podCreationTimestamp="2025-11-25 07:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:17:25.733683492 +0000 UTC m=+109.901879223" watchObservedRunningTime="2025-11-25 07:17:25.734838904 +0000 UTC m=+109.903034625" Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.812389 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qc966" Nov 25 07:17:25 crc kubenswrapper[5043]: W1125 07:17:25.828686 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3da65c1f_c011_413c_bd50_6cb84e72c7cf.slice/crio-54cb25833a2e0e44aaa7a2af512d3f0ae3f7b236c7def6cf53076530f4acfcba WatchSource:0}: Error finding container 54cb25833a2e0e44aaa7a2af512d3f0ae3f7b236c7def6cf53076530f4acfcba: Status 404 returned error can't find the container with id 54cb25833a2e0e44aaa7a2af512d3f0ae3f7b236c7def6cf53076530f4acfcba Nov 25 07:17:25 crc kubenswrapper[5043]: I1125 07:17:25.962551 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:25 crc kubenswrapper[5043]: E1125 07:17:25.962717 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:26 crc kubenswrapper[5043]: I1125 07:17:26.589805 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qc966" event={"ID":"3da65c1f-c011-413c-bd50-6cb84e72c7cf","Type":"ContainerStarted","Data":"caeb743ace76ee12c287ee794241e7e1a68c71cd48a3ffb1ef620b6c6c2e832d"} Nov 25 07:17:26 crc kubenswrapper[5043]: I1125 07:17:26.590454 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qc966" event={"ID":"3da65c1f-c011-413c-bd50-6cb84e72c7cf","Type":"ContainerStarted","Data":"54cb25833a2e0e44aaa7a2af512d3f0ae3f7b236c7def6cf53076530f4acfcba"} Nov 25 07:17:26 crc kubenswrapper[5043]: I1125 07:17:26.608956 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-btxpx" podStartSLOduration=84.60893413 podStartE2EDuration="1m24.60893413s" podCreationTimestamp="2025-11-25 07:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:17:25.828823863 +0000 UTC m=+109.997019584" watchObservedRunningTime="2025-11-25 07:17:26.60893413 +0000 UTC m=+110.777129871" Nov 25 07:17:26 crc kubenswrapper[5043]: I1125 07:17:26.962276 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:26 crc kubenswrapper[5043]: I1125 07:17:26.963567 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:26 crc kubenswrapper[5043]: I1125 07:17:26.963573 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:26 crc kubenswrapper[5043]: E1125 07:17:26.963718 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:26 crc kubenswrapper[5043]: E1125 07:17:26.963819 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:26 crc kubenswrapper[5043]: E1125 07:17:26.963890 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:26 crc kubenswrapper[5043]: I1125 07:17:26.964002 5043 scope.go:117] "RemoveContainer" containerID="37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63" Nov 25 07:17:26 crc kubenswrapper[5043]: E1125 07:17:26.964728 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-m5zz6_openshift-ovn-kubernetes(a8785a4c-82ff-4a78-83a0-463e977df530)\"" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" Nov 25 07:17:27 crc kubenswrapper[5043]: I1125 07:17:27.962511 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:27 crc kubenswrapper[5043]: E1125 07:17:27.962690 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:28 crc kubenswrapper[5043]: I1125 07:17:28.962192 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:28 crc kubenswrapper[5043]: E1125 07:17:28.962310 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:28 crc kubenswrapper[5043]: I1125 07:17:28.962475 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:28 crc kubenswrapper[5043]: E1125 07:17:28.962516 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:28 crc kubenswrapper[5043]: I1125 07:17:28.962631 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:28 crc kubenswrapper[5043]: E1125 07:17:28.962682 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:29 crc kubenswrapper[5043]: I1125 07:17:29.962345 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:29 crc kubenswrapper[5043]: E1125 07:17:29.962470 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:30 crc kubenswrapper[5043]: I1125 07:17:30.962694 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:30 crc kubenswrapper[5043]: I1125 07:17:30.963188 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:30 crc kubenswrapper[5043]: I1125 07:17:30.963257 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:30 crc kubenswrapper[5043]: E1125 07:17:30.963399 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:30 crc kubenswrapper[5043]: E1125 07:17:30.963485 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:30 crc kubenswrapper[5043]: E1125 07:17:30.963549 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:31 crc kubenswrapper[5043]: I1125 07:17:31.962699 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:31 crc kubenswrapper[5043]: E1125 07:17:31.962906 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:32 crc kubenswrapper[5043]: I1125 07:17:32.962121 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:32 crc kubenswrapper[5043]: I1125 07:17:32.962153 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:32 crc kubenswrapper[5043]: I1125 07:17:32.962255 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:32 crc kubenswrapper[5043]: E1125 07:17:32.962391 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:32 crc kubenswrapper[5043]: E1125 07:17:32.962551 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:32 crc kubenswrapper[5043]: E1125 07:17:32.962765 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:33 crc kubenswrapper[5043]: I1125 07:17:33.962682 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:33 crc kubenswrapper[5043]: E1125 07:17:33.962883 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:34 crc kubenswrapper[5043]: I1125 07:17:34.961744 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:34 crc kubenswrapper[5043]: E1125 07:17:34.962141 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:34 crc kubenswrapper[5043]: I1125 07:17:34.961933 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:34 crc kubenswrapper[5043]: I1125 07:17:34.961884 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:34 crc kubenswrapper[5043]: E1125 07:17:34.962218 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:34 crc kubenswrapper[5043]: E1125 07:17:34.962377 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:35 crc kubenswrapper[5043]: I1125 07:17:35.962704 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:35 crc kubenswrapper[5043]: E1125 07:17:35.962952 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:36 crc kubenswrapper[5043]: E1125 07:17:36.918184 5043 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 25 07:17:36 crc kubenswrapper[5043]: I1125 07:17:36.961957 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:36 crc kubenswrapper[5043]: I1125 07:17:36.961991 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:36 crc kubenswrapper[5043]: I1125 07:17:36.962097 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:36 crc kubenswrapper[5043]: E1125 07:17:36.963969 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:36 crc kubenswrapper[5043]: E1125 07:17:36.964942 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:36 crc kubenswrapper[5043]: E1125 07:17:36.965070 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:37 crc kubenswrapper[5043]: E1125 07:17:37.100292 5043 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 07:17:37 crc kubenswrapper[5043]: I1125 07:17:37.962408 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:37 crc kubenswrapper[5043]: E1125 07:17:37.963247 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:38 crc kubenswrapper[5043]: I1125 07:17:38.639804 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5gnzs_6aa0c167-9335-44ce-975c-715ce1f43383/kube-multus/1.log" Nov 25 07:17:38 crc kubenswrapper[5043]: I1125 07:17:38.640434 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5gnzs_6aa0c167-9335-44ce-975c-715ce1f43383/kube-multus/0.log" Nov 25 07:17:38 crc kubenswrapper[5043]: I1125 07:17:38.640468 5043 generic.go:334] "Generic (PLEG): container finished" podID="6aa0c167-9335-44ce-975c-715ce1f43383" containerID="4066fa7f0a925be9090ea5c1746c5f49e5e16dbfbaf8855136d7417ba73fb59c" exitCode=1 Nov 25 07:17:38 crc kubenswrapper[5043]: I1125 07:17:38.640500 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5gnzs" event={"ID":"6aa0c167-9335-44ce-975c-715ce1f43383","Type":"ContainerDied","Data":"4066fa7f0a925be9090ea5c1746c5f49e5e16dbfbaf8855136d7417ba73fb59c"} Nov 25 07:17:38 crc kubenswrapper[5043]: I1125 07:17:38.640538 5043 scope.go:117] "RemoveContainer" containerID="c457c7ab6d275600b2bcc5063807bf4e172f8d0551752cab57654e374ac2242a" Nov 25 07:17:38 crc kubenswrapper[5043]: I1125 07:17:38.640971 5043 scope.go:117] "RemoveContainer" containerID="4066fa7f0a925be9090ea5c1746c5f49e5e16dbfbaf8855136d7417ba73fb59c" Nov 25 07:17:38 crc kubenswrapper[5043]: E1125 07:17:38.641126 5043 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-5gnzs_openshift-multus(6aa0c167-9335-44ce-975c-715ce1f43383)\"" pod="openshift-multus/multus-5gnzs" podUID="6aa0c167-9335-44ce-975c-715ce1f43383" Nov 25 07:17:38 crc kubenswrapper[5043]: I1125 07:17:38.666574 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qc966" podStartSLOduration=95.666550743 podStartE2EDuration="1m35.666550743s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:17:26.608754675 +0000 UTC m=+110.776950406" watchObservedRunningTime="2025-11-25 07:17:38.666550743 +0000 UTC m=+122.834746474" Nov 25 07:17:38 crc kubenswrapper[5043]: I1125 07:17:38.962867 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:38 crc kubenswrapper[5043]: E1125 07:17:38.963086 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:38 crc kubenswrapper[5043]: I1125 07:17:38.962887 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:38 crc kubenswrapper[5043]: I1125 07:17:38.963188 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:38 crc kubenswrapper[5043]: E1125 07:17:38.963350 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:38 crc kubenswrapper[5043]: E1125 07:17:38.963494 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:39 crc kubenswrapper[5043]: I1125 07:17:39.646220 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5gnzs_6aa0c167-9335-44ce-975c-715ce1f43383/kube-multus/1.log" Nov 25 07:17:39 crc kubenswrapper[5043]: I1125 07:17:39.962128 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:39 crc kubenswrapper[5043]: E1125 07:17:39.962338 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:40 crc kubenswrapper[5043]: I1125 07:17:40.962261 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:40 crc kubenswrapper[5043]: I1125 07:17:40.962317 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:40 crc kubenswrapper[5043]: E1125 07:17:40.962496 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:40 crc kubenswrapper[5043]: I1125 07:17:40.962855 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:40 crc kubenswrapper[5043]: E1125 07:17:40.962929 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:40 crc kubenswrapper[5043]: E1125 07:17:40.963013 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:41 crc kubenswrapper[5043]: I1125 07:17:41.962477 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:41 crc kubenswrapper[5043]: E1125 07:17:41.962929 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:41 crc kubenswrapper[5043]: I1125 07:17:41.963130 5043 scope.go:117] "RemoveContainer" containerID="37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63" Nov 25 07:17:41 crc kubenswrapper[5043]: E1125 07:17:41.963303 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-m5zz6_openshift-ovn-kubernetes(a8785a4c-82ff-4a78-83a0-463e977df530)\"" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" Nov 25 07:17:42 crc kubenswrapper[5043]: E1125 07:17:42.101941 5043 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 07:17:42 crc kubenswrapper[5043]: I1125 07:17:42.961925 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:42 crc kubenswrapper[5043]: I1125 07:17:42.961971 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:42 crc kubenswrapper[5043]: I1125 07:17:42.962230 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:42 crc kubenswrapper[5043]: E1125 07:17:42.962378 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:42 crc kubenswrapper[5043]: E1125 07:17:42.962635 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:42 crc kubenswrapper[5043]: E1125 07:17:42.962688 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:43 crc kubenswrapper[5043]: I1125 07:17:43.961828 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:43 crc kubenswrapper[5043]: E1125 07:17:43.962106 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:44 crc kubenswrapper[5043]: I1125 07:17:44.962632 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:44 crc kubenswrapper[5043]: I1125 07:17:44.962717 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:44 crc kubenswrapper[5043]: E1125 07:17:44.962764 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:44 crc kubenswrapper[5043]: I1125 07:17:44.962630 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:44 crc kubenswrapper[5043]: E1125 07:17:44.962950 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:44 crc kubenswrapper[5043]: E1125 07:17:44.962972 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:45 crc kubenswrapper[5043]: I1125 07:17:45.962202 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:45 crc kubenswrapper[5043]: E1125 07:17:45.962395 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:46 crc kubenswrapper[5043]: I1125 07:17:46.962244 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:46 crc kubenswrapper[5043]: I1125 07:17:46.962255 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:46 crc kubenswrapper[5043]: I1125 07:17:46.962307 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:46 crc kubenswrapper[5043]: E1125 07:17:46.963691 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:46 crc kubenswrapper[5043]: E1125 07:17:46.963780 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:46 crc kubenswrapper[5043]: E1125 07:17:46.963822 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:47 crc kubenswrapper[5043]: E1125 07:17:47.103639 5043 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 07:17:47 crc kubenswrapper[5043]: I1125 07:17:47.962122 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:47 crc kubenswrapper[5043]: E1125 07:17:47.962565 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:48 crc kubenswrapper[5043]: I1125 07:17:48.962190 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:48 crc kubenswrapper[5043]: I1125 07:17:48.962286 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:48 crc kubenswrapper[5043]: I1125 07:17:48.962297 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:48 crc kubenswrapper[5043]: E1125 07:17:48.962384 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:48 crc kubenswrapper[5043]: E1125 07:17:48.962489 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:48 crc kubenswrapper[5043]: E1125 07:17:48.962668 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:49 crc kubenswrapper[5043]: I1125 07:17:49.962663 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:49 crc kubenswrapper[5043]: E1125 07:17:49.962863 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:50 crc kubenswrapper[5043]: I1125 07:17:50.962742 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:50 crc kubenswrapper[5043]: I1125 07:17:50.962823 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:50 crc kubenswrapper[5043]: I1125 07:17:50.962778 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:50 crc kubenswrapper[5043]: E1125 07:17:50.962936 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:50 crc kubenswrapper[5043]: E1125 07:17:50.963221 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:50 crc kubenswrapper[5043]: E1125 07:17:50.963479 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:51 crc kubenswrapper[5043]: I1125 07:17:51.962348 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:51 crc kubenswrapper[5043]: E1125 07:17:51.962533 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:52 crc kubenswrapper[5043]: E1125 07:17:52.105020 5043 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 07:17:52 crc kubenswrapper[5043]: I1125 07:17:52.962764 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:52 crc kubenswrapper[5043]: I1125 07:17:52.962846 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:52 crc kubenswrapper[5043]: I1125 07:17:52.962890 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:52 crc kubenswrapper[5043]: E1125 07:17:52.963004 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:52 crc kubenswrapper[5043]: E1125 07:17:52.963177 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:52 crc kubenswrapper[5043]: E1125 07:17:52.963471 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:52 crc kubenswrapper[5043]: I1125 07:17:52.963781 5043 scope.go:117] "RemoveContainer" containerID="4066fa7f0a925be9090ea5c1746c5f49e5e16dbfbaf8855136d7417ba73fb59c" Nov 25 07:17:53 crc kubenswrapper[5043]: I1125 07:17:53.696543 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5gnzs_6aa0c167-9335-44ce-975c-715ce1f43383/kube-multus/1.log" Nov 25 07:17:53 crc kubenswrapper[5043]: I1125 07:17:53.696929 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5gnzs" event={"ID":"6aa0c167-9335-44ce-975c-715ce1f43383","Type":"ContainerStarted","Data":"b38ec2c1857f8d09dc1e1bf719e08fa2ba97a2d42a1d582846b46e950df84a94"} Nov 25 07:17:53 crc kubenswrapper[5043]: I1125 07:17:53.961672 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:53 crc kubenswrapper[5043]: E1125 07:17:53.961875 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:54 crc kubenswrapper[5043]: I1125 07:17:54.962113 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:54 crc kubenswrapper[5043]: I1125 07:17:54.962188 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:54 crc kubenswrapper[5043]: I1125 07:17:54.962277 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:54 crc kubenswrapper[5043]: E1125 07:17:54.962467 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:54 crc kubenswrapper[5043]: I1125 07:17:54.962649 5043 scope.go:117] "RemoveContainer" containerID="37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63" Nov 25 07:17:54 crc kubenswrapper[5043]: E1125 07:17:54.962683 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:54 crc kubenswrapper[5043]: E1125 07:17:54.962774 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:55 crc kubenswrapper[5043]: I1125 07:17:55.703931 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5zz6_a8785a4c-82ff-4a78-83a0-463e977df530/ovnkube-controller/3.log" Nov 25 07:17:55 crc kubenswrapper[5043]: I1125 07:17:55.707015 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerStarted","Data":"dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178"} Nov 25 07:17:55 crc kubenswrapper[5043]: I1125 07:17:55.707571 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:17:55 crc kubenswrapper[5043]: I1125 07:17:55.730224 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" podStartSLOduration=112.730206211 podStartE2EDuration="1m52.730206211s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:17:55.729233205 +0000 UTC m=+139.897428946" watchObservedRunningTime="2025-11-25 07:17:55.730206211 +0000 UTC m=+139.898401932" Nov 25 07:17:55 crc kubenswrapper[5043]: I1125 07:17:55.870667 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xqj4m"] Nov 25 07:17:55 crc kubenswrapper[5043]: I1125 07:17:55.870755 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:55 crc kubenswrapper[5043]: E1125 07:17:55.870839 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:56 crc kubenswrapper[5043]: I1125 07:17:56.962107 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:56 crc kubenswrapper[5043]: I1125 07:17:56.962211 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:56 crc kubenswrapper[5043]: I1125 07:17:56.962328 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:56 crc kubenswrapper[5043]: E1125 07:17:56.963246 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:56 crc kubenswrapper[5043]: E1125 07:17:56.963345 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:56 crc kubenswrapper[5043]: E1125 07:17:56.963415 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:57 crc kubenswrapper[5043]: E1125 07:17:57.106438 5043 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 07:17:57 crc kubenswrapper[5043]: I1125 07:17:57.962089 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:57 crc kubenswrapper[5043]: E1125 07:17:57.962237 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:17:58 crc kubenswrapper[5043]: I1125 07:17:58.961750 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:17:58 crc kubenswrapper[5043]: I1125 07:17:58.961806 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:17:58 crc kubenswrapper[5043]: I1125 07:17:58.961750 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:17:58 crc kubenswrapper[5043]: E1125 07:17:58.961936 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:17:58 crc kubenswrapper[5043]: E1125 07:17:58.961995 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:17:58 crc kubenswrapper[5043]: E1125 07:17:58.961898 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:17:59 crc kubenswrapper[5043]: I1125 07:17:59.961745 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:17:59 crc kubenswrapper[5043]: E1125 07:17:59.962154 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:18:00 crc kubenswrapper[5043]: I1125 07:18:00.962452 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:18:00 crc kubenswrapper[5043]: I1125 07:18:00.962552 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:18:00 crc kubenswrapper[5043]: E1125 07:18:00.962584 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 07:18:00 crc kubenswrapper[5043]: E1125 07:18:00.962768 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 07:18:00 crc kubenswrapper[5043]: I1125 07:18:00.963133 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:18:00 crc kubenswrapper[5043]: E1125 07:18:00.963231 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 07:18:01 crc kubenswrapper[5043]: I1125 07:18:01.961946 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:18:01 crc kubenswrapper[5043]: E1125 07:18:01.962650 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xqj4m" podUID="e26eab68-d56e-4c83-9888-0a866e549524" Nov 25 07:18:02 crc kubenswrapper[5043]: I1125 07:18:02.962250 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:18:02 crc kubenswrapper[5043]: I1125 07:18:02.962370 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:18:02 crc kubenswrapper[5043]: I1125 07:18:02.962462 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:18:02 crc kubenswrapper[5043]: I1125 07:18:02.964882 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 25 07:18:02 crc kubenswrapper[5043]: I1125 07:18:02.965287 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 25 07:18:02 crc kubenswrapper[5043]: I1125 07:18:02.965567 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 25 07:18:02 crc kubenswrapper[5043]: I1125 07:18:02.967000 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 25 07:18:03 crc kubenswrapper[5043]: I1125 07:18:03.962434 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:18:03 crc kubenswrapper[5043]: I1125 07:18:03.966131 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 25 07:18:03 crc kubenswrapper[5043]: I1125 07:18:03.966094 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 25 07:18:04 crc kubenswrapper[5043]: I1125 07:18:04.301517 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:18:04 crc kubenswrapper[5043]: I1125 07:18:04.823000 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:18:04 crc kubenswrapper[5043]: I1125 07:18:04.832318 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:18:04 crc kubenswrapper[5043]: I1125 07:18:04.923780 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:04 crc kubenswrapper[5043]: I1125 07:18:04.923899 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:18:04 crc kubenswrapper[5043]: E1125 07:18:04.923956 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:20:06.923933033 +0000 UTC m=+271.092128774 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:04 crc kubenswrapper[5043]: I1125 07:18:04.924015 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:18:04 crc kubenswrapper[5043]: I1125 07:18:04.924042 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:18:04 crc kubenswrapper[5043]: I1125 07:18:04.926786 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:18:04 crc kubenswrapper[5043]: I1125 07:18:04.927175 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:18:04 crc kubenswrapper[5043]: I1125 07:18:04.927403 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:18:05 crc kubenswrapper[5043]: I1125 07:18:05.077456 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:18:05 crc kubenswrapper[5043]: I1125 07:18:05.084995 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 07:18:05 crc kubenswrapper[5043]: I1125 07:18:05.091175 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 07:18:05 crc kubenswrapper[5043]: W1125 07:18:05.401297 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-577b85733b4b14691326ee9ffd2387e43279d34093479289cec9f80f421c242a WatchSource:0}: Error finding container 577b85733b4b14691326ee9ffd2387e43279d34093479289cec9f80f421c242a: Status 404 returned error can't find the container with id 577b85733b4b14691326ee9ffd2387e43279d34093479289cec9f80f421c242a Nov 25 07:18:05 crc kubenswrapper[5043]: I1125 07:18:05.741940 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d8371515f6588f76c02c6f33e0ca0d984260f80d8c20128906891008ef6d2db5"} Nov 25 07:18:05 crc kubenswrapper[5043]: I1125 07:18:05.742010 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"27afaf9ae1e7f5bd2f693d5662cfce84c1aae864dd786de92b67329f73ef50f5"} Nov 25 07:18:05 crc kubenswrapper[5043]: I1125 07:18:05.744648 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"15fbe1e4398b59368a856895bafac6e81f70ec96dd09068ec387ac7906706401"} Nov 25 07:18:05 crc kubenswrapper[5043]: I1125 07:18:05.744691 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e58341188cef2df6843e43c45784b34005c77958ab09024b46b3e0f45bff6573"} Nov 
25 07:18:05 crc kubenswrapper[5043]: I1125 07:18:05.744877 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:18:05 crc kubenswrapper[5043]: I1125 07:18:05.747298 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b0ec7858ad7e729ca39dbf94239950f19c2301eb7cb35acf2074ea2854f04cb5"} Nov 25 07:18:05 crc kubenswrapper[5043]: I1125 07:18:05.747344 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"577b85733b4b14691326ee9ffd2387e43279d34093479289cec9f80f421c242a"} Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.417557 5043 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.452755 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ptlq5"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.453265 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.454052 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrn4l"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.454583 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrn4l" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.455120 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wz5hw"] Nov 25 07:18:06 crc kubenswrapper[5043]: W1125 07:18:06.459506 5043 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj": failed to list *v1.Secret: secrets "authentication-operator-dockercfg-mz9bj" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Nov 25 07:18:06 crc kubenswrapper[5043]: E1125 07:18:06.459558 5043 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-mz9bj\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"authentication-operator-dockercfg-mz9bj\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 07:18:06 crc kubenswrapper[5043]: W1125 07:18:06.459661 5043 reflector.go:561] object-"openshift-authentication-operator"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Nov 25 07:18:06 crc kubenswrapper[5043]: E1125 07:18:06.459678 5043 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list 
resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 07:18:06 crc kubenswrapper[5043]: W1125 07:18:06.459779 5043 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-config": failed to list *v1.ConfigMap: configmaps "authentication-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Nov 25 07:18:06 crc kubenswrapper[5043]: E1125 07:18:06.459794 5043 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"authentication-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 07:18:06 crc kubenswrapper[5043]: W1125 07:18:06.459849 5043 reflector.go:561] object-"openshift-authentication-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Nov 25 07:18:06 crc kubenswrapper[5043]: E1125 07:18:06.459867 5043 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between 
node 'crc' and this object" logger="UnhandledError" Nov 25 07:18:06 crc kubenswrapper[5043]: W1125 07:18:06.459906 5043 reflector.go:561] object-"openshift-authentication-operator"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Nov 25 07:18:06 crc kubenswrapper[5043]: E1125 07:18:06.459920 5043 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 07:18:06 crc kubenswrapper[5043]: W1125 07:18:06.459964 5043 reflector.go:561] object-"openshift-authentication-operator"/"service-ca-bundle": failed to list *v1.ConfigMap: configmaps "service-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Nov 25 07:18:06 crc kubenswrapper[5043]: E1125 07:18:06.459977 5043 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"service-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 07:18:06 crc kubenswrapper[5043]: W1125 07:18:06.460017 5043 reflector.go:561] object-"openshift-cluster-samples-operator"/"samples-operator-tls": failed to 
list *v1.Secret: secrets "samples-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Nov 25 07:18:06 crc kubenswrapper[5043]: E1125 07:18:06.460033 5043 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"samples-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 07:18:06 crc kubenswrapper[5043]: W1125 07:18:06.460071 5043 reflector.go:561] object-"openshift-cluster-samples-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Nov 25 07:18:06 crc kubenswrapper[5043]: E1125 07:18:06.460084 5043 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 07:18:06 crc kubenswrapper[5043]: W1125 07:18:06.460122 5043 reflector.go:561] object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w": failed to list *v1.Secret: secrets "cluster-samples-operator-dockercfg-xpp9w" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the 
namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Nov 25 07:18:06 crc kubenswrapper[5043]: E1125 07:18:06.460136 5043 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-xpp9w\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cluster-samples-operator-dockercfg-xpp9w\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 07:18:06 crc kubenswrapper[5043]: W1125 07:18:06.460176 5043 reflector.go:561] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-samples-operator": no relationship found between node 'crc' and this object Nov 25 07:18:06 crc kubenswrapper[5043]: E1125 07:18:06.460192 5043 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-samples-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 07:18:06 crc kubenswrapper[5043]: W1125 07:18:06.457566 5043 reflector.go:561] object-"openshift-authentication-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and 
this object Nov 25 07:18:06 crc kubenswrapper[5043]: E1125 07:18:06.464283 5043 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.465240 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.465793 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-66swt"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.465954 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wz5hw" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.466421 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-66swt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.466773 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.468435 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/809f70bc-86b7-4712-b396-79f602e6684d-serving-cert\") pod \"authentication-operator-69f744f599-ptlq5\" (UID: \"809f70bc-86b7-4712-b396-79f602e6684d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.468644 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/809f70bc-86b7-4712-b396-79f602e6684d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ptlq5\" (UID: \"809f70bc-86b7-4712-b396-79f602e6684d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.468812 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twglb\" (UniqueName: \"kubernetes.io/projected/881d759e-3077-4c93-b9de-86d8d960d3ca-kube-api-access-twglb\") pod \"cluster-samples-operator-665b6dd947-jrn4l\" (UID: \"881d759e-3077-4c93-b9de-86d8d960d3ca\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrn4l" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.468966 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/809f70bc-86b7-4712-b396-79f602e6684d-service-ca-bundle\") pod \"authentication-operator-69f744f599-ptlq5\" (UID: \"809f70bc-86b7-4712-b396-79f602e6684d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.469089 5043 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/881d759e-3077-4c93-b9de-86d8d960d3ca-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jrn4l\" (UID: \"881d759e-3077-4c93-b9de-86d8d960d3ca\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrn4l" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.469213 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dft8v\" (UniqueName: \"kubernetes.io/projected/809f70bc-86b7-4712-b396-79f602e6684d-kube-api-access-dft8v\") pod \"authentication-operator-69f744f599-ptlq5\" (UID: \"809f70bc-86b7-4712-b396-79f602e6684d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.469321 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/809f70bc-86b7-4712-b396-79f602e6684d-config\") pod \"authentication-operator-69f744f599-ptlq5\" (UID: \"809f70bc-86b7-4712-b396-79f602e6684d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.471516 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.472795 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.473098 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.473267 5043 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.473366 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.473514 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.473617 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.473775 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.473973 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.474120 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.474224 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.474699 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.474987 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.475099 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.475310 5043 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.476379 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-l89zw"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.476478 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.477007 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-l89zw" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.477438 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cchz7"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.482154 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-5kkx6"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.482578 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-j7b8d"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.482892 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jxz26"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.483362 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.483660 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7c46k"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.484471 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 25 07:18:06 crc 
kubenswrapper[5043]: I1125 07:18:06.484546 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kkx6" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.484556 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cchz7" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.484692 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.484724 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.484850 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.484858 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.485112 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.485172 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.485278 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.485380 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.485570 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-lbz4p"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.485625 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.486029 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-lbz4p" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.486226 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-7c46k" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.487252 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dfsn8"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.487679 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.487698 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-4q5v5"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.487814 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.488249 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-4q5v5" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.489501 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8hqj6"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.490130 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8hqj6" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.490723 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.491989 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.492159 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.492305 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.492467 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.493811 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.494114 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.494294 5043 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cw9mz"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.494907 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cw9mz" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.495360 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9jj8v"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.495642 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.495711 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.495763 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.495933 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.495952 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.496060 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.495953 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.496060 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.496313 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.496331 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.496746 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.497247 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ptlq5"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.499321 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.501997 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.502194 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.502352 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.502490 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 
07:18:06.502678 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.502813 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.502955 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.503302 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.503410 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.503509 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.503631 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.503743 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.503846 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.503963 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.504057 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 
07:18:06.504158 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.504263 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.504352 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.504533 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.504881 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.505082 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.503541 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.505290 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.503578 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.505383 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.504023 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 
07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.503967 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.505549 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.505577 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.505640 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.505691 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.505717 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.505779 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.505783 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.505916 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.506088 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7zfvk"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.506289 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 25 
07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.507130 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zfvk" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.525055 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fhv5q"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.525098 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.525982 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-fhv5q" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.527285 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grmsk"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.529395 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grmsk" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.530884 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.532769 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.557075 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.561964 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-6b4s4"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.562545 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.562624 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-6b4s4" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.565202 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-66swt"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.565310 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cchz7"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.566625 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.566861 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.568134 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.570352 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/81f790c4-a6b8-4bb7-8a46-107e7ad04689-audit\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.570385 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twglb\" (UniqueName: \"kubernetes.io/projected/881d759e-3077-4c93-b9de-86d8d960d3ca-kube-api-access-twglb\") pod \"cluster-samples-operator-665b6dd947-jrn4l\" (UID: \"881d759e-3077-4c93-b9de-86d8d960d3ca\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrn4l" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.570406 5043 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/81f790c4-a6b8-4bb7-8a46-107e7ad04689-etcd-client\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.570563 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f60df734-c1b3-4b19-9655-5d64097787f7-serving-cert\") pod \"apiserver-7bbb656c7d-mbwfq\" (UID: \"f60df734-c1b3-4b19-9655-5d64097787f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.570582 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f60df734-c1b3-4b19-9655-5d64097787f7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mbwfq\" (UID: \"f60df734-c1b3-4b19-9655-5d64097787f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.570657 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29f8c3a9-2f22-49c3-8423-537a8e3b819d-config\") pod \"console-operator-58897d9998-7c46k\" (UID: \"29f8c3a9-2f22-49c3-8423-537a8e3b819d\") " pod="openshift-console-operator/console-operator-58897d9998-7c46k" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.570674 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81f790c4-a6b8-4bb7-8a46-107e7ad04689-config\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc 
kubenswrapper[5043]: I1125 07:18:06.570688 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/81f790c4-a6b8-4bb7-8a46-107e7ad04689-node-pullsecrets\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.570734 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81f790c4-a6b8-4bb7-8a46-107e7ad04689-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.570750 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f24v\" (UniqueName: \"kubernetes.io/projected/f60df734-c1b3-4b19-9655-5d64097787f7-kube-api-access-9f24v\") pod \"apiserver-7bbb656c7d-mbwfq\" (UID: \"f60df734-c1b3-4b19-9655-5d64097787f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.570764 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzqlt\" (UniqueName: \"kubernetes.io/projected/29f8c3a9-2f22-49c3-8423-537a8e3b819d-kube-api-access-hzqlt\") pod \"console-operator-58897d9998-7c46k\" (UID: \"29f8c3a9-2f22-49c3-8423-537a8e3b819d\") " pod="openshift-console-operator/console-operator-58897d9998-7c46k" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.570810 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/81f790c4-a6b8-4bb7-8a46-107e7ad04689-image-import-ca\") pod 
\"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.570827 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f60df734-c1b3-4b19-9655-5d64097787f7-etcd-client\") pod \"apiserver-7bbb656c7d-mbwfq\" (UID: \"f60df734-c1b3-4b19-9655-5d64097787f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.570842 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f60df734-c1b3-4b19-9655-5d64097787f7-encryption-config\") pod \"apiserver-7bbb656c7d-mbwfq\" (UID: \"f60df734-c1b3-4b19-9655-5d64097787f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.570902 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/809f70bc-86b7-4712-b396-79f602e6684d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ptlq5\" (UID: \"809f70bc-86b7-4712-b396-79f602e6684d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.570952 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a3195ecd-280e-475f-a3e5-7081b0db65f3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-66swt\" (UID: \"a3195ecd-280e-475f-a3e5-7081b0db65f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-66swt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.570970 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/809f70bc-86b7-4712-b396-79f602e6684d-service-ca-bundle\") pod \"authentication-operator-69f744f599-ptlq5\" (UID: \"809f70bc-86b7-4712-b396-79f602e6684d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.570985 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/881d759e-3077-4c93-b9de-86d8d960d3ca-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jrn4l\" (UID: \"881d759e-3077-4c93-b9de-86d8d960d3ca\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrn4l" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.571082 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/eab6e215-cf16-47d9-9049-9f6a0ed1239a-stats-auth\") pod \"router-default-5444994796-6b4s4\" (UID: \"eab6e215-cf16-47d9-9049-9f6a0ed1239a\") " pod="openshift-ingress/router-default-5444994796-6b4s4" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.571099 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6d65\" (UniqueName: \"kubernetes.io/projected/ed1bbbdd-aa02-4472-867f-ef6f2c991728-kube-api-access-f6d65\") pod \"downloads-7954f5f757-4q5v5\" (UID: \"ed1bbbdd-aa02-4472-867f-ef6f2c991728\") " pod="openshift-console/downloads-7954f5f757-4q5v5" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.571174 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29f8c3a9-2f22-49c3-8423-537a8e3b819d-trusted-ca\") pod \"console-operator-58897d9998-7c46k\" (UID: \"29f8c3a9-2f22-49c3-8423-537a8e3b819d\") " 
pod="openshift-console-operator/console-operator-58897d9998-7c46k" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.571189 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f60df734-c1b3-4b19-9655-5d64097787f7-audit-policies\") pod \"apiserver-7bbb656c7d-mbwfq\" (UID: \"f60df734-c1b3-4b19-9655-5d64097787f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.571203 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f60df734-c1b3-4b19-9655-5d64097787f7-audit-dir\") pod \"apiserver-7bbb656c7d-mbwfq\" (UID: \"f60df734-c1b3-4b19-9655-5d64097787f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.571220 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dft8v\" (UniqueName: \"kubernetes.io/projected/809f70bc-86b7-4712-b396-79f602e6684d-kube-api-access-dft8v\") pod \"authentication-operator-69f744f599-ptlq5\" (UID: \"809f70bc-86b7-4712-b396-79f602e6684d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.571237 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3195ecd-280e-475f-a3e5-7081b0db65f3-serving-cert\") pod \"openshift-config-operator-7777fb866f-66swt\" (UID: \"a3195ecd-280e-475f-a3e5-7081b0db65f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-66swt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.571252 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/809f70bc-86b7-4712-b396-79f602e6684d-config\") pod \"authentication-operator-69f744f599-ptlq5\" (UID: \"809f70bc-86b7-4712-b396-79f602e6684d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.571269 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/81f790c4-a6b8-4bb7-8a46-107e7ad04689-encryption-config\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.571286 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/809f70bc-86b7-4712-b396-79f602e6684d-serving-cert\") pod \"authentication-operator-69f744f599-ptlq5\" (UID: \"809f70bc-86b7-4712-b396-79f602e6684d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.571300 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lgr8\" (UniqueName: \"kubernetes.io/projected/eab6e215-cf16-47d9-9049-9f6a0ed1239a-kube-api-access-8lgr8\") pod \"router-default-5444994796-6b4s4\" (UID: \"eab6e215-cf16-47d9-9049-9f6a0ed1239a\") " pod="openshift-ingress/router-default-5444994796-6b4s4" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.571316 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/81f790c4-a6b8-4bb7-8a46-107e7ad04689-etcd-serving-ca\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 
07:18:06.571331 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29f8c3a9-2f22-49c3-8423-537a8e3b819d-serving-cert\") pod \"console-operator-58897d9998-7c46k\" (UID: \"29f8c3a9-2f22-49c3-8423-537a8e3b819d\") " pod="openshift-console-operator/console-operator-58897d9998-7c46k" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.571346 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/eab6e215-cf16-47d9-9049-9f6a0ed1239a-default-certificate\") pod \"router-default-5444994796-6b4s4\" (UID: \"eab6e215-cf16-47d9-9049-9f6a0ed1239a\") " pod="openshift-ingress/router-default-5444994796-6b4s4" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.571359 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eab6e215-cf16-47d9-9049-9f6a0ed1239a-metrics-certs\") pod \"router-default-5444994796-6b4s4\" (UID: \"eab6e215-cf16-47d9-9049-9f6a0ed1239a\") " pod="openshift-ingress/router-default-5444994796-6b4s4" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.571403 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81f790c4-a6b8-4bb7-8a46-107e7ad04689-serving-cert\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.571420 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhpwf\" (UniqueName: \"kubernetes.io/projected/a3195ecd-280e-475f-a3e5-7081b0db65f3-kube-api-access-dhpwf\") pod \"openshift-config-operator-7777fb866f-66swt\" (UID: 
\"a3195ecd-280e-475f-a3e5-7081b0db65f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-66swt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.571435 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsdfx\" (UniqueName: \"kubernetes.io/projected/81f790c4-a6b8-4bb7-8a46-107e7ad04689-kube-api-access-nsdfx\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.571457 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f60df734-c1b3-4b19-9655-5d64097787f7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mbwfq\" (UID: \"f60df734-c1b3-4b19-9655-5d64097787f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.571472 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eab6e215-cf16-47d9-9049-9f6a0ed1239a-service-ca-bundle\") pod \"router-default-5444994796-6b4s4\" (UID: \"eab6e215-cf16-47d9-9049-9f6a0ed1239a\") " pod="openshift-ingress/router-default-5444994796-6b4s4" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.571485 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/81f790c4-a6b8-4bb7-8a46-107e7ad04689-audit-dir\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.572053 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 25 07:18:06 crc 
kubenswrapper[5043]: I1125 07:18:06.572493 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.574871 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.575008 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.575699 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wz5hw"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.575724 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd5w5"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.578463 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd5w5" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.579833 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.583850 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.584283 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.588116 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.588307 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-x6762"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.589432 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7mf86"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.589883 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7mf86" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.590180 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnr9c"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.590205 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x6762" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.590564 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnr9c" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.591236 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lt62z"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.592064 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lt62z" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.595881 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bsrgx"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.596774 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bsrgx" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.596932 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.600207 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-gskgn"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.600966 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kglrl"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.608686 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gskgn" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.618434 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.624575 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9rfgk"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.625357 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8w2c6"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.626117 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8w2c6" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.626364 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kglrl" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.626533 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9rfgk" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.632122 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400915-zd9vl"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.633318 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-zd9vl" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.637934 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pwrdz"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.638959 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-pwrdz" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.640183 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.640451 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j7vsb"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.641855 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j7vsb" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.655804 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hqtnq"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.657548 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64rrj"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.657564 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.657866 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hqtnq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.658335 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64rrj" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.661440 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-b6j49"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.662642 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-b6j49" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.664082 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9bx8l"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.665735 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bx8l" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.666763 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bsrgx"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.668072 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-lbz4p"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.669317 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cw9mz"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.670860 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grmsk"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.671878 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lgr8\" (UniqueName: \"kubernetes.io/projected/eab6e215-cf16-47d9-9049-9f6a0ed1239a-kube-api-access-8lgr8\") pod \"router-default-5444994796-6b4s4\" (UID: 
\"eab6e215-cf16-47d9-9049-9f6a0ed1239a\") " pod="openshift-ingress/router-default-5444994796-6b4s4" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.671906 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/81f790c4-a6b8-4bb7-8a46-107e7ad04689-etcd-serving-ca\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.671925 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/81f790c4-a6b8-4bb7-8a46-107e7ad04689-encryption-config\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.671942 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29f8c3a9-2f22-49c3-8423-537a8e3b819d-serving-cert\") pod \"console-operator-58897d9998-7c46k\" (UID: \"29f8c3a9-2f22-49c3-8423-537a8e3b819d\") " pod="openshift-console-operator/console-operator-58897d9998-7c46k" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.671960 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/eab6e215-cf16-47d9-9049-9f6a0ed1239a-default-certificate\") pod \"router-default-5444994796-6b4s4\" (UID: \"eab6e215-cf16-47d9-9049-9f6a0ed1239a\") " pod="openshift-ingress/router-default-5444994796-6b4s4" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.671974 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eab6e215-cf16-47d9-9049-9f6a0ed1239a-metrics-certs\") pod 
\"router-default-5444994796-6b4s4\" (UID: \"eab6e215-cf16-47d9-9049-9f6a0ed1239a\") " pod="openshift-ingress/router-default-5444994796-6b4s4" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.671990 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81f790c4-a6b8-4bb7-8a46-107e7ad04689-serving-cert\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672007 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhpwf\" (UniqueName: \"kubernetes.io/projected/a3195ecd-280e-475f-a3e5-7081b0db65f3-kube-api-access-dhpwf\") pod \"openshift-config-operator-7777fb866f-66swt\" (UID: \"a3195ecd-280e-475f-a3e5-7081b0db65f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-66swt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672024 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsdfx\" (UniqueName: \"kubernetes.io/projected/81f790c4-a6b8-4bb7-8a46-107e7ad04689-kube-api-access-nsdfx\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672038 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f60df734-c1b3-4b19-9655-5d64097787f7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mbwfq\" (UID: \"f60df734-c1b3-4b19-9655-5d64097787f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672053 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/eab6e215-cf16-47d9-9049-9f6a0ed1239a-service-ca-bundle\") pod \"router-default-5444994796-6b4s4\" (UID: \"eab6e215-cf16-47d9-9049-9f6a0ed1239a\") " pod="openshift-ingress/router-default-5444994796-6b4s4" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672067 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/81f790c4-a6b8-4bb7-8a46-107e7ad04689-audit-dir\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672082 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/81f790c4-a6b8-4bb7-8a46-107e7ad04689-audit\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672106 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/81f790c4-a6b8-4bb7-8a46-107e7ad04689-etcd-client\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672119 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f60df734-c1b3-4b19-9655-5d64097787f7-serving-cert\") pod \"apiserver-7bbb656c7d-mbwfq\" (UID: \"f60df734-c1b3-4b19-9655-5d64097787f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672133 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f60df734-c1b3-4b19-9655-5d64097787f7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mbwfq\" (UID: \"f60df734-c1b3-4b19-9655-5d64097787f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672147 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29f8c3a9-2f22-49c3-8423-537a8e3b819d-config\") pod \"console-operator-58897d9998-7c46k\" (UID: \"29f8c3a9-2f22-49c3-8423-537a8e3b819d\") " pod="openshift-console-operator/console-operator-58897d9998-7c46k" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672161 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81f790c4-a6b8-4bb7-8a46-107e7ad04689-config\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672175 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/81f790c4-a6b8-4bb7-8a46-107e7ad04689-node-pullsecrets\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672188 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81f790c4-a6b8-4bb7-8a46-107e7ad04689-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672204 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f24v\" (UniqueName: 
\"kubernetes.io/projected/f60df734-c1b3-4b19-9655-5d64097787f7-kube-api-access-9f24v\") pod \"apiserver-7bbb656c7d-mbwfq\" (UID: \"f60df734-c1b3-4b19-9655-5d64097787f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672220 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzqlt\" (UniqueName: \"kubernetes.io/projected/29f8c3a9-2f22-49c3-8423-537a8e3b819d-kube-api-access-hzqlt\") pod \"console-operator-58897d9998-7c46k\" (UID: \"29f8c3a9-2f22-49c3-8423-537a8e3b819d\") " pod="openshift-console-operator/console-operator-58897d9998-7c46k" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672236 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/81f790c4-a6b8-4bb7-8a46-107e7ad04689-image-import-ca\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672251 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f60df734-c1b3-4b19-9655-5d64097787f7-etcd-client\") pod \"apiserver-7bbb656c7d-mbwfq\" (UID: \"f60df734-c1b3-4b19-9655-5d64097787f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672266 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f60df734-c1b3-4b19-9655-5d64097787f7-encryption-config\") pod \"apiserver-7bbb656c7d-mbwfq\" (UID: \"f60df734-c1b3-4b19-9655-5d64097787f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672296 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a3195ecd-280e-475f-a3e5-7081b0db65f3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-66swt\" (UID: \"a3195ecd-280e-475f-a3e5-7081b0db65f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-66swt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672320 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/eab6e215-cf16-47d9-9049-9f6a0ed1239a-stats-auth\") pod \"router-default-5444994796-6b4s4\" (UID: \"eab6e215-cf16-47d9-9049-9f6a0ed1239a\") " pod="openshift-ingress/router-default-5444994796-6b4s4" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672336 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29f8c3a9-2f22-49c3-8423-537a8e3b819d-trusted-ca\") pod \"console-operator-58897d9998-7c46k\" (UID: \"29f8c3a9-2f22-49c3-8423-537a8e3b819d\") " pod="openshift-console-operator/console-operator-58897d9998-7c46k" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672350 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6d65\" (UniqueName: \"kubernetes.io/projected/ed1bbbdd-aa02-4472-867f-ef6f2c991728-kube-api-access-f6d65\") pod \"downloads-7954f5f757-4q5v5\" (UID: \"ed1bbbdd-aa02-4472-867f-ef6f2c991728\") " pod="openshift-console/downloads-7954f5f757-4q5v5" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672365 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f60df734-c1b3-4b19-9655-5d64097787f7-audit-policies\") pod \"apiserver-7bbb656c7d-mbwfq\" (UID: \"f60df734-c1b3-4b19-9655-5d64097787f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672379 5043 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f60df734-c1b3-4b19-9655-5d64097787f7-audit-dir\") pod \"apiserver-7bbb656c7d-mbwfq\" (UID: \"f60df734-c1b3-4b19-9655-5d64097787f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672400 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3195ecd-280e-475f-a3e5-7081b0db65f3-serving-cert\") pod \"openshift-config-operator-7777fb866f-66swt\" (UID: \"a3195ecd-280e-475f-a3e5-7081b0db65f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-66swt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672770 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd5w5"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.672792 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7zfvk"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.673418 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81f790c4-a6b8-4bb7-8a46-107e7ad04689-config\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.673476 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/81f790c4-a6b8-4bb7-8a46-107e7ad04689-node-pullsecrets\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.674094 5043 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29f8c3a9-2f22-49c3-8423-537a8e3b819d-config\") pod \"console-operator-58897d9998-7c46k\" (UID: \"29f8c3a9-2f22-49c3-8423-537a8e3b819d\") " pod="openshift-console-operator/console-operator-58897d9998-7c46k" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.674163 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81f790c4-a6b8-4bb7-8a46-107e7ad04689-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.674861 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/81f790c4-a6b8-4bb7-8a46-107e7ad04689-etcd-serving-ca\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.674873 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/81f790c4-a6b8-4bb7-8a46-107e7ad04689-image-import-ca\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.674892 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29f8c3a9-2f22-49c3-8423-537a8e3b819d-trusted-ca\") pod \"console-operator-58897d9998-7c46k\" (UID: \"29f8c3a9-2f22-49c3-8423-537a8e3b819d\") " pod="openshift-console-operator/console-operator-58897d9998-7c46k" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.674949 5043 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f60df734-c1b3-4b19-9655-5d64097787f7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mbwfq\" (UID: \"f60df734-c1b3-4b19-9655-5d64097787f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.675054 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/81f790c4-a6b8-4bb7-8a46-107e7ad04689-audit-dir\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.675305 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f60df734-c1b3-4b19-9655-5d64097787f7-audit-policies\") pod \"apiserver-7bbb656c7d-mbwfq\" (UID: \"f60df734-c1b3-4b19-9655-5d64097787f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.675343 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f60df734-c1b3-4b19-9655-5d64097787f7-audit-dir\") pod \"apiserver-7bbb656c7d-mbwfq\" (UID: \"f60df734-c1b3-4b19-9655-5d64097787f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.675687 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/81f790c4-a6b8-4bb7-8a46-107e7ad04689-audit\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.676500 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-j7b8d"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.676833 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a3195ecd-280e-475f-a3e5-7081b0db65f3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-66swt\" (UID: \"a3195ecd-280e-475f-a3e5-7081b0db65f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-66swt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.677129 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f60df734-c1b3-4b19-9655-5d64097787f7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mbwfq\" (UID: \"f60df734-c1b3-4b19-9655-5d64097787f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.677138 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrn4l"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.678355 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3195ecd-280e-475f-a3e5-7081b0db65f3-serving-cert\") pod \"openshift-config-operator-7777fb866f-66swt\" (UID: \"a3195ecd-280e-475f-a3e5-7081b0db65f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-66swt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.678764 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/81f790c4-a6b8-4bb7-8a46-107e7ad04689-encryption-config\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.678793 5043 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/81f790c4-a6b8-4bb7-8a46-107e7ad04689-etcd-client\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.679112 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.679573 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f60df734-c1b3-4b19-9655-5d64097787f7-etcd-client\") pod \"apiserver-7bbb656c7d-mbwfq\" (UID: \"f60df734-c1b3-4b19-9655-5d64097787f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.679913 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8hqj6"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.680055 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f60df734-c1b3-4b19-9655-5d64097787f7-encryption-config\") pod \"apiserver-7bbb656c7d-mbwfq\" (UID: \"f60df734-c1b3-4b19-9655-5d64097787f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.681317 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-l89zw"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.681769 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29f8c3a9-2f22-49c3-8423-537a8e3b819d-serving-cert\") pod \"console-operator-58897d9998-7c46k\" (UID: \"29f8c3a9-2f22-49c3-8423-537a8e3b819d\") " 
pod="openshift-console-operator/console-operator-58897d9998-7c46k" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.682980 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4q5v5"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.683930 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dfsn8"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.683944 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81f790c4-a6b8-4bb7-8a46-107e7ad04689-serving-cert\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.684472 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f60df734-c1b3-4b19-9655-5d64097787f7-serving-cert\") pod \"apiserver-7bbb656c7d-mbwfq\" (UID: \"f60df734-c1b3-4b19-9655-5d64097787f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.685339 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.686586 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnr9c"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.687922 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7c46k"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.689270 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400915-zd9vl"] Nov 25 07:18:06 crc 
kubenswrapper[5043]: I1125 07:18:06.690793 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-gskgn"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.692145 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9jj8v"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.693585 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7mf86"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.694931 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.695082 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jxz26"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.697265 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-x6762"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.698454 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9rfgk"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.699426 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fhv5q"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.700525 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lt62z"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.701540 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j7vsb"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.702694 5043 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hqtnq"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.703631 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64rrj"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.704616 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-n7xt5"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.705554 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-n7xt5" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.706020 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-b6j49"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.707130 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-q4h7x"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.707705 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-q4h7x" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.709349 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8w2c6"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.710925 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9bx8l"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.712177 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kglrl"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.713383 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pwrdz"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.714671 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-q4h7x"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.715403 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.716046 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-n7xt5"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.717321 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-rp8sr"] Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.718096 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rp8sr" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.735813 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.755006 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.776396 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.801294 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.815019 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.835221 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.856437 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.875506 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.896056 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.915788 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.935376 5043 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.955372 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.976275 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 25 07:18:06 crc kubenswrapper[5043]: I1125 07:18:06.994752 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.015146 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.035162 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.055868 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.075419 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.087210 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/eab6e215-cf16-47d9-9049-9f6a0ed1239a-stats-auth\") pod \"router-default-5444994796-6b4s4\" (UID: \"eab6e215-cf16-47d9-9049-9f6a0ed1239a\") " pod="openshift-ingress/router-default-5444994796-6b4s4" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.095563 5043 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.115920 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.120219 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/eab6e215-cf16-47d9-9049-9f6a0ed1239a-default-certificate\") pod \"router-default-5444994796-6b4s4\" (UID: \"eab6e215-cf16-47d9-9049-9f6a0ed1239a\") " pod="openshift-ingress/router-default-5444994796-6b4s4" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.136296 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.156229 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.166824 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eab6e215-cf16-47d9-9049-9f6a0ed1239a-service-ca-bundle\") pod \"router-default-5444994796-6b4s4\" (UID: \"eab6e215-cf16-47d9-9049-9f6a0ed1239a\") " pod="openshift-ingress/router-default-5444994796-6b4s4" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.176118 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.200016 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.216265 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eab6e215-cf16-47d9-9049-9f6a0ed1239a-metrics-certs\") pod 
\"router-default-5444994796-6b4s4\" (UID: \"eab6e215-cf16-47d9-9049-9f6a0ed1239a\") " pod="openshift-ingress/router-default-5444994796-6b4s4" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.296092 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.315421 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.335462 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.354578 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.376055 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.395366 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.416027 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.435531 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.455926 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 25 07:18:07 crc 
kubenswrapper[5043]: I1125 07:18:07.475457 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.495783 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.515875 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.535520 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.558313 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 25 07:18:07 crc kubenswrapper[5043]: E1125 07:18:07.571149 5043 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Nov 25 07:18:07 crc kubenswrapper[5043]: E1125 07:18:07.571186 5043 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Nov 25 07:18:07 crc kubenswrapper[5043]: E1125 07:18:07.571228 5043 secret.go:188] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Nov 25 07:18:07 crc kubenswrapper[5043]: E1125 07:18:07.571237 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/809f70bc-86b7-4712-b396-79f602e6684d-trusted-ca-bundle podName:809f70bc-86b7-4712-b396-79f602e6684d nodeName:}" failed. 
No retries permitted until 2025-11-25 07:18:08.071216921 +0000 UTC m=+152.239412652 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/809f70bc-86b7-4712-b396-79f602e6684d-trusted-ca-bundle") pod "authentication-operator-69f744f599-ptlq5" (UID: "809f70bc-86b7-4712-b396-79f602e6684d") : failed to sync configmap cache: timed out waiting for the condition Nov 25 07:18:07 crc kubenswrapper[5043]: E1125 07:18:07.571361 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/809f70bc-86b7-4712-b396-79f602e6684d-service-ca-bundle podName:809f70bc-86b7-4712-b396-79f602e6684d nodeName:}" failed. No retries permitted until 2025-11-25 07:18:08.071337964 +0000 UTC m=+152.239533725 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/809f70bc-86b7-4712-b396-79f602e6684d-service-ca-bundle") pod "authentication-operator-69f744f599-ptlq5" (UID: "809f70bc-86b7-4712-b396-79f602e6684d") : failed to sync configmap cache: timed out waiting for the condition Nov 25 07:18:07 crc kubenswrapper[5043]: E1125 07:18:07.571387 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/881d759e-3077-4c93-b9de-86d8d960d3ca-samples-operator-tls podName:881d759e-3077-4c93-b9de-86d8d960d3ca nodeName:}" failed. No retries permitted until 2025-11-25 07:18:08.071374215 +0000 UTC m=+152.239569976 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/881d759e-3077-4c93-b9de-86d8d960d3ca-samples-operator-tls") pod "cluster-samples-operator-665b6dd947-jrn4l" (UID: "881d759e-3077-4c93-b9de-86d8d960d3ca") : failed to sync secret cache: timed out waiting for the condition Nov 25 07:18:07 crc kubenswrapper[5043]: E1125 07:18:07.572475 5043 secret.go:188] Couldn't get secret openshift-authentication-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 25 07:18:07 crc kubenswrapper[5043]: E1125 07:18:07.572536 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/809f70bc-86b7-4712-b396-79f602e6684d-serving-cert podName:809f70bc-86b7-4712-b396-79f602e6684d nodeName:}" failed. No retries permitted until 2025-11-25 07:18:08.072520425 +0000 UTC m=+152.240716156 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/809f70bc-86b7-4712-b396-79f602e6684d-serving-cert") pod "authentication-operator-69f744f599-ptlq5" (UID: "809f70bc-86b7-4712-b396-79f602e6684d") : failed to sync secret cache: timed out waiting for the condition Nov 25 07:18:07 crc kubenswrapper[5043]: E1125 07:18:07.572534 5043 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: failed to sync configmap cache: timed out waiting for the condition Nov 25 07:18:07 crc kubenswrapper[5043]: E1125 07:18:07.572595 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/809f70bc-86b7-4712-b396-79f602e6684d-config podName:809f70bc-86b7-4712-b396-79f602e6684d nodeName:}" failed. No retries permitted until 2025-11-25 07:18:08.072578567 +0000 UTC m=+152.240774298 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/809f70bc-86b7-4712-b396-79f602e6684d-config") pod "authentication-operator-69f744f599-ptlq5" (UID: "809f70bc-86b7-4712-b396-79f602e6684d") : failed to sync configmap cache: timed out waiting for the condition Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.575217 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.594346 5043 request.go:700] Waited for 1.001770792s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-operator-dockercfg-2bh8d&limit=500&resourceVersion=0 Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.595948 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.615967 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.635702 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.655746 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.675757 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 
07:18:07.695800 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.715887 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.735416 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.755193 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.775902 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.795594 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.816126 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.836308 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.855752 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.881258 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.896589 5043 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.915242 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.935184 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.956286 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.976046 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 25 07:18:07 crc kubenswrapper[5043]: I1125 07:18:07.995169 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.016809 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.036318 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.056420 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.075984 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.090748 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/809f70bc-86b7-4712-b396-79f602e6684d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ptlq5\" (UID: \"809f70bc-86b7-4712-b396-79f602e6684d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.090818 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/809f70bc-86b7-4712-b396-79f602e6684d-service-ca-bundle\") pod \"authentication-operator-69f744f599-ptlq5\" (UID: \"809f70bc-86b7-4712-b396-79f602e6684d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.090844 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/881d759e-3077-4c93-b9de-86d8d960d3ca-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jrn4l\" (UID: \"881d759e-3077-4c93-b9de-86d8d960d3ca\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrn4l" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.090888 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/809f70bc-86b7-4712-b396-79f602e6684d-config\") pod \"authentication-operator-69f744f599-ptlq5\" (UID: \"809f70bc-86b7-4712-b396-79f602e6684d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.090922 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/809f70bc-86b7-4712-b396-79f602e6684d-serving-cert\") pod \"authentication-operator-69f744f599-ptlq5\" (UID: \"809f70bc-86b7-4712-b396-79f602e6684d\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.096562 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.116725 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.141198 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.156427 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.175912 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.195598 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.216822 5043 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.236483 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.255897 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 25 07:18:08 crc kubenswrapper[5043]: E1125 07:18:08.266901 5043 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the 
condition Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.275612 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 25 07:18:08 crc kubenswrapper[5043]: E1125 07:18:08.289470 5043 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.295973 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.315896 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.336182 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.373289 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lgr8\" (UniqueName: \"kubernetes.io/projected/eab6e215-cf16-47d9-9049-9f6a0ed1239a-kube-api-access-8lgr8\") pod \"router-default-5444994796-6b4s4\" (UID: \"eab6e215-cf16-47d9-9049-9f6a0ed1239a\") " pod="openshift-ingress/router-default-5444994796-6b4s4" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.391138 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f24v\" (UniqueName: \"kubernetes.io/projected/f60df734-c1b3-4b19-9655-5d64097787f7-kube-api-access-9f24v\") pod \"apiserver-7bbb656c7d-mbwfq\" (UID: \"f60df734-c1b3-4b19-9655-5d64097787f7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.410097 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzqlt\" (UniqueName: 
\"kubernetes.io/projected/29f8c3a9-2f22-49c3-8423-537a8e3b819d-kube-api-access-hzqlt\") pod \"console-operator-58897d9998-7c46k\" (UID: \"29f8c3a9-2f22-49c3-8423-537a8e3b819d\") " pod="openshift-console-operator/console-operator-58897d9998-7c46k" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.430918 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsdfx\" (UniqueName: \"kubernetes.io/projected/81f790c4-a6b8-4bb7-8a46-107e7ad04689-kube-api-access-nsdfx\") pod \"apiserver-76f77b778f-jxz26\" (UID: \"81f790c4-a6b8-4bb7-8a46-107e7ad04689\") " pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.452822 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6d65\" (UniqueName: \"kubernetes.io/projected/ed1bbbdd-aa02-4472-867f-ef6f2c991728-kube-api-access-f6d65\") pod \"downloads-7954f5f757-4q5v5\" (UID: \"ed1bbbdd-aa02-4472-867f-ef6f2c991728\") " pod="openshift-console/downloads-7954f5f757-4q5v5" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.454488 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-7c46k" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.472948 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhpwf\" (UniqueName: \"kubernetes.io/projected/a3195ecd-280e-475f-a3e5-7081b0db65f3-kube-api-access-dhpwf\") pod \"openshift-config-operator-7777fb866f-66swt\" (UID: \"a3195ecd-280e-475f-a3e5-7081b0db65f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-66swt" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.476381 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.496164 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.508495 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-4q5v5" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.515652 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.536484 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.557200 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.557618 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-6b4s4" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.576148 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 25 07:18:08 crc kubenswrapper[5043]: W1125 07:18:08.584970 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeab6e215_cf16_47d9_9049_9f6a0ed1239a.slice/crio-f51ec0778ed17d99a92f1f7ecc48337962affc7c8f907882f666f131c92aea33 WatchSource:0}: Error finding container f51ec0778ed17d99a92f1f7ecc48337962affc7c8f907882f666f131c92aea33: Status 404 returned error can't find the container with id f51ec0778ed17d99a92f1f7ecc48337962affc7c8f907882f666f131c92aea33 Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.597766 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.613817 5043 request.go:700] Waited for 1.895413065s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&limit=500&resourceVersion=0 Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.615381 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.616778 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7c46k"] Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.625108 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-66swt" Nov 25 07:18:08 crc kubenswrapper[5043]: W1125 07:18:08.630816 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29f8c3a9_2f22_49c3_8423_537a8e3b819d.slice/crio-c811396efdd68b262e3ae9656e4475c7326fc6c9821bcb48e488d0ce5843c5e0 WatchSource:0}: Error finding container c811396efdd68b262e3ae9656e4475c7326fc6c9821bcb48e488d0ce5843c5e0: Status 404 returned error can't find the container with id c811396efdd68b262e3ae9656e4475c7326fc6c9821bcb48e488d0ce5843c5e0 Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.635487 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.657051 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.657400 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.685241 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4q5v5"] Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.701789 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/56d9ce8c-65f4-4482-860f-a7009c96e356-registry-certificates\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.703996 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/56d9ce8c-65f4-4482-860f-a7009c96e356-registry-tls\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.704046 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.704076 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56d9ce8c-65f4-4482-860f-a7009c96e356-trusted-ca\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 
25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.704125 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/56d9ce8c-65f4-4482-860f-a7009c96e356-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:08 crc kubenswrapper[5043]: E1125 07:18:08.706428 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:09.206411675 +0000 UTC m=+153.374607396 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.716118 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.716334 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.722447 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/809f70bc-86b7-4712-b396-79f602e6684d-config\") pod \"authentication-operator-69f744f599-ptlq5\" (UID: \"809f70bc-86b7-4712-b396-79f602e6684d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.738780 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.757956 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.759164 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4q5v5" event={"ID":"ed1bbbdd-aa02-4472-867f-ef6f2c991728","Type":"ContainerStarted","Data":"52a5f9268709c90eb2e43b96e33a54ee5b125323e22dd97f42901354965314bf"} Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.761095 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-7c46k" event={"ID":"29f8c3a9-2f22-49c3-8423-537a8e3b819d","Type":"ContainerStarted","Data":"c811396efdd68b262e3ae9656e4475c7326fc6c9821bcb48e488d0ce5843c5e0"} Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.764051 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-6b4s4" 
event={"ID":"eab6e215-cf16-47d9-9049-9f6a0ed1239a","Type":"ContainerStarted","Data":"f51ec0778ed17d99a92f1f7ecc48337962affc7c8f907882f666f131c92aea33"} Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.781526 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.789531 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/809f70bc-86b7-4712-b396-79f602e6684d-serving-cert\") pod \"authentication-operator-69f744f599-ptlq5\" (UID: \"809f70bc-86b7-4712-b396-79f602e6684d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.792075 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-66swt"] Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.796262 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 25 07:18:08 crc kubenswrapper[5043]: E1125 07:18:08.798814 5043 projected.go:194] Error preparing data for projected volume kube-api-access-twglb for pod openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrn4l: failed to sync configmap cache: timed out waiting for the condition Nov 25 07:18:08 crc kubenswrapper[5043]: E1125 07:18:08.798900 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/881d759e-3077-4c93-b9de-86d8d960d3ca-kube-api-access-twglb podName:881d759e-3077-4c93-b9de-86d8d960d3ca nodeName:}" failed. No retries permitted until 2025-11-25 07:18:09.298872181 +0000 UTC m=+153.467067902 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-twglb" (UniqueName: "kubernetes.io/projected/881d759e-3077-4c93-b9de-86d8d960d3ca-kube-api-access-twglb") pod "cluster-samples-operator-665b6dd947-jrn4l" (UID: "881d759e-3077-4c93-b9de-86d8d960d3ca") : failed to sync configmap cache: timed out waiting for the condition Nov 25 07:18:08 crc kubenswrapper[5043]: W1125 07:18:08.802559 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3195ecd_280e_475f_a3e5_7081b0db65f3.slice/crio-8e9834e0e4de57219949c9118154bd8e87d5ec79815c2ed5b257d75f334a6c41 WatchSource:0}: Error finding container 8e9834e0e4de57219949c9118154bd8e87d5ec79815c2ed5b257d75f334a6c41: Status 404 returned error can't find the container with id 8e9834e0e4de57219949c9118154bd8e87d5ec79815c2ed5b257d75f334a6c41 Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.804682 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.804826 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e72e27f3-de8a-4763-a3ee-cf8ea1531909-config\") pod \"kube-controller-manager-operator-78b949d7b-grmsk\" (UID: \"e72e27f3-de8a-4763-a3ee-cf8ea1531909\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grmsk" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.804855 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.804876 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f75a6197-9de8-4720-af31-ebc12fe35e48-client-ca\") pod \"route-controller-manager-6576b87f9c-hpq4k\" (UID: \"f75a6197-9de8-4720-af31-ebc12fe35e48\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.804897 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bece8729-924c-4595-88ee-ddcb1873b643-metrics-tls\") pod \"ingress-operator-5b745b69d9-7zfvk\" (UID: \"bece8729-924c-4595-88ee-ddcb1873b643\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zfvk" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.804917 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.804936 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxgxl\" (UniqueName: \"kubernetes.io/projected/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-kube-api-access-fxgxl\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.804955 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj9z5\" (UniqueName: \"kubernetes.io/projected/f75a6197-9de8-4720-af31-ebc12fe35e48-kube-api-access-wj9z5\") pod \"route-controller-manager-6576b87f9c-hpq4k\" (UID: \"f75a6197-9de8-4720-af31-ebc12fe35e48\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.804985 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/56d9ce8c-65f4-4482-860f-a7009c96e356-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.805006 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qdzb\" (UniqueName: \"kubernetes.io/projected/8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb-kube-api-access-5qdzb\") pod \"etcd-operator-b45778765-fhv5q\" (UID: \"8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhv5q" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.805029 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.805060 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/060543b5-b830-412a-916a-0456db20f1ca-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cchz7\" (UID: \"060543b5-b830-412a-916a-0456db20f1ca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cchz7" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.805082 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b80cda5-b011-4b59-869f-67ed66fe1a5a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8hqj6\" (UID: \"2b80cda5-b011-4b59-869f-67ed66fe1a5a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8hqj6" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.805099 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-audit-dir\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.805116 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.805134 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d00aa552-e700-4bec-9818-3084ac601a92-config\") pod \"machine-api-operator-5694c8668f-l89zw\" (UID: 
\"d00aa552-e700-4bec-9818-3084ac601a92\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l89zw" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.805149 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b18ece39-f2f5-41f9-b2e1-79f9f880791b-trusted-ca-bundle\") pod \"console-f9d7485db-lbz4p\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " pod="openshift-console/console-f9d7485db-lbz4p" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.805168 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-client-ca\") pod \"controller-manager-879f6c89f-dfsn8\" (UID: \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.805185 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bece8729-924c-4595-88ee-ddcb1873b643-trusted-ca\") pod \"ingress-operator-5b745b69d9-7zfvk\" (UID: \"bece8729-924c-4595-88ee-ddcb1873b643\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zfvk" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.805203 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb-serving-cert\") pod \"etcd-operator-b45778765-fhv5q\" (UID: \"8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhv5q" Nov 25 07:18:08 crc kubenswrapper[5043]: E1125 07:18:08.806314 5043 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:09.306286546 +0000 UTC m=+153.474482267 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.806355 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-729pp\" (UniqueName: \"kubernetes.io/projected/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-kube-api-access-729pp\") pod \"controller-manager-879f6c89f-dfsn8\" (UID: \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.806386 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6qjg\" (UniqueName: \"kubernetes.io/projected/ab8cbc1d-68e1-40c7-a280-4852974cf941-kube-api-access-j6qjg\") pod \"machine-approver-56656f9798-5kkx6\" (UID: \"ab8cbc1d-68e1-40c7-a280-4852974cf941\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kkx6" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.806415 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb-etcd-ca\") pod \"etcd-operator-b45778765-fhv5q\" (UID: \"8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-fhv5q" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.806495 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97sch\" (UniqueName: \"kubernetes.io/projected/b18ece39-f2f5-41f9-b2e1-79f9f880791b-kube-api-access-97sch\") pod \"console-f9d7485db-lbz4p\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " pod="openshift-console/console-f9d7485db-lbz4p" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.806545 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/56d9ce8c-65f4-4482-860f-a7009c96e356-registry-certificates\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.806585 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb-etcd-service-ca\") pod \"etcd-operator-b45778765-fhv5q\" (UID: \"8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhv5q" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.806927 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/56d9ce8c-65f4-4482-860f-a7009c96e356-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.807017 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dfsn8\" (UID: \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.807048 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ab8cbc1d-68e1-40c7-a280-4852974cf941-auth-proxy-config\") pod \"machine-approver-56656f9798-5kkx6\" (UID: \"ab8cbc1d-68e1-40c7-a280-4852974cf941\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kkx6" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.807085 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-audit-policies\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.807112 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.807133 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/060543b5-b830-412a-916a-0456db20f1ca-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cchz7\" (UID: \"060543b5-b830-412a-916a-0456db20f1ca\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cchz7" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.807179 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e0fe8fc-205b-4d60-8849-f624e26034ab-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cw9mz\" (UID: \"9e0fe8fc-205b-4d60-8849-f624e26034ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cw9mz" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.807203 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-config\") pod \"controller-manager-879f6c89f-dfsn8\" (UID: \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.807227 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk98p\" (UniqueName: \"kubernetes.io/projected/060543b5-b830-412a-916a-0456db20f1ca-kube-api-access-xk98p\") pod \"openshift-controller-manager-operator-756b6f6bc6-cchz7\" (UID: \"060543b5-b830-412a-916a-0456db20f1ca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cchz7" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.807248 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b80cda5-b011-4b59-869f-67ed66fe1a5a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8hqj6\" (UID: \"2b80cda5-b011-4b59-869f-67ed66fe1a5a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8hqj6" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.807288 5043 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/56d9ce8c-65f4-4482-860f-a7009c96e356-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.807312 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ssxl\" (UniqueName: \"kubernetes.io/projected/bece8729-924c-4595-88ee-ddcb1873b643-kube-api-access-8ssxl\") pod \"ingress-operator-5b745b69d9-7zfvk\" (UID: \"bece8729-924c-4595-88ee-ddcb1873b643\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zfvk" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.807348 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e72e27f3-de8a-4763-a3ee-cf8ea1531909-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-grmsk\" (UID: \"e72e27f3-de8a-4763-a3ee-cf8ea1531909\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grmsk" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.807373 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-serving-cert\") pod \"controller-manager-879f6c89f-dfsn8\" (UID: \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.807395 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/dc0571cc-8090-4490-af8a-8199e9f983a9-metrics-tls\") pod \"dns-operator-744455d44c-wz5hw\" (UID: \"dc0571cc-8090-4490-af8a-8199e9f983a9\") " pod="openshift-dns-operator/dns-operator-744455d44c-wz5hw" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.807417 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d00aa552-e700-4bec-9818-3084ac601a92-images\") pod \"machine-api-operator-5694c8668f-l89zw\" (UID: \"d00aa552-e700-4bec-9818-3084ac601a92\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l89zw" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.807437 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e72e27f3-de8a-4763-a3ee-cf8ea1531909-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-grmsk\" (UID: \"e72e27f3-de8a-4763-a3ee-cf8ea1531909\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grmsk" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.807461 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b18ece39-f2f5-41f9-b2e1-79f9f880791b-oauth-serving-cert\") pod \"console-f9d7485db-lbz4p\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " pod="openshift-console/console-f9d7485db-lbz4p" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.807482 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb-etcd-client\") pod \"etcd-operator-b45778765-fhv5q\" (UID: \"8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhv5q" Nov 25 07:18:08 crc kubenswrapper[5043]: 
I1125 07:18:08.807503 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.807525 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.807556 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jt99\" (UniqueName: \"kubernetes.io/projected/dc0571cc-8090-4490-af8a-8199e9f983a9-kube-api-access-5jt99\") pod \"dns-operator-744455d44c-wz5hw\" (UID: \"dc0571cc-8090-4490-af8a-8199e9f983a9\") " pod="openshift-dns-operator/dns-operator-744455d44c-wz5hw" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.807579 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b18ece39-f2f5-41f9-b2e1-79f9f880791b-console-config\") pod \"console-f9d7485db-lbz4p\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " pod="openshift-console/console-f9d7485db-lbz4p" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.807754 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/56d9ce8c-65f4-4482-860f-a7009c96e356-registry-certificates\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.807775 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e0fe8fc-205b-4d60-8849-f624e26034ab-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cw9mz\" (UID: \"9e0fe8fc-205b-4d60-8849-f624e26034ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cw9mz" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.808027 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.808240 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f75a6197-9de8-4720-af31-ebc12fe35e48-serving-cert\") pod \"route-controller-manager-6576b87f9c-hpq4k\" (UID: \"f75a6197-9de8-4720-af31-ebc12fe35e48\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.808269 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b18ece39-f2f5-41f9-b2e1-79f9f880791b-console-oauth-config\") pod \"console-f9d7485db-lbz4p\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " 
pod="openshift-console/console-f9d7485db-lbz4p" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.808295 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdp74\" (UniqueName: \"kubernetes.io/projected/56d9ce8c-65f4-4482-860f-a7009c96e356-kube-api-access-bdp74\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.808319 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb-config\") pod \"etcd-operator-b45778765-fhv5q\" (UID: \"8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhv5q" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.808353 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2b80cda5-b011-4b59-869f-67ed66fe1a5a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8hqj6\" (UID: \"2b80cda5-b011-4b59-869f-67ed66fe1a5a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8hqj6" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.808378 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f75a6197-9de8-4720-af31-ebc12fe35e48-config\") pod \"route-controller-manager-6576b87f9c-hpq4k\" (UID: \"f75a6197-9de8-4720-af31-ebc12fe35e48\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.808404 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-cnrqw\" (UniqueName: \"kubernetes.io/projected/2b80cda5-b011-4b59-869f-67ed66fe1a5a-kube-api-access-cnrqw\") pod \"cluster-image-registry-operator-dc59b4c8b-8hqj6\" (UID: \"2b80cda5-b011-4b59-869f-67ed66fe1a5a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8hqj6" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.808432 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/56d9ce8c-65f4-4482-860f-a7009c96e356-registry-tls\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.808459 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf8n6\" (UniqueName: \"kubernetes.io/projected/d00aa552-e700-4bec-9818-3084ac601a92-kube-api-access-tf8n6\") pod \"machine-api-operator-5694c8668f-l89zw\" (UID: \"d00aa552-e700-4bec-9818-3084ac601a92\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l89zw" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.808871 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ab8cbc1d-68e1-40c7-a280-4852974cf941-machine-approver-tls\") pod \"machine-approver-56656f9798-5kkx6\" (UID: \"ab8cbc1d-68e1-40c7-a280-4852974cf941\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kkx6" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.808970 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d00aa552-e700-4bec-9818-3084ac601a92-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-l89zw\" (UID: 
\"d00aa552-e700-4bec-9818-3084ac601a92\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l89zw" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.809019 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bece8729-924c-4595-88ee-ddcb1873b643-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7zfvk\" (UID: \"bece8729-924c-4595-88ee-ddcb1873b643\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zfvk" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.809428 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.809493 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56d9ce8c-65f4-4482-860f-a7009c96e356-trusted-ca\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.809535 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b18ece39-f2f5-41f9-b2e1-79f9f880791b-console-serving-cert\") pod \"console-f9d7485db-lbz4p\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " pod="openshift-console/console-f9d7485db-lbz4p" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.809750 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: E1125 07:18:08.809998 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:09.309984743 +0000 UTC m=+153.478180464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.810446 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56d9ce8c-65f4-4482-860f-a7009c96e356-bound-sa-token\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.810513 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56d9ce8c-65f4-4482-860f-a7009c96e356-trusted-ca\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.810524 
5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.810622 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.810726 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab8cbc1d-68e1-40c7-a280-4852974cf941-config\") pod \"machine-approver-56656f9798-5kkx6\" (UID: \"ab8cbc1d-68e1-40c7-a280-4852974cf941\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kkx6" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.810755 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b18ece39-f2f5-41f9-b2e1-79f9f880791b-service-ca\") pod \"console-f9d7485db-lbz4p\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " pod="openshift-console/console-f9d7485db-lbz4p" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.810776 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88jlv\" (UniqueName: \"kubernetes.io/projected/9e0fe8fc-205b-4d60-8849-f624e26034ab-kube-api-access-88jlv\") pod 
\"openshift-apiserver-operator-796bbdcf4f-cw9mz\" (UID: \"9e0fe8fc-205b-4d60-8849-f624e26034ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cw9mz" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.812764 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/56d9ce8c-65f4-4482-860f-a7009c96e356-registry-tls\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.820173 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.825249 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/809f70bc-86b7-4712-b396-79f602e6684d-service-ca-bundle\") pod \"authentication-operator-69f744f599-ptlq5\" (UID: \"809f70bc-86b7-4712-b396-79f602e6684d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.836421 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 25 07:18:08 crc kubenswrapper[5043]: E1125 07:18:08.840219 5043 projected.go:194] Error preparing data for projected volume kube-api-access-dft8v for pod openshift-authentication-operator/authentication-operator-69f744f599-ptlq5: failed to sync configmap cache: timed out waiting for the condition Nov 25 07:18:08 crc kubenswrapper[5043]: E1125 07:18:08.840297 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/809f70bc-86b7-4712-b396-79f602e6684d-kube-api-access-dft8v podName:809f70bc-86b7-4712-b396-79f602e6684d nodeName:}" failed. 
No retries permitted until 2025-11-25 07:18:09.340275997 +0000 UTC m=+153.508471718 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dft8v" (UniqueName: "kubernetes.io/projected/809f70bc-86b7-4712-b396-79f602e6684d-kube-api-access-dft8v") pod "authentication-operator-69f744f599-ptlq5" (UID: "809f70bc-86b7-4712-b396-79f602e6684d") : failed to sync configmap cache: timed out waiting for the condition Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.864971 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.871927 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/809f70bc-86b7-4712-b396-79f602e6684d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ptlq5\" (UID: \"809f70bc-86b7-4712-b396-79f602e6684d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.876170 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.887355 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/881d759e-3077-4c93-b9de-86d8d960d3ca-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jrn4l\" (UID: \"881d759e-3077-4c93-b9de-86d8d960d3ca\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrn4l" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.891508 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq"] Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.895228 5043 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.911411 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 07:18:08 crc kubenswrapper[5043]: E1125 07:18:08.911574 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:09.411556278 +0000 UTC m=+153.579751999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.911686 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e72e27f3-de8a-4763-a3ee-cf8ea1531909-config\") pod \"kube-controller-manager-operator-78b949d7b-grmsk\" (UID: \"e72e27f3-de8a-4763-a3ee-cf8ea1531909\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grmsk"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.911717 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.911746 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3a13dff-3c0c-4151-9514-42c40e8bc83f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hqtnq\" (UID: \"f3a13dff-3c0c-4151-9514-42c40e8bc83f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hqtnq"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.911797 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fc143568-8b91-44a5-9fff-5890b6c29e0c-certs\") pod \"machine-config-server-rp8sr\" (UID: \"fc143568-8b91-44a5-9fff-5890b6c29e0c\") " pod="openshift-machine-config-operator/machine-config-server-rp8sr"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.911822 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f75a6197-9de8-4720-af31-ebc12fe35e48-client-ca\") pod \"route-controller-manager-6576b87f9c-hpq4k\" (UID: \"f75a6197-9de8-4720-af31-ebc12fe35e48\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.911847 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bece8729-924c-4595-88ee-ddcb1873b643-metrics-tls\") pod \"ingress-operator-5b745b69d9-7zfvk\" (UID: \"bece8729-924c-4595-88ee-ddcb1873b643\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zfvk"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.911868 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj9z5\" (UniqueName: \"kubernetes.io/projected/f75a6197-9de8-4720-af31-ebc12fe35e48-kube-api-access-wj9z5\") pod \"route-controller-manager-6576b87f9c-hpq4k\" (UID: \"f75a6197-9de8-4720-af31-ebc12fe35e48\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.911890 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5d6fe950-18fc-440b-ad82-014a34669117-profile-collector-cert\") pod \"catalog-operator-68c6474976-9rfgk\" (UID: \"5d6fe950-18fc-440b-ad82-014a34669117\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9rfgk"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.911915 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3b4caa8-cc12-4739-8e10-d88cd9d4137d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lt62z\" (UID: \"c3b4caa8-cc12-4739-8e10-d88cd9d4137d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lt62z"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.911950 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/060543b5-b830-412a-916a-0456db20f1ca-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cchz7\" (UID: \"060543b5-b830-412a-916a-0456db20f1ca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cchz7"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.911972 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b80cda5-b011-4b59-869f-67ed66fe1a5a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8hqj6\" (UID: \"2b80cda5-b011-4b59-869f-67ed66fe1a5a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8hqj6"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.912001 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-audit-dir\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.912023 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6kxh\" (UniqueName: \"kubernetes.io/projected/4685134b-7bce-4398-8492-74670ecda13e-kube-api-access-d6kxh\") pod \"service-ca-operator-777779d784-9bx8l\" (UID: \"4685134b-7bce-4398-8492-74670ecda13e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bx8l"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.912057 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/47d812b6-dea3-420d-8cc4-68ba78877940-signing-cabundle\") pod \"service-ca-9c57cc56f-pwrdz\" (UID: \"47d812b6-dea3-420d-8cc4-68ba78877940\") " pod="openshift-service-ca/service-ca-9c57cc56f-pwrdz"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.912080 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-client-ca\") pod \"controller-manager-879f6c89f-dfsn8\" (UID: \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.912101 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb-serving-cert\") pod \"etcd-operator-b45778765-fhv5q\" (UID: \"8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhv5q"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.912123 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9c13ef47-e6a6-421d-9ff3-7438aa746faf-proxy-tls\") pod \"machine-config-operator-74547568cd-bsrgx\" (UID: \"9c13ef47-e6a6-421d-9ff3-7438aa746faf\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bsrgx"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.912143 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/79fa92ce-d201-4f86-b3b8-3311def2e2cf-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-gskgn\" (UID: \"79fa92ce-d201-4f86-b3b8-3311def2e2cf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gskgn"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.912167 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6qjg\" (UniqueName: \"kubernetes.io/projected/ab8cbc1d-68e1-40c7-a280-4852974cf941-kube-api-access-j6qjg\") pod \"machine-approver-56656f9798-5kkx6\" (UID: \"ab8cbc1d-68e1-40c7-a280-4852974cf941\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kkx6"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.912189 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb-etcd-ca\") pod \"etcd-operator-b45778765-fhv5q\" (UID: \"8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhv5q"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.912210 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9c13ef47-e6a6-421d-9ff3-7438aa746faf-images\") pod \"machine-config-operator-74547568cd-bsrgx\" (UID: \"9c13ef47-e6a6-421d-9ff3-7438aa746faf\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bsrgx"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.912235 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ab8cbc1d-68e1-40c7-a280-4852974cf941-auth-proxy-config\") pod \"machine-approver-56656f9798-5kkx6\" (UID: \"ab8cbc1d-68e1-40c7-a280-4852974cf941\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kkx6"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.912259 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-audit-policies\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.912281 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcph2\" (UniqueName: \"kubernetes.io/projected/c3b4caa8-cc12-4739-8e10-d88cd9d4137d-kube-api-access-bcph2\") pod \"kube-storage-version-migrator-operator-b67b599dd-lt62z\" (UID: \"c3b4caa8-cc12-4739-8e10-d88cd9d4137d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lt62z"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.912304 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.912321 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e5355a57-e5cd-4f37-8a4a-d416e1584c4c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-j7vsb\" (UID: \"e5355a57-e5cd-4f37-8a4a-d416e1584c4c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j7vsb"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.912351 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srvzv\" (UniqueName: \"kubernetes.io/projected/f3a13dff-3c0c-4151-9514-42c40e8bc83f-kube-api-access-srvzv\") pod \"marketplace-operator-79b997595-hqtnq\" (UID: \"f3a13dff-3c0c-4151-9514-42c40e8bc83f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hqtnq"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.912377 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e0fe8fc-205b-4d60-8849-f624e26034ab-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cw9mz\" (UID: \"9e0fe8fc-205b-4d60-8849-f624e26034ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cw9mz"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.912402 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk98p\" (UniqueName: \"kubernetes.io/projected/060543b5-b830-412a-916a-0456db20f1ca-kube-api-access-xk98p\") pod \"openshift-controller-manager-operator-756b6f6bc6-cchz7\" (UID: \"060543b5-b830-412a-916a-0456db20f1ca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cchz7"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.912431 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f3a13dff-3c0c-4151-9514-42c40e8bc83f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hqtnq\" (UID: \"f3a13dff-3c0c-4151-9514-42c40e8bc83f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hqtnq"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.912451 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf4d2d7a-3f37-4c3c-8895-27aef30652af-cert\") pod \"ingress-canary-q4h7x\" (UID: \"bf4d2d7a-3f37-4c3c-8895-27aef30652af\") " pod="openshift-ingress-canary/ingress-canary-q4h7x"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.912450 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e72e27f3-de8a-4763-a3ee-cf8ea1531909-config\") pod \"kube-controller-manager-operator-78b949d7b-grmsk\" (UID: \"e72e27f3-de8a-4763-a3ee-cf8ea1531909\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grmsk"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.914271 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-client-ca\") pod \"controller-manager-879f6c89f-dfsn8\" (UID: \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.914335 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-config\") pod \"controller-manager-879f6c89f-dfsn8\" (UID: \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.916668 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.917626 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e0fe8fc-205b-4d60-8849-f624e26034ab-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cw9mz\" (UID: \"9e0fe8fc-205b-4d60-8849-f624e26034ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cw9mz"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.917927 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ab8cbc1d-68e1-40c7-a280-4852974cf941-auth-proxy-config\") pod \"machine-approver-56656f9798-5kkx6\" (UID: \"ab8cbc1d-68e1-40c7-a280-4852974cf941\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kkx6"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.918301 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-audit-policies\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.918417 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-config\") pod \"controller-manager-879f6c89f-dfsn8\" (UID: \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.918778 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/060543b5-b830-412a-916a-0456db20f1ca-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cchz7\" (UID: \"060543b5-b830-412a-916a-0456db20f1ca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cchz7"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.930219 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/56d9ce8c-65f4-4482-860f-a7009c96e356-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.930285 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a79e2c5-003a-4929-8ba1-568e8ca6bb01-webhook-cert\") pod \"packageserver-d55dfcdfc-64rrj\" (UID: \"8a79e2c5-003a-4929-8ba1-568e8ca6bb01\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64rrj"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.919596 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.919177 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-audit-dir\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.920147 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb-etcd-ca\") pod \"etcd-operator-b45778765-fhv5q\" (UID: \"8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhv5q"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.921945 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.923354 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bece8729-924c-4595-88ee-ddcb1873b643-metrics-tls\") pod \"ingress-operator-5b745b69d9-7zfvk\" (UID: \"bece8729-924c-4595-88ee-ddcb1873b643\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zfvk"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.920508 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f75a6197-9de8-4720-af31-ebc12fe35e48-client-ca\") pod \"route-controller-manager-6576b87f9c-hpq4k\" (UID: \"f75a6197-9de8-4720-af31-ebc12fe35e48\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.919069 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb-serving-cert\") pod \"etcd-operator-b45778765-fhv5q\" (UID: \"8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhv5q"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.930882 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e72e27f3-de8a-4763-a3ee-cf8ea1531909-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-grmsk\" (UID: \"e72e27f3-de8a-4763-a3ee-cf8ea1531909\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grmsk"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.931464 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjwkf\" (UniqueName: \"kubernetes.io/projected/8a79e2c5-003a-4929-8ba1-568e8ca6bb01-kube-api-access-xjwkf\") pod \"packageserver-d55dfcdfc-64rrj\" (UID: \"8a79e2c5-003a-4929-8ba1-568e8ca6bb01\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64rrj"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.933007 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1fcc66d-6926-48a3-9473-7539f6e50415-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7mf86\" (UID: \"d1fcc66d-6926-48a3-9473-7539f6e50415\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7mf86"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.933045 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f110819e-9e33-4cf3-85b0-b92eaaaa223b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gnr9c\" (UID: \"f110819e-9e33-4cf3-85b0-b92eaaaa223b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnr9c"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.933504 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b18ece39-f2f5-41f9-b2e1-79f9f880791b-oauth-serving-cert\") pod \"console-f9d7485db-lbz4p\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " pod="openshift-console/console-f9d7485db-lbz4p"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.933533 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5e696527-7a38-49ea-8517-9f286a7daff0-csi-data-dir\") pod \"csi-hostpathplugin-b6j49\" (UID: \"5e696527-7a38-49ea-8517-9f286a7daff0\") " pod="hostpath-provisioner/csi-hostpathplugin-b6j49"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.933559 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d00aa552-e700-4bec-9818-3084ac601a92-images\") pod \"machine-api-operator-5694c8668f-l89zw\" (UID: \"d00aa552-e700-4bec-9818-3084ac601a92\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l89zw"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.933582 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.933635 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.933657 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e0fe8fc-205b-4d60-8849-f624e26034ab-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cw9mz\" (UID: \"9e0fe8fc-205b-4d60-8849-f624e26034ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cw9mz"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.934733 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78z6p\" (UniqueName: \"kubernetes.io/projected/576eeef9-fcf9-4db0-a0cc-4083e03277f6-kube-api-access-78z6p\") pod \"collect-profiles-29400915-zd9vl\" (UID: \"576eeef9-fcf9-4db0-a0cc-4083e03277f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-zd9vl"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.934799 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b18ece39-f2f5-41f9-b2e1-79f9f880791b-console-config\") pod \"console-f9d7485db-lbz4p\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " pod="openshift-console/console-f9d7485db-lbz4p"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.934844 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8a79e2c5-003a-4929-8ba1-568e8ca6bb01-tmpfs\") pod \"packageserver-d55dfcdfc-64rrj\" (UID: \"8a79e2c5-003a-4929-8ba1-568e8ca6bb01\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64rrj"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.934864 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b18ece39-f2f5-41f9-b2e1-79f9f880791b-console-oauth-config\") pod \"console-f9d7485db-lbz4p\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " pod="openshift-console/console-f9d7485db-lbz4p"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.934895 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb-config\") pod \"etcd-operator-b45778765-fhv5q\" (UID: \"8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhv5q"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.934913 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5htxj\" (UniqueName: \"kubernetes.io/projected/677ed5f9-0fae-4009-b50e-ede07073d251-kube-api-access-5htxj\") pod \"migrator-59844c95c7-x6762\" (UID: \"677ed5f9-0fae-4009-b50e-ede07073d251\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x6762"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.934935 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2b80cda5-b011-4b59-869f-67ed66fe1a5a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8hqj6\" (UID: \"2b80cda5-b011-4b59-869f-67ed66fe1a5a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8hqj6"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.934954 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f75a6197-9de8-4720-af31-ebc12fe35e48-config\") pod \"route-controller-manager-6576b87f9c-hpq4k\" (UID: \"f75a6197-9de8-4720-af31-ebc12fe35e48\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.934973 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf8n6\" (UniqueName: \"kubernetes.io/projected/d00aa552-e700-4bec-9818-3084ac601a92-kube-api-access-tf8n6\") pod \"machine-api-operator-5694c8668f-l89zw\" (UID: \"d00aa552-e700-4bec-9818-3084ac601a92\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l89zw"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.934990 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d00aa552-e700-4bec-9818-3084ac601a92-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-l89zw\" (UID: \"d00aa552-e700-4bec-9818-3084ac601a92\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l89zw"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935006 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bece8729-924c-4595-88ee-ddcb1873b643-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7zfvk\" (UID: \"bece8729-924c-4595-88ee-ddcb1873b643\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zfvk"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935022 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a79e2c5-003a-4929-8ba1-568e8ca6bb01-apiservice-cert\") pod \"packageserver-d55dfcdfc-64rrj\" (UID: \"8a79e2c5-003a-4929-8ba1-568e8ca6bb01\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64rrj"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935053 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5e696527-7a38-49ea-8517-9f286a7daff0-socket-dir\") pod \"csi-hostpathplugin-b6j49\" (UID: \"5e696527-7a38-49ea-8517-9f286a7daff0\") " pod="hostpath-provisioner/csi-hostpathplugin-b6j49"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935072 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9vt4\" (UniqueName: \"kubernetes.io/projected/38747ab5-dbdb-4c02-bb57-1a0f6f35f1b9-kube-api-access-t9vt4\") pod \"multus-admission-controller-857f4d67dd-kglrl\" (UID: \"38747ab5-dbdb-4c02-bb57-1a0f6f35f1b9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kglrl"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935093 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1fcc66d-6926-48a3-9473-7539f6e50415-config\") pod \"kube-apiserver-operator-766d6c64bb-7mf86\" (UID: \"d1fcc66d-6926-48a3-9473-7539f6e50415\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7mf86"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935115 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56d9ce8c-65f4-4482-860f-a7009c96e356-bound-sa-token\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935132 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935150 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935184 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d25605d1-1ee0-4c4b-a282-9986dace43f5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd5w5\" (UID: \"d25605d1-1ee0-4c4b-a282-9986dace43f5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd5w5"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935379 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/525278dd-e0d0-44bb-ba09-c4e0b82a268f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8w2c6\" (UID: \"525278dd-e0d0-44bb-ba09-c4e0b82a268f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8w2c6"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935453 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fphv\" (UniqueName: \"kubernetes.io/projected/47d812b6-dea3-420d-8cc4-68ba78877940-kube-api-access-7fphv\") pod \"service-ca-9c57cc56f-pwrdz\" (UID: \"47d812b6-dea3-420d-8cc4-68ba78877940\") " pod="openshift-service-ca/service-ca-9c57cc56f-pwrdz"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935524 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xqk2\" (UniqueName: \"kubernetes.io/projected/79fa92ce-d201-4f86-b3b8-3311def2e2cf-kube-api-access-2xqk2\") pod \"machine-config-controller-84d6567774-gskgn\" (UID: \"79fa92ce-d201-4f86-b3b8-3311def2e2cf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gskgn"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935541 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96a3e8da-2eb3-472d-84f8-819566b5dbcf-config-volume\") pod \"dns-default-n7xt5\" (UID: \"96a3e8da-2eb3-472d-84f8-819566b5dbcf\") " pod="openshift-dns/dns-default-n7xt5"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935583 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96a3e8da-2eb3-472d-84f8-819566b5dbcf-metrics-tls\") pod \"dns-default-n7xt5\" (UID: \"96a3e8da-2eb3-472d-84f8-819566b5dbcf\") " pod="openshift-dns/dns-default-n7xt5"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935618 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935651 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxgxl\" (UniqueName: \"kubernetes.io/projected/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-kube-api-access-fxgxl\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935670 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/47d812b6-dea3-420d-8cc4-68ba78877940-signing-key\") pod \"service-ca-9c57cc56f-pwrdz\" (UID: \"47d812b6-dea3-420d-8cc4-68ba78877940\") " pod="openshift-service-ca/service-ca-9c57cc56f-pwrdz"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935693 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qdzb\" (UniqueName: \"kubernetes.io/projected/8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb-kube-api-access-5qdzb\") pod \"etcd-operator-b45778765-fhv5q\" (UID: \"8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhv5q"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935710 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935882 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5d6fe950-18fc-440b-ad82-014a34669117-srv-cert\") pod \"catalog-operator-68c6474976-9rfgk\" (UID: \"5d6fe950-18fc-440b-ad82-014a34669117\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9rfgk"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935916 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935943 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d00aa552-e700-4bec-9818-3084ac601a92-config\") pod \"machine-api-operator-5694c8668f-l89zw\" (UID: \"d00aa552-e700-4bec-9818-3084ac601a92\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l89zw"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935960 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b18ece39-f2f5-41f9-b2e1-79f9f880791b-trusted-ca-bundle\") pod \"console-f9d7485db-lbz4p\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " pod="openshift-console/console-f9d7485db-lbz4p"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935976 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m926g\" (UniqueName: \"kubernetes.io/projected/525278dd-e0d0-44bb-ba09-c4e0b82a268f-kube-api-access-m926g\") pod \"package-server-manager-789f6589d5-8w2c6\" (UID: \"525278dd-e0d0-44bb-ba09-c4e0b82a268f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8w2c6"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935992 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/576eeef9-fcf9-4db0-a0cc-4083e03277f6-config-volume\") pod \"collect-profiles-29400915-zd9vl\" (UID: \"576eeef9-fcf9-4db0-a0cc-4083e03277f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-zd9vl"
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125
07:18:08.936008 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bece8729-924c-4595-88ee-ddcb1873b643-trusted-ca\") pod \"ingress-operator-5b745b69d9-7zfvk\" (UID: \"bece8729-924c-4595-88ee-ddcb1873b643\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zfvk" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936023 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4685134b-7bce-4398-8492-74670ecda13e-serving-cert\") pod \"service-ca-operator-777779d784-9bx8l\" (UID: \"4685134b-7bce-4398-8492-74670ecda13e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bx8l" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936043 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-729pp\" (UniqueName: \"kubernetes.io/projected/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-kube-api-access-729pp\") pod \"controller-manager-879f6c89f-dfsn8\" (UID: \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936071 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97sch\" (UniqueName: \"kubernetes.io/projected/b18ece39-f2f5-41f9-b2e1-79f9f880791b-kube-api-access-97sch\") pod \"console-f9d7485db-lbz4p\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " pod="openshift-console/console-f9d7485db-lbz4p" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936098 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb-etcd-service-ca\") pod \"etcd-operator-b45778765-fhv5q\" (UID: \"8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-fhv5q" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936114 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d25605d1-1ee0-4c4b-a282-9986dace43f5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd5w5\" (UID: \"d25605d1-1ee0-4c4b-a282-9986dace43f5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd5w5" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936129 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d25605d1-1ee0-4c4b-a282-9986dace43f5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd5w5\" (UID: \"d25605d1-1ee0-4c4b-a282-9986dace43f5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd5w5" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936147 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dfsn8\" (UID: \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936164 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5e696527-7a38-49ea-8517-9f286a7daff0-registration-dir\") pod \"csi-hostpathplugin-b6j49\" (UID: \"5e696527-7a38-49ea-8517-9f286a7daff0\") " pod="hostpath-provisioner/csi-hostpathplugin-b6j49" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936180 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-d2tsv\" (UniqueName: \"kubernetes.io/projected/5e696527-7a38-49ea-8517-9f286a7daff0-kube-api-access-d2tsv\") pod \"csi-hostpathplugin-b6j49\" (UID: \"5e696527-7a38-49ea-8517-9f286a7daff0\") " pod="hostpath-provisioner/csi-hostpathplugin-b6j49" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936195 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38747ab5-dbdb-4c02-bb57-1a0f6f35f1b9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kglrl\" (UID: \"38747ab5-dbdb-4c02-bb57-1a0f6f35f1b9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kglrl" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936214 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/060543b5-b830-412a-916a-0456db20f1ca-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cchz7\" (UID: \"060543b5-b830-412a-916a-0456db20f1ca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cchz7" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936239 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b80cda5-b011-4b59-869f-67ed66fe1a5a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8hqj6\" (UID: \"2b80cda5-b011-4b59-869f-67ed66fe1a5a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8hqj6" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936257 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9c13ef47-e6a6-421d-9ff3-7438aa746faf-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bsrgx\" (UID: \"9c13ef47-e6a6-421d-9ff3-7438aa746faf\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bsrgx" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936275 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ssxl\" (UniqueName: \"kubernetes.io/projected/bece8729-924c-4595-88ee-ddcb1873b643-kube-api-access-8ssxl\") pod \"ingress-operator-5b745b69d9-7zfvk\" (UID: \"bece8729-924c-4595-88ee-ddcb1873b643\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zfvk" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936291 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4crt\" (UniqueName: \"kubernetes.io/projected/f110819e-9e33-4cf3-85b0-b92eaaaa223b-kube-api-access-p4crt\") pod \"control-plane-machine-set-operator-78cbb6b69f-gnr9c\" (UID: \"f110819e-9e33-4cf3-85b0-b92eaaaa223b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnr9c" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936307 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4685134b-7bce-4398-8492-74670ecda13e-config\") pod \"service-ca-operator-777779d784-9bx8l\" (UID: \"4685134b-7bce-4398-8492-74670ecda13e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bx8l" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936329 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e5355a57-e5cd-4f37-8a4a-d416e1584c4c-srv-cert\") pod \"olm-operator-6b444d44fb-j7vsb\" (UID: \"e5355a57-e5cd-4f37-8a4a-d416e1584c4c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j7vsb" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936354 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-serving-cert\") pod \"controller-manager-879f6c89f-dfsn8\" (UID: \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936387 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dc0571cc-8090-4490-af8a-8199e9f983a9-metrics-tls\") pod \"dns-operator-744455d44c-wz5hw\" (UID: \"dc0571cc-8090-4490-af8a-8199e9f983a9\") " pod="openshift-dns-operator/dns-operator-744455d44c-wz5hw" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936405 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e72e27f3-de8a-4763-a3ee-cf8ea1531909-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-grmsk\" (UID: \"e72e27f3-de8a-4763-a3ee-cf8ea1531909\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grmsk" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936414 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f75a6197-9de8-4720-af31-ebc12fe35e48-config\") pod \"route-controller-manager-6576b87f9c-hpq4k\" (UID: \"f75a6197-9de8-4720-af31-ebc12fe35e48\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936420 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px6d4\" (UniqueName: \"kubernetes.io/projected/9c13ef47-e6a6-421d-9ff3-7438aa746faf-kube-api-access-px6d4\") pod \"machine-config-operator-74547568cd-bsrgx\" (UID: \"9c13ef47-e6a6-421d-9ff3-7438aa746faf\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bsrgx" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936495 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/576eeef9-fcf9-4db0-a0cc-4083e03277f6-secret-volume\") pod \"collect-profiles-29400915-zd9vl\" (UID: \"576eeef9-fcf9-4db0-a0cc-4083e03277f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-zd9vl" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936527 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5e696527-7a38-49ea-8517-9f286a7daff0-mountpoint-dir\") pod \"csi-hostpathplugin-b6j49\" (UID: \"5e696527-7a38-49ea-8517-9f286a7daff0\") " pod="hostpath-provisioner/csi-hostpathplugin-b6j49" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936556 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb-etcd-client\") pod \"etcd-operator-b45778765-fhv5q\" (UID: \"8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhv5q" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936588 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jt99\" (UniqueName: \"kubernetes.io/projected/dc0571cc-8090-4490-af8a-8199e9f983a9-kube-api-access-5jt99\") pod \"dns-operator-744455d44c-wz5hw\" (UID: \"dc0571cc-8090-4490-af8a-8199e9f983a9\") " pod="openshift-dns-operator/dns-operator-744455d44c-wz5hw" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936644 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936675 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f75a6197-9de8-4720-af31-ebc12fe35e48-serving-cert\") pod \"route-controller-manager-6576b87f9c-hpq4k\" (UID: \"f75a6197-9de8-4720-af31-ebc12fe35e48\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936704 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z64w7\" (UniqueName: \"kubernetes.io/projected/fc143568-8b91-44a5-9fff-5890b6c29e0c-kube-api-access-z64w7\") pod \"machine-config-server-rp8sr\" (UID: \"fc143568-8b91-44a5-9fff-5890b6c29e0c\") " pod="openshift-machine-config-operator/machine-config-server-rp8sr" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936764 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdp74\" (UniqueName: \"kubernetes.io/projected/56d9ce8c-65f4-4482-860f-a7009c96e356-kube-api-access-bdp74\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936790 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1fcc66d-6926-48a3-9473-7539f6e50415-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7mf86\" (UID: \"d1fcc66d-6926-48a3-9473-7539f6e50415\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7mf86" 
Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936813 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/79fa92ce-d201-4f86-b3b8-3311def2e2cf-proxy-tls\") pod \"machine-config-controller-84d6567774-gskgn\" (UID: \"79fa92ce-d201-4f86-b3b8-3311def2e2cf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gskgn" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936837 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7gjh\" (UniqueName: \"kubernetes.io/projected/5d6fe950-18fc-440b-ad82-014a34669117-kube-api-access-l7gjh\") pod \"catalog-operator-68c6474976-9rfgk\" (UID: \"5d6fe950-18fc-440b-ad82-014a34669117\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9rfgk" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936865 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnrqw\" (UniqueName: \"kubernetes.io/projected/2b80cda5-b011-4b59-869f-67ed66fe1a5a-kube-api-access-cnrqw\") pod \"cluster-image-registry-operator-dc59b4c8b-8hqj6\" (UID: \"2b80cda5-b011-4b59-869f-67ed66fe1a5a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8hqj6" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936888 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3b4caa8-cc12-4739-8e10-d88cd9d4137d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lt62z\" (UID: \"c3b4caa8-cc12-4739-8e10-d88cd9d4137d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lt62z" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936909 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5e696527-7a38-49ea-8517-9f286a7daff0-plugins-dir\") pod \"csi-hostpathplugin-b6j49\" (UID: \"5e696527-7a38-49ea-8517-9f286a7daff0\") " pod="hostpath-provisioner/csi-hostpathplugin-b6j49" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936936 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ab8cbc1d-68e1-40c7-a280-4852974cf941-machine-approver-tls\") pod \"machine-approver-56656f9798-5kkx6\" (UID: \"ab8cbc1d-68e1-40c7-a280-4852974cf941\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kkx6" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.936958 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjr9g\" (UniqueName: \"kubernetes.io/projected/bf4d2d7a-3f37-4c3c-8895-27aef30652af-kube-api-access-vjr9g\") pod \"ingress-canary-q4h7x\" (UID: \"bf4d2d7a-3f37-4c3c-8895-27aef30652af\") " pod="openshift-ingress-canary/ingress-canary-q4h7x" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.937005 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.937048 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b18ece39-f2f5-41f9-b2e1-79f9f880791b-console-serving-cert\") pod \"console-f9d7485db-lbz4p\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " pod="openshift-console/console-f9d7485db-lbz4p" Nov 25 07:18:08 crc 
kubenswrapper[5043]: I1125 07:18:08.937072 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.937115 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxsw5\" (UniqueName: \"kubernetes.io/projected/e5355a57-e5cd-4f37-8a4a-d416e1584c4c-kube-api-access-sxsw5\") pod \"olm-operator-6b444d44fb-j7vsb\" (UID: \"e5355a57-e5cd-4f37-8a4a-d416e1584c4c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j7vsb" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.937140 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fc143568-8b91-44a5-9fff-5890b6c29e0c-node-bootstrap-token\") pod \"machine-config-server-rp8sr\" (UID: \"fc143568-8b91-44a5-9fff-5890b6c29e0c\") " pod="openshift-machine-config-operator/machine-config-server-rp8sr" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.937165 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab8cbc1d-68e1-40c7-a280-4852974cf941-config\") pod \"machine-approver-56656f9798-5kkx6\" (UID: \"ab8cbc1d-68e1-40c7-a280-4852974cf941\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kkx6" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.937185 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b18ece39-f2f5-41f9-b2e1-79f9f880791b-service-ca\") pod \"console-f9d7485db-lbz4p\" 
(UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " pod="openshift-console/console-f9d7485db-lbz4p" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.937209 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88jlv\" (UniqueName: \"kubernetes.io/projected/9e0fe8fc-205b-4d60-8849-f624e26034ab-kube-api-access-88jlv\") pod \"openshift-apiserver-operator-796bbdcf4f-cw9mz\" (UID: \"9e0fe8fc-205b-4d60-8849-f624e26034ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cw9mz" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.937233 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs4fr\" (UniqueName: \"kubernetes.io/projected/96a3e8da-2eb3-472d-84f8-819566b5dbcf-kube-api-access-vs4fr\") pod \"dns-default-n7xt5\" (UID: \"96a3e8da-2eb3-472d-84f8-819566b5dbcf\") " pod="openshift-dns/dns-default-n7xt5" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.934593 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d00aa552-e700-4bec-9818-3084ac601a92-images\") pod \"machine-api-operator-5694c8668f-l89zw\" (UID: \"d00aa552-e700-4bec-9818-3084ac601a92\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l89zw" Nov 25 07:18:08 crc kubenswrapper[5043]: E1125 07:18:08.937961 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:09.437946021 +0000 UTC m=+153.606141742 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.938147 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.935418 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b18ece39-f2f5-41f9-b2e1-79f9f880791b-oauth-serving-cert\") pod \"console-f9d7485db-lbz4p\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " pod="openshift-console/console-f9d7485db-lbz4p" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.938935 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e72e27f3-de8a-4763-a3ee-cf8ea1531909-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-grmsk\" (UID: \"e72e27f3-de8a-4763-a3ee-cf8ea1531909\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grmsk" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.939007 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-cliconfig\") 
pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.939115 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/56d9ce8c-65f4-4482-860f-a7009c96e356-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.939216 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b18ece39-f2f5-41f9-b2e1-79f9f880791b-console-config\") pod \"console-f9d7485db-lbz4p\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " pod="openshift-console/console-f9d7485db-lbz4p" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.939683 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab8cbc1d-68e1-40c7-a280-4852974cf941-config\") pod \"machine-approver-56656f9798-5kkx6\" (UID: \"ab8cbc1d-68e1-40c7-a280-4852974cf941\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kkx6" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.941244 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2b80cda5-b011-4b59-869f-67ed66fe1a5a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8hqj6\" (UID: \"2b80cda5-b011-4b59-869f-67ed66fe1a5a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8hqj6" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.941510 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.941881 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b18ece39-f2f5-41f9-b2e1-79f9f880791b-console-serving-cert\") pod \"console-f9d7485db-lbz4p\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " pod="openshift-console/console-f9d7485db-lbz4p" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.942118 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.942540 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.942940 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 
07:18:08.943247 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb-config\") pod \"etcd-operator-b45778765-fhv5q\" (UID: \"8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhv5q" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.943488 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b18ece39-f2f5-41f9-b2e1-79f9f880791b-service-ca\") pod \"console-f9d7485db-lbz4p\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " pod="openshift-console/console-f9d7485db-lbz4p" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.943710 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb-etcd-client\") pod \"etcd-operator-b45778765-fhv5q\" (UID: \"8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhv5q" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.945660 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d00aa552-e700-4bec-9818-3084ac601a92-config\") pod \"machine-api-operator-5694c8668f-l89zw\" (UID: \"d00aa552-e700-4bec-9818-3084ac601a92\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l89zw" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.946099 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/060543b5-b830-412a-916a-0456db20f1ca-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cchz7\" (UID: \"060543b5-b830-412a-916a-0456db20f1ca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cchz7" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 
07:18:08.946338 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b18ece39-f2f5-41f9-b2e1-79f9f880791b-console-oauth-config\") pod \"console-f9d7485db-lbz4p\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " pod="openshift-console/console-f9d7485db-lbz4p" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.946415 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b18ece39-f2f5-41f9-b2e1-79f9f880791b-trusted-ca-bundle\") pod \"console-f9d7485db-lbz4p\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " pod="openshift-console/console-f9d7485db-lbz4p" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.946565 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d00aa552-e700-4bec-9818-3084ac601a92-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-l89zw\" (UID: \"d00aa552-e700-4bec-9818-3084ac601a92\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l89zw" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.946800 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e0fe8fc-205b-4d60-8849-f624e26034ab-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cw9mz\" (UID: \"9e0fe8fc-205b-4d60-8849-f624e26034ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cw9mz" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.946971 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ab8cbc1d-68e1-40c7-a280-4852974cf941-machine-approver-tls\") pod \"machine-approver-56656f9798-5kkx6\" (UID: \"ab8cbc1d-68e1-40c7-a280-4852974cf941\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kkx6" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.947007 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.947040 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb-etcd-service-ca\") pod \"etcd-operator-b45778765-fhv5q\" (UID: \"8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhv5q" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.947104 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.947311 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bece8729-924c-4595-88ee-ddcb1873b643-trusted-ca\") pod \"ingress-operator-5b745b69d9-7zfvk\" (UID: \"bece8729-924c-4595-88ee-ddcb1873b643\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zfvk" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.947935 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dc0571cc-8090-4490-af8a-8199e9f983a9-metrics-tls\") pod 
\"dns-operator-744455d44c-wz5hw\" (UID: \"dc0571cc-8090-4490-af8a-8199e9f983a9\") " pod="openshift-dns-operator/dns-operator-744455d44c-wz5hw" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.948397 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f75a6197-9de8-4720-af31-ebc12fe35e48-serving-cert\") pod \"route-controller-manager-6576b87f9c-hpq4k\" (UID: \"f75a6197-9de8-4720-af31-ebc12fe35e48\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.948697 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2b80cda5-b011-4b59-869f-67ed66fe1a5a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8hqj6\" (UID: \"2b80cda5-b011-4b59-869f-67ed66fe1a5a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8hqj6" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.951038 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.956753 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-serving-cert\") pod \"controller-manager-879f6c89f-dfsn8\" (UID: \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.957292 5043 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dfsn8\" (UID: \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.978322 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2b80cda5-b011-4b59-869f-67ed66fe1a5a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8hqj6\" (UID: \"2b80cda5-b011-4b59-869f-67ed66fe1a5a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8hqj6" Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.979662 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jxz26"] Nov 25 07:18:08 crc kubenswrapper[5043]: W1125 07:18:08.986966 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81f790c4_a6b8_4bb7_8a46_107e7ad04689.slice/crio-18953a0e7030a753f942b85be4d3fd8e9270eb3e81f7c9755bd7f91d8c3fb415 WatchSource:0}: Error finding container 18953a0e7030a753f942b85be4d3fd8e9270eb3e81f7c9755bd7f91d8c3fb415: Status 404 returned error can't find the container with id 18953a0e7030a753f942b85be4d3fd8e9270eb3e81f7c9755bd7f91d8c3fb415 Nov 25 07:18:08 crc kubenswrapper[5043]: I1125 07:18:08.991834 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj9z5\" (UniqueName: \"kubernetes.io/projected/f75a6197-9de8-4720-af31-ebc12fe35e48-kube-api-access-wj9z5\") pod \"route-controller-manager-6576b87f9c-hpq4k\" (UID: \"f75a6197-9de8-4720-af31-ebc12fe35e48\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.012969 5043 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-j6qjg\" (UniqueName: \"kubernetes.io/projected/ab8cbc1d-68e1-40c7-a280-4852974cf941-kube-api-access-j6qjg\") pod \"machine-approver-56656f9798-5kkx6\" (UID: \"ab8cbc1d-68e1-40c7-a280-4852974cf941\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kkx6" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.031989 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk98p\" (UniqueName: \"kubernetes.io/projected/060543b5-b830-412a-916a-0456db20f1ca-kube-api-access-xk98p\") pod \"openshift-controller-manager-operator-756b6f6bc6-cchz7\" (UID: \"060543b5-b830-412a-916a-0456db20f1ca\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cchz7" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.038437 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:09 crc kubenswrapper[5043]: E1125 07:18:09.038578 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:09.538558942 +0000 UTC m=+153.706754663 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.038723 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcph2\" (UniqueName: \"kubernetes.io/projected/c3b4caa8-cc12-4739-8e10-d88cd9d4137d-kube-api-access-bcph2\") pod \"kube-storage-version-migrator-operator-b67b599dd-lt62z\" (UID: \"c3b4caa8-cc12-4739-8e10-d88cd9d4137d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lt62z" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.038767 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e5355a57-e5cd-4f37-8a4a-d416e1584c4c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-j7vsb\" (UID: \"e5355a57-e5cd-4f37-8a4a-d416e1584c4c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j7vsb" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.038791 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srvzv\" (UniqueName: \"kubernetes.io/projected/f3a13dff-3c0c-4151-9514-42c40e8bc83f-kube-api-access-srvzv\") pod \"marketplace-operator-79b997595-hqtnq\" (UID: \"f3a13dff-3c0c-4151-9514-42c40e8bc83f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hqtnq" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.038815 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/f3a13dff-3c0c-4151-9514-42c40e8bc83f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hqtnq\" (UID: \"f3a13dff-3c0c-4151-9514-42c40e8bc83f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hqtnq" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.038838 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf4d2d7a-3f37-4c3c-8895-27aef30652af-cert\") pod \"ingress-canary-q4h7x\" (UID: \"bf4d2d7a-3f37-4c3c-8895-27aef30652af\") " pod="openshift-ingress-canary/ingress-canary-q4h7x" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.038866 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a79e2c5-003a-4929-8ba1-568e8ca6bb01-webhook-cert\") pod \"packageserver-d55dfcdfc-64rrj\" (UID: \"8a79e2c5-003a-4929-8ba1-568e8ca6bb01\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64rrj" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.038887 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjwkf\" (UniqueName: \"kubernetes.io/projected/8a79e2c5-003a-4929-8ba1-568e8ca6bb01-kube-api-access-xjwkf\") pod \"packageserver-d55dfcdfc-64rrj\" (UID: \"8a79e2c5-003a-4929-8ba1-568e8ca6bb01\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64rrj" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.038908 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1fcc66d-6926-48a3-9473-7539f6e50415-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7mf86\" (UID: \"d1fcc66d-6926-48a3-9473-7539f6e50415\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7mf86" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.038935 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f110819e-9e33-4cf3-85b0-b92eaaaa223b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gnr9c\" (UID: \"f110819e-9e33-4cf3-85b0-b92eaaaa223b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnr9c" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.038960 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5e696527-7a38-49ea-8517-9f286a7daff0-csi-data-dir\") pod \"csi-hostpathplugin-b6j49\" (UID: \"5e696527-7a38-49ea-8517-9f286a7daff0\") " pod="hostpath-provisioner/csi-hostpathplugin-b6j49" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.038987 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78z6p\" (UniqueName: \"kubernetes.io/projected/576eeef9-fcf9-4db0-a0cc-4083e03277f6-kube-api-access-78z6p\") pod \"collect-profiles-29400915-zd9vl\" (UID: \"576eeef9-fcf9-4db0-a0cc-4083e03277f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-zd9vl" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039011 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8a79e2c5-003a-4929-8ba1-568e8ca6bb01-tmpfs\") pod \"packageserver-d55dfcdfc-64rrj\" (UID: \"8a79e2c5-003a-4929-8ba1-568e8ca6bb01\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64rrj" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039037 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5htxj\" (UniqueName: \"kubernetes.io/projected/677ed5f9-0fae-4009-b50e-ede07073d251-kube-api-access-5htxj\") pod \"migrator-59844c95c7-x6762\" (UID: \"677ed5f9-0fae-4009-b50e-ede07073d251\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x6762" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039093 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a79e2c5-003a-4929-8ba1-568e8ca6bb01-apiservice-cert\") pod \"packageserver-d55dfcdfc-64rrj\" (UID: \"8a79e2c5-003a-4929-8ba1-568e8ca6bb01\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64rrj" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039117 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5e696527-7a38-49ea-8517-9f286a7daff0-socket-dir\") pod \"csi-hostpathplugin-b6j49\" (UID: \"5e696527-7a38-49ea-8517-9f286a7daff0\") " pod="hostpath-provisioner/csi-hostpathplugin-b6j49" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039141 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9vt4\" (UniqueName: \"kubernetes.io/projected/38747ab5-dbdb-4c02-bb57-1a0f6f35f1b9-kube-api-access-t9vt4\") pod \"multus-admission-controller-857f4d67dd-kglrl\" (UID: \"38747ab5-dbdb-4c02-bb57-1a0f6f35f1b9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kglrl" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039167 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1fcc66d-6926-48a3-9473-7539f6e50415-config\") pod \"kube-apiserver-operator-766d6c64bb-7mf86\" (UID: \"d1fcc66d-6926-48a3-9473-7539f6e50415\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7mf86" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039192 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d25605d1-1ee0-4c4b-a282-9986dace43f5-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-zd5w5\" (UID: \"d25605d1-1ee0-4c4b-a282-9986dace43f5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd5w5" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039223 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fphv\" (UniqueName: \"kubernetes.io/projected/47d812b6-dea3-420d-8cc4-68ba78877940-kube-api-access-7fphv\") pod \"service-ca-9c57cc56f-pwrdz\" (UID: \"47d812b6-dea3-420d-8cc4-68ba78877940\") " pod="openshift-service-ca/service-ca-9c57cc56f-pwrdz" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039247 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xqk2\" (UniqueName: \"kubernetes.io/projected/79fa92ce-d201-4f86-b3b8-3311def2e2cf-kube-api-access-2xqk2\") pod \"machine-config-controller-84d6567774-gskgn\" (UID: \"79fa92ce-d201-4f86-b3b8-3311def2e2cf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gskgn" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039269 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/525278dd-e0d0-44bb-ba09-c4e0b82a268f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8w2c6\" (UID: \"525278dd-e0d0-44bb-ba09-c4e0b82a268f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8w2c6" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039291 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96a3e8da-2eb3-472d-84f8-819566b5dbcf-config-volume\") pod \"dns-default-n7xt5\" (UID: \"96a3e8da-2eb3-472d-84f8-819566b5dbcf\") " pod="openshift-dns/dns-default-n7xt5" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039313 5043 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96a3e8da-2eb3-472d-84f8-819566b5dbcf-metrics-tls\") pod \"dns-default-n7xt5\" (UID: \"96a3e8da-2eb3-472d-84f8-819566b5dbcf\") " pod="openshift-dns/dns-default-n7xt5" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039414 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/47d812b6-dea3-420d-8cc4-68ba78877940-signing-key\") pod \"service-ca-9c57cc56f-pwrdz\" (UID: \"47d812b6-dea3-420d-8cc4-68ba78877940\") " pod="openshift-service-ca/service-ca-9c57cc56f-pwrdz" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039417 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/5e696527-7a38-49ea-8517-9f286a7daff0-csi-data-dir\") pod \"csi-hostpathplugin-b6j49\" (UID: \"5e696527-7a38-49ea-8517-9f286a7daff0\") " pod="hostpath-provisioner/csi-hostpathplugin-b6j49" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039441 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5d6fe950-18fc-440b-ad82-014a34669117-srv-cert\") pod \"catalog-operator-68c6474976-9rfgk\" (UID: \"5d6fe950-18fc-440b-ad82-014a34669117\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9rfgk" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039488 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m926g\" (UniqueName: \"kubernetes.io/projected/525278dd-e0d0-44bb-ba09-c4e0b82a268f-kube-api-access-m926g\") pod \"package-server-manager-789f6589d5-8w2c6\" (UID: \"525278dd-e0d0-44bb-ba09-c4e0b82a268f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8w2c6" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039515 5043 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/576eeef9-fcf9-4db0-a0cc-4083e03277f6-config-volume\") pod \"collect-profiles-29400915-zd9vl\" (UID: \"576eeef9-fcf9-4db0-a0cc-4083e03277f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-zd9vl" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039538 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4685134b-7bce-4398-8492-74670ecda13e-serving-cert\") pod \"service-ca-operator-777779d784-9bx8l\" (UID: \"4685134b-7bce-4398-8492-74670ecda13e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bx8l" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039582 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d25605d1-1ee0-4c4b-a282-9986dace43f5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd5w5\" (UID: \"d25605d1-1ee0-4c4b-a282-9986dace43f5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd5w5" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039620 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d25605d1-1ee0-4c4b-a282-9986dace43f5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd5w5\" (UID: \"d25605d1-1ee0-4c4b-a282-9986dace43f5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd5w5" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039648 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2tsv\" (UniqueName: \"kubernetes.io/projected/5e696527-7a38-49ea-8517-9f286a7daff0-kube-api-access-d2tsv\") pod \"csi-hostpathplugin-b6j49\" (UID: 
\"5e696527-7a38-49ea-8517-9f286a7daff0\") " pod="hostpath-provisioner/csi-hostpathplugin-b6j49" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039670 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38747ab5-dbdb-4c02-bb57-1a0f6f35f1b9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kglrl\" (UID: \"38747ab5-dbdb-4c02-bb57-1a0f6f35f1b9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kglrl" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039723 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5e696527-7a38-49ea-8517-9f286a7daff0-registration-dir\") pod \"csi-hostpathplugin-b6j49\" (UID: \"5e696527-7a38-49ea-8517-9f286a7daff0\") " pod="hostpath-provisioner/csi-hostpathplugin-b6j49" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039742 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5e696527-7a38-49ea-8517-9f286a7daff0-socket-dir\") pod \"csi-hostpathplugin-b6j49\" (UID: \"5e696527-7a38-49ea-8517-9f286a7daff0\") " pod="hostpath-provisioner/csi-hostpathplugin-b6j49" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039751 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9c13ef47-e6a6-421d-9ff3-7438aa746faf-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bsrgx\" (UID: \"9c13ef47-e6a6-421d-9ff3-7438aa746faf\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bsrgx" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039786 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4crt\" (UniqueName: 
\"kubernetes.io/projected/f110819e-9e33-4cf3-85b0-b92eaaaa223b-kube-api-access-p4crt\") pod \"control-plane-machine-set-operator-78cbb6b69f-gnr9c\" (UID: \"f110819e-9e33-4cf3-85b0-b92eaaaa223b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnr9c" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039810 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4685134b-7bce-4398-8492-74670ecda13e-config\") pod \"service-ca-operator-777779d784-9bx8l\" (UID: \"4685134b-7bce-4398-8492-74670ecda13e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bx8l" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039839 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e5355a57-e5cd-4f37-8a4a-d416e1584c4c-srv-cert\") pod \"olm-operator-6b444d44fb-j7vsb\" (UID: \"e5355a57-e5cd-4f37-8a4a-d416e1584c4c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j7vsb" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039871 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px6d4\" (UniqueName: \"kubernetes.io/projected/9c13ef47-e6a6-421d-9ff3-7438aa746faf-kube-api-access-px6d4\") pod \"machine-config-operator-74547568cd-bsrgx\" (UID: \"9c13ef47-e6a6-421d-9ff3-7438aa746faf\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bsrgx" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039878 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8a79e2c5-003a-4929-8ba1-568e8ca6bb01-tmpfs\") pod \"packageserver-d55dfcdfc-64rrj\" (UID: \"8a79e2c5-003a-4929-8ba1-568e8ca6bb01\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64rrj" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.040085 
5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d25605d1-1ee0-4c4b-a282-9986dace43f5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd5w5\" (UID: \"d25605d1-1ee0-4c4b-a282-9986dace43f5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd5w5" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.040563 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96a3e8da-2eb3-472d-84f8-819566b5dbcf-config-volume\") pod \"dns-default-n7xt5\" (UID: \"96a3e8da-2eb3-472d-84f8-819566b5dbcf\") " pod="openshift-dns/dns-default-n7xt5" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.040777 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1fcc66d-6926-48a3-9473-7539f6e50415-config\") pod \"kube-apiserver-operator-766d6c64bb-7mf86\" (UID: \"d1fcc66d-6926-48a3-9473-7539f6e50415\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7mf86" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.039895 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/576eeef9-fcf9-4db0-a0cc-4083e03277f6-secret-volume\") pod \"collect-profiles-29400915-zd9vl\" (UID: \"576eeef9-fcf9-4db0-a0cc-4083e03277f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-zd9vl" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.040830 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5e696527-7a38-49ea-8517-9f286a7daff0-mountpoint-dir\") pod \"csi-hostpathplugin-b6j49\" (UID: \"5e696527-7a38-49ea-8517-9f286a7daff0\") " pod="hostpath-provisioner/csi-hostpathplugin-b6j49" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 
07:18:09.040859 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z64w7\" (UniqueName: \"kubernetes.io/projected/fc143568-8b91-44a5-9fff-5890b6c29e0c-kube-api-access-z64w7\") pod \"machine-config-server-rp8sr\" (UID: \"fc143568-8b91-44a5-9fff-5890b6c29e0c\") " pod="openshift-machine-config-operator/machine-config-server-rp8sr" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.040883 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1fcc66d-6926-48a3-9473-7539f6e50415-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7mf86\" (UID: \"d1fcc66d-6926-48a3-9473-7539f6e50415\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7mf86" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.040905 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/79fa92ce-d201-4f86-b3b8-3311def2e2cf-proxy-tls\") pod \"machine-config-controller-84d6567774-gskgn\" (UID: \"79fa92ce-d201-4f86-b3b8-3311def2e2cf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gskgn" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.040941 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7gjh\" (UniqueName: \"kubernetes.io/projected/5d6fe950-18fc-440b-ad82-014a34669117-kube-api-access-l7gjh\") pod \"catalog-operator-68c6474976-9rfgk\" (UID: \"5d6fe950-18fc-440b-ad82-014a34669117\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9rfgk" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.040965 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3b4caa8-cc12-4739-8e10-d88cd9d4137d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lt62z\" (UID: 
\"c3b4caa8-cc12-4739-8e10-d88cd9d4137d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lt62z" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.040980 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/5e696527-7a38-49ea-8517-9f286a7daff0-plugins-dir\") pod \"csi-hostpathplugin-b6j49\" (UID: \"5e696527-7a38-49ea-8517-9f286a7daff0\") " pod="hostpath-provisioner/csi-hostpathplugin-b6j49" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.041005 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjr9g\" (UniqueName: \"kubernetes.io/projected/bf4d2d7a-3f37-4c3c-8895-27aef30652af-kube-api-access-vjr9g\") pod \"ingress-canary-q4h7x\" (UID: \"bf4d2d7a-3f37-4c3c-8895-27aef30652af\") " pod="openshift-ingress-canary/ingress-canary-q4h7x" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.041033 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.041048 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/5e696527-7a38-49ea-8517-9f286a7daff0-mountpoint-dir\") pod \"csi-hostpathplugin-b6j49\" (UID: \"5e696527-7a38-49ea-8517-9f286a7daff0\") " pod="hostpath-provisioner/csi-hostpathplugin-b6j49" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.041056 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxsw5\" (UniqueName: 
\"kubernetes.io/projected/e5355a57-e5cd-4f37-8a4a-d416e1584c4c-kube-api-access-sxsw5\") pod \"olm-operator-6b444d44fb-j7vsb\" (UID: \"e5355a57-e5cd-4f37-8a4a-d416e1584c4c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j7vsb" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.041162 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fc143568-8b91-44a5-9fff-5890b6c29e0c-node-bootstrap-token\") pod \"machine-config-server-rp8sr\" (UID: \"fc143568-8b91-44a5-9fff-5890b6c29e0c\") " pod="openshift-machine-config-operator/machine-config-server-rp8sr" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.041196 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs4fr\" (UniqueName: \"kubernetes.io/projected/96a3e8da-2eb3-472d-84f8-819566b5dbcf-kube-api-access-vs4fr\") pod \"dns-default-n7xt5\" (UID: \"96a3e8da-2eb3-472d-84f8-819566b5dbcf\") " pod="openshift-dns/dns-default-n7xt5" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.041226 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3a13dff-3c0c-4151-9514-42c40e8bc83f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hqtnq\" (UID: \"f3a13dff-3c0c-4151-9514-42c40e8bc83f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hqtnq" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.041247 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fc143568-8b91-44a5-9fff-5890b6c29e0c-certs\") pod \"machine-config-server-rp8sr\" (UID: \"fc143568-8b91-44a5-9fff-5890b6c29e0c\") " pod="openshift-machine-config-operator/machine-config-server-rp8sr" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.041272 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5d6fe950-18fc-440b-ad82-014a34669117-profile-collector-cert\") pod \"catalog-operator-68c6474976-9rfgk\" (UID: \"5d6fe950-18fc-440b-ad82-014a34669117\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9rfgk" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.041296 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3b4caa8-cc12-4739-8e10-d88cd9d4137d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lt62z\" (UID: \"c3b4caa8-cc12-4739-8e10-d88cd9d4137d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lt62z" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.041328 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6kxh\" (UniqueName: \"kubernetes.io/projected/4685134b-7bce-4398-8492-74670ecda13e-kube-api-access-d6kxh\") pod \"service-ca-operator-777779d784-9bx8l\" (UID: \"4685134b-7bce-4398-8492-74670ecda13e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bx8l" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.041355 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/47d812b6-dea3-420d-8cc4-68ba78877940-signing-cabundle\") pod \"service-ca-9c57cc56f-pwrdz\" (UID: \"47d812b6-dea3-420d-8cc4-68ba78877940\") " pod="openshift-service-ca/service-ca-9c57cc56f-pwrdz" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.041384 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9c13ef47-e6a6-421d-9ff3-7438aa746faf-proxy-tls\") pod \"machine-config-operator-74547568cd-bsrgx\" (UID: \"9c13ef47-e6a6-421d-9ff3-7438aa746faf\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bsrgx" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.041408 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/79fa92ce-d201-4f86-b3b8-3311def2e2cf-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-gskgn\" (UID: \"79fa92ce-d201-4f86-b3b8-3311def2e2cf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gskgn" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.041435 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9c13ef47-e6a6-421d-9ff3-7438aa746faf-images\") pod \"machine-config-operator-74547568cd-bsrgx\" (UID: \"9c13ef47-e6a6-421d-9ff3-7438aa746faf\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bsrgx" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.042002 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e5355a57-e5cd-4f37-8a4a-d416e1584c4c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-j7vsb\" (UID: \"e5355a57-e5cd-4f37-8a4a-d416e1584c4c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j7vsb" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.042165 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9c13ef47-e6a6-421d-9ff3-7438aa746faf-images\") pod \"machine-config-operator-74547568cd-bsrgx\" (UID: \"9c13ef47-e6a6-421d-9ff3-7438aa746faf\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bsrgx" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.043000 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/5e696527-7a38-49ea-8517-9f286a7daff0-plugins-dir\") pod \"csi-hostpathplugin-b6j49\" (UID: \"5e696527-7a38-49ea-8517-9f286a7daff0\") " pod="hostpath-provisioner/csi-hostpathplugin-b6j49" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.043251 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3b4caa8-cc12-4739-8e10-d88cd9d4137d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lt62z\" (UID: \"c3b4caa8-cc12-4739-8e10-d88cd9d4137d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lt62z" Nov 25 07:18:09 crc kubenswrapper[5043]: E1125 07:18:09.043548 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:09.543532192 +0000 UTC m=+153.711727913 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.043991 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8a79e2c5-003a-4929-8ba1-568e8ca6bb01-apiservice-cert\") pod \"packageserver-d55dfcdfc-64rrj\" (UID: \"8a79e2c5-003a-4929-8ba1-568e8ca6bb01\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64rrj" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.044413 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f3a13dff-3c0c-4151-9514-42c40e8bc83f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hqtnq\" (UID: \"f3a13dff-3c0c-4151-9514-42c40e8bc83f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hqtnq" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.045266 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/576eeef9-fcf9-4db0-a0cc-4083e03277f6-config-volume\") pod \"collect-profiles-29400915-zd9vl\" (UID: \"576eeef9-fcf9-4db0-a0cc-4083e03277f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-zd9vl" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.045389 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/47d812b6-dea3-420d-8cc4-68ba78877940-signing-cabundle\") pod \"service-ca-9c57cc56f-pwrdz\" 
(UID: \"47d812b6-dea3-420d-8cc4-68ba78877940\") " pod="openshift-service-ca/service-ca-9c57cc56f-pwrdz" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.045636 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.045897 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1fcc66d-6926-48a3-9473-7539f6e50415-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7mf86\" (UID: \"d1fcc66d-6926-48a3-9473-7539f6e50415\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7mf86" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.045998 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3b4caa8-cc12-4739-8e10-d88cd9d4137d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lt62z\" (UID: \"c3b4caa8-cc12-4739-8e10-d88cd9d4137d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lt62z" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.046164 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5e696527-7a38-49ea-8517-9f286a7daff0-registration-dir\") pod \"csi-hostpathplugin-b6j49\" (UID: \"5e696527-7a38-49ea-8517-9f286a7daff0\") " pod="hostpath-provisioner/csi-hostpathplugin-b6j49" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.046847 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4685134b-7bce-4398-8492-74670ecda13e-config\") pod \"service-ca-operator-777779d784-9bx8l\" (UID: \"4685134b-7bce-4398-8492-74670ecda13e\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bx8l" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.046858 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9c13ef47-e6a6-421d-9ff3-7438aa746faf-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bsrgx\" (UID: \"9c13ef47-e6a6-421d-9ff3-7438aa746faf\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bsrgx" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.047012 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/79fa92ce-d201-4f86-b3b8-3311def2e2cf-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-gskgn\" (UID: \"79fa92ce-d201-4f86-b3b8-3311def2e2cf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gskgn" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.047010 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/525278dd-e0d0-44bb-ba09-c4e0b82a268f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8w2c6\" (UID: \"525278dd-e0d0-44bb-ba09-c4e0b82a268f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8w2c6" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.047371 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf4d2d7a-3f37-4c3c-8895-27aef30652af-cert\") pod \"ingress-canary-q4h7x\" (UID: \"bf4d2d7a-3f37-4c3c-8895-27aef30652af\") " pod="openshift-ingress-canary/ingress-canary-q4h7x" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.047498 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/f3a13dff-3c0c-4151-9514-42c40e8bc83f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hqtnq\" (UID: \"f3a13dff-3c0c-4151-9514-42c40e8bc83f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hqtnq" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.048727 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8a79e2c5-003a-4929-8ba1-568e8ca6bb01-webhook-cert\") pod \"packageserver-d55dfcdfc-64rrj\" (UID: \"8a79e2c5-003a-4929-8ba1-568e8ca6bb01\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64rrj" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.050080 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96a3e8da-2eb3-472d-84f8-819566b5dbcf-metrics-tls\") pod \"dns-default-n7xt5\" (UID: \"96a3e8da-2eb3-472d-84f8-819566b5dbcf\") " pod="openshift-dns/dns-default-n7xt5" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.050184 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38747ab5-dbdb-4c02-bb57-1a0f6f35f1b9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kglrl\" (UID: \"38747ab5-dbdb-4c02-bb57-1a0f6f35f1b9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kglrl" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.050328 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5d6fe950-18fc-440b-ad82-014a34669117-profile-collector-cert\") pod \"catalog-operator-68c6474976-9rfgk\" (UID: \"5d6fe950-18fc-440b-ad82-014a34669117\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9rfgk" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.050334 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9c13ef47-e6a6-421d-9ff3-7438aa746faf-proxy-tls\") pod \"machine-config-operator-74547568cd-bsrgx\" (UID: \"9c13ef47-e6a6-421d-9ff3-7438aa746faf\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bsrgx" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.050551 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/79fa92ce-d201-4f86-b3b8-3311def2e2cf-proxy-tls\") pod \"machine-config-controller-84d6567774-gskgn\" (UID: \"79fa92ce-d201-4f86-b3b8-3311def2e2cf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gskgn" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.051060 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fc143568-8b91-44a5-9fff-5890b6c29e0c-node-bootstrap-token\") pod \"machine-config-server-rp8sr\" (UID: \"fc143568-8b91-44a5-9fff-5890b6c29e0c\") " pod="openshift-machine-config-operator/machine-config-server-rp8sr" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.051482 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d25605d1-1ee0-4c4b-a282-9986dace43f5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd5w5\" (UID: \"d25605d1-1ee0-4c4b-a282-9986dace43f5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd5w5" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.051990 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/47d812b6-dea3-420d-8cc4-68ba78877940-signing-key\") pod \"service-ca-9c57cc56f-pwrdz\" (UID: \"47d812b6-dea3-420d-8cc4-68ba78877940\") " pod="openshift-service-ca/service-ca-9c57cc56f-pwrdz" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.052781 
5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/576eeef9-fcf9-4db0-a0cc-4083e03277f6-secret-volume\") pod \"collect-profiles-29400915-zd9vl\" (UID: \"576eeef9-fcf9-4db0-a0cc-4083e03277f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-zd9vl" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.053959 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f110819e-9e33-4cf3-85b0-b92eaaaa223b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gnr9c\" (UID: \"f110819e-9e33-4cf3-85b0-b92eaaaa223b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnr9c" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.054349 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fc143568-8b91-44a5-9fff-5890b6c29e0c-certs\") pod \"machine-config-server-rp8sr\" (UID: \"fc143568-8b91-44a5-9fff-5890b6c29e0c\") " pod="openshift-machine-config-operator/machine-config-server-rp8sr" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.054502 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e5355a57-e5cd-4f37-8a4a-d416e1584c4c-srv-cert\") pod \"olm-operator-6b444d44fb-j7vsb\" (UID: \"e5355a57-e5cd-4f37-8a4a-d416e1584c4c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j7vsb" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.055035 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4685134b-7bce-4398-8492-74670ecda13e-serving-cert\") pod \"service-ca-operator-777779d784-9bx8l\" (UID: \"4685134b-7bce-4398-8492-74670ecda13e\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bx8l" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.055640 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf8n6\" (UniqueName: \"kubernetes.io/projected/d00aa552-e700-4bec-9818-3084ac601a92-kube-api-access-tf8n6\") pod \"machine-api-operator-5694c8668f-l89zw\" (UID: \"d00aa552-e700-4bec-9818-3084ac601a92\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l89zw" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.062849 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5d6fe950-18fc-440b-ad82-014a34669117-srv-cert\") pod \"catalog-operator-68c6474976-9rfgk\" (UID: \"5d6fe950-18fc-440b-ad82-014a34669117\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9rfgk" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.069528 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdp74\" (UniqueName: \"kubernetes.io/projected/56d9ce8c-65f4-4482-860f-a7009c96e356-kube-api-access-bdp74\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.090333 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnrqw\" (UniqueName: \"kubernetes.io/projected/2b80cda5-b011-4b59-869f-67ed66fe1a5a-kube-api-access-cnrqw\") pod \"cluster-image-registry-operator-dc59b4c8b-8hqj6\" (UID: \"2b80cda5-b011-4b59-869f-67ed66fe1a5a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8hqj6" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.109414 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bece8729-924c-4595-88ee-ddcb1873b643-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7zfvk\" (UID: \"bece8729-924c-4595-88ee-ddcb1873b643\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zfvk" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.114346 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8hqj6" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.139549 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56d9ce8c-65f4-4482-860f-a7009c96e356-bound-sa-token\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.150843 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:09 crc kubenswrapper[5043]: E1125 07:18:09.151492 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:09.651452665 +0000 UTC m=+153.819648386 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.154339 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qdzb\" (UniqueName: \"kubernetes.io/projected/8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb-kube-api-access-5qdzb\") pod \"etcd-operator-b45778765-fhv5q\" (UID: \"8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fhv5q" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.190660 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxgxl\" (UniqueName: \"kubernetes.io/projected/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-kube-api-access-fxgxl\") pod \"oauth-openshift-558db77b4-j7b8d\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.211779 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e72e27f3-de8a-4763-a3ee-cf8ea1531909-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-grmsk\" (UID: \"e72e27f3-de8a-4763-a3ee-cf8ea1531909\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grmsk" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.220858 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k"] Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.237333 
5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jt99\" (UniqueName: \"kubernetes.io/projected/dc0571cc-8090-4490-af8a-8199e9f983a9-kube-api-access-5jt99\") pod \"dns-operator-744455d44c-wz5hw\" (UID: \"dc0571cc-8090-4490-af8a-8199e9f983a9\") " pod="openshift-dns-operator/dns-operator-744455d44c-wz5hw" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.253172 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:09 crc kubenswrapper[5043]: E1125 07:18:09.253884 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:09.753861212 +0000 UTC m=+153.922057003 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.255028 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ssxl\" (UniqueName: \"kubernetes.io/projected/bece8729-924c-4595-88ee-ddcb1873b643-kube-api-access-8ssxl\") pod \"ingress-operator-5b745b69d9-7zfvk\" (UID: \"bece8729-924c-4595-88ee-ddcb1873b643\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zfvk" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.265502 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-l89zw" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.276142 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kkx6" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.283434 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88jlv\" (UniqueName: \"kubernetes.io/projected/9e0fe8fc-205b-4d60-8849-f624e26034ab-kube-api-access-88jlv\") pod \"openshift-apiserver-operator-796bbdcf4f-cw9mz\" (UID: \"9e0fe8fc-205b-4d60-8849-f624e26034ab\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cw9mz" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.298159 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cchz7" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.311804 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97sch\" (UniqueName: \"kubernetes.io/projected/b18ece39-f2f5-41f9-b2e1-79f9f880791b-kube-api-access-97sch\") pod \"console-f9d7485db-lbz4p\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " pod="openshift-console/console-f9d7485db-lbz4p" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.316563 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-729pp\" (UniqueName: \"kubernetes.io/projected/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-kube-api-access-729pp\") pod \"controller-manager-879f6c89f-dfsn8\" (UID: \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.325785 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8hqj6"] Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.335090 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcph2\" (UniqueName: \"kubernetes.io/projected/c3b4caa8-cc12-4739-8e10-d88cd9d4137d-kube-api-access-bcph2\") pod \"kube-storage-version-migrator-operator-b67b599dd-lt62z\" (UID: \"c3b4caa8-cc12-4739-8e10-d88cd9d4137d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lt62z" Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.347117 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.353120 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srvzv\" (UniqueName: \"kubernetes.io/projected/f3a13dff-3c0c-4151-9514-42c40e8bc83f-kube-api-access-srvzv\") pod \"marketplace-operator-79b997595-hqtnq\" (UID: \"f3a13dff-3c0c-4151-9514-42c40e8bc83f\") " pod="openshift-marketplace/marketplace-operator-79b997595-hqtnq"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.353949 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.354371 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dft8v\" (UniqueName: \"kubernetes.io/projected/809f70bc-86b7-4712-b396-79f602e6684d-kube-api-access-dft8v\") pod \"authentication-operator-69f744f599-ptlq5\" (UID: \"809f70bc-86b7-4712-b396-79f602e6684d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.354443 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twglb\" (UniqueName: \"kubernetes.io/projected/881d759e-3077-4c93-b9de-86d8d960d3ca-kube-api-access-twglb\") pod \"cluster-samples-operator-665b6dd947-jrn4l\" (UID: \"881d759e-3077-4c93-b9de-86d8d960d3ca\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrn4l"
Nov 25 07:18:09 crc kubenswrapper[5043]: E1125 07:18:09.356116 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:09.856001813 +0000 UTC m=+154.024197534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.362177 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dft8v\" (UniqueName: \"kubernetes.io/projected/809f70bc-86b7-4712-b396-79f602e6684d-kube-api-access-dft8v\") pod \"authentication-operator-69f744f599-ptlq5\" (UID: \"809f70bc-86b7-4712-b396-79f602e6684d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.365310 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-lbz4p"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.370255 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twglb\" (UniqueName: \"kubernetes.io/projected/881d759e-3077-4c93-b9de-86d8d960d3ca-kube-api-access-twglb\") pod \"cluster-samples-operator-665b6dd947-jrn4l\" (UID: \"881d759e-3077-4c93-b9de-86d8d960d3ca\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrn4l"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.378501 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5htxj\" (UniqueName: \"kubernetes.io/projected/677ed5f9-0fae-4009-b50e-ede07073d251-kube-api-access-5htxj\") pod \"migrator-59844c95c7-x6762\" (UID: \"677ed5f9-0fae-4009-b50e-ede07073d251\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x6762"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.383190 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.395128 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjwkf\" (UniqueName: \"kubernetes.io/projected/8a79e2c5-003a-4929-8ba1-568e8ca6bb01-kube-api-access-xjwkf\") pod \"packageserver-d55dfcdfc-64rrj\" (UID: \"8a79e2c5-003a-4929-8ba1-568e8ca6bb01\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64rrj"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.415794 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1fcc66d-6926-48a3-9473-7539f6e50415-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7mf86\" (UID: \"d1fcc66d-6926-48a3-9473-7539f6e50415\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7mf86"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.421151 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cw9mz"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.433748 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9vt4\" (UniqueName: \"kubernetes.io/projected/38747ab5-dbdb-4c02-bb57-1a0f6f35f1b9-kube-api-access-t9vt4\") pod \"multus-admission-controller-857f4d67dd-kglrl\" (UID: \"38747ab5-dbdb-4c02-bb57-1a0f6f35f1b9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kglrl"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.438991 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zfvk"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.444251 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-fhv5q"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.450092 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grmsk"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.455327 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v"
Nov 25 07:18:09 crc kubenswrapper[5043]: E1125 07:18:09.455840 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:09.955826753 +0000 UTC m=+154.124022474 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.473331 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7mf86"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.473717 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fphv\" (UniqueName: \"kubernetes.io/projected/47d812b6-dea3-420d-8cc4-68ba78877940-kube-api-access-7fphv\") pod \"service-ca-9c57cc56f-pwrdz\" (UID: \"47d812b6-dea3-420d-8cc4-68ba78877940\") " pod="openshift-service-ca/service-ca-9c57cc56f-pwrdz"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.478780 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z64w7\" (UniqueName: \"kubernetes.io/projected/fc143568-8b91-44a5-9fff-5890b6c29e0c-kube-api-access-z64w7\") pod \"machine-config-server-rp8sr\" (UID: \"fc143568-8b91-44a5-9fff-5890b6c29e0c\") " pod="openshift-machine-config-operator/machine-config-server-rp8sr"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.481460 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x6762"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.482028 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.489145 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrn4l"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.490529 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-l89zw"]
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.498032 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lt62z"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.508030 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wz5hw"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.508952 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxsw5\" (UniqueName: \"kubernetes.io/projected/e5355a57-e5cd-4f37-8a4a-d416e1584c4c-kube-api-access-sxsw5\") pod \"olm-operator-6b444d44fb-j7vsb\" (UID: \"e5355a57-e5cd-4f37-8a4a-d416e1584c4c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j7vsb"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.514439 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cchz7"]
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.515378 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78z6p\" (UniqueName: \"kubernetes.io/projected/576eeef9-fcf9-4db0-a0cc-4083e03277f6-kube-api-access-78z6p\") pod \"collect-profiles-29400915-zd9vl\" (UID: \"576eeef9-fcf9-4db0-a0cc-4083e03277f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-zd9vl"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.533213 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kglrl"
Nov 25 07:18:09 crc kubenswrapper[5043]: W1125 07:18:09.535354 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd00aa552_e700_4bec_9818_3084ac601a92.slice/crio-bda35adf9c10e1ca87015fbc77ffa330b7b46ae9ce8244c33c6bc93585747633 WatchSource:0}: Error finding container bda35adf9c10e1ca87015fbc77ffa330b7b46ae9ce8244c33c6bc93585747633: Status 404 returned error can't find the container with id bda35adf9c10e1ca87015fbc77ffa330b7b46ae9ce8244c33c6bc93585747633
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.536076 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xqk2\" (UniqueName: \"kubernetes.io/projected/79fa92ce-d201-4f86-b3b8-3311def2e2cf-kube-api-access-2xqk2\") pod \"machine-config-controller-84d6567774-gskgn\" (UID: \"79fa92ce-d201-4f86-b3b8-3311def2e2cf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gskgn"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.547944 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-zd9vl"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.555305 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-pwrdz"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.555976 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 07:18:09 crc kubenswrapper[5043]: E1125 07:18:09.556990 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:10.056973917 +0000 UTC m=+154.225169638 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.560728 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2tsv\" (UniqueName: \"kubernetes.io/projected/5e696527-7a38-49ea-8517-9f286a7daff0-kube-api-access-d2tsv\") pod \"csi-hostpathplugin-b6j49\" (UID: \"5e696527-7a38-49ea-8517-9f286a7daff0\") " pod="hostpath-provisioner/csi-hostpathplugin-b6j49"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.574666 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjr9g\" (UniqueName: \"kubernetes.io/projected/bf4d2d7a-3f37-4c3c-8895-27aef30652af-kube-api-access-vjr9g\") pod \"ingress-canary-q4h7x\" (UID: \"bf4d2d7a-3f37-4c3c-8895-27aef30652af\") " pod="openshift-ingress-canary/ingress-canary-q4h7x"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.574986 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j7vsb"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.575565 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64rrj"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.592228 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hqtnq"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.592551 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs4fr\" (UniqueName: \"kubernetes.io/projected/96a3e8da-2eb3-472d-84f8-819566b5dbcf-kube-api-access-vs4fr\") pod \"dns-default-n7xt5\" (UID: \"96a3e8da-2eb3-472d-84f8-819566b5dbcf\") " pod="openshift-dns/dns-default-n7xt5"
Nov 25 07:18:09 crc kubenswrapper[5043]: W1125 07:18:09.595920 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod060543b5_b830_412a_916a_0456db20f1ca.slice/crio-c731a80cf5ed209e1c34db3680c7a5596b7c93e3511f89a5d30f1ac98b80aa49 WatchSource:0}: Error finding container c731a80cf5ed209e1c34db3680c7a5596b7c93e3511f89a5d30f1ac98b80aa49: Status 404 returned error can't find the container with id c731a80cf5ed209e1c34db3680c7a5596b7c93e3511f89a5d30f1ac98b80aa49
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.602864 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-b6j49"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.611212 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m926g\" (UniqueName: \"kubernetes.io/projected/525278dd-e0d0-44bb-ba09-c4e0b82a268f-kube-api-access-m926g\") pod \"package-server-manager-789f6589d5-8w2c6\" (UID: \"525278dd-e0d0-44bb-ba09-c4e0b82a268f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8w2c6"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.619591 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-n7xt5"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.629673 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-q4h7x"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.630350 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d25605d1-1ee0-4c4b-a282-9986dace43f5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd5w5\" (UID: \"d25605d1-1ee0-4c4b-a282-9986dace43f5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd5w5"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.636859 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rp8sr"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.654255 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4crt\" (UniqueName: \"kubernetes.io/projected/f110819e-9e33-4cf3-85b0-b92eaaaa223b-kube-api-access-p4crt\") pod \"control-plane-machine-set-operator-78cbb6b69f-gnr9c\" (UID: \"f110819e-9e33-4cf3-85b0-b92eaaaa223b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnr9c"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.658807 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v"
Nov 25 07:18:09 crc kubenswrapper[5043]: E1125 07:18:09.659189 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:10.159173189 +0000 UTC m=+154.327368910 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.672586 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6kxh\" (UniqueName: \"kubernetes.io/projected/4685134b-7bce-4398-8492-74670ecda13e-kube-api-access-d6kxh\") pod \"service-ca-operator-777779d784-9bx8l\" (UID: \"4685134b-7bce-4398-8492-74670ecda13e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bx8l"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.689291 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7gjh\" (UniqueName: \"kubernetes.io/projected/5d6fe950-18fc-440b-ad82-014a34669117-kube-api-access-l7gjh\") pod \"catalog-operator-68c6474976-9rfgk\" (UID: \"5d6fe950-18fc-440b-ad82-014a34669117\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9rfgk"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.706475 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-j7b8d"]
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.725184 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px6d4\" (UniqueName: \"kubernetes.io/projected/9c13ef47-e6a6-421d-9ff3-7438aa746faf-kube-api-access-px6d4\") pod \"machine-config-operator-74547568cd-bsrgx\" (UID: \"9c13ef47-e6a6-421d-9ff3-7438aa746faf\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bsrgx"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.759984 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 07:18:09 crc kubenswrapper[5043]: E1125 07:18:09.760191 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:10.26016245 +0000 UTC m=+154.428358171 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.760457 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v"
Nov 25 07:18:09 crc kubenswrapper[5043]: E1125 07:18:09.760868 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:10.260851758 +0000 UTC m=+154.429047479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.767338 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd5w5"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.772750 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8hqj6" event={"ID":"2b80cda5-b011-4b59-869f-67ed66fe1a5a","Type":"ContainerStarted","Data":"404a3b9c0ae61b2959c0e24547be4e9eb717821b99b442ab493074507731f31d"}
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.774497 5043 generic.go:334] "Generic (PLEG): container finished" podID="81f790c4-a6b8-4bb7-8a46-107e7ad04689" containerID="44db58fc5fa53ca66c44881639552bd0df2e937ed835e2f5df132640ea1ae82f" exitCode=0
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.774571 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jxz26" event={"ID":"81f790c4-a6b8-4bb7-8a46-107e7ad04689","Type":"ContainerDied","Data":"44db58fc5fa53ca66c44881639552bd0df2e937ed835e2f5df132640ea1ae82f"}
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.774988 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jxz26" event={"ID":"81f790c4-a6b8-4bb7-8a46-107e7ad04689","Type":"ContainerStarted","Data":"18953a0e7030a753f942b85be4d3fd8e9270eb3e81f7c9755bd7f91d8c3fb415"}
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.790025 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cchz7" event={"ID":"060543b5-b830-412a-916a-0456db20f1ca","Type":"ContainerStarted","Data":"c731a80cf5ed209e1c34db3680c7a5596b7c93e3511f89a5d30f1ac98b80aa49"}
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.791209 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-l89zw" event={"ID":"d00aa552-e700-4bec-9818-3084ac601a92","Type":"ContainerStarted","Data":"bda35adf9c10e1ca87015fbc77ffa330b7b46ae9ce8244c33c6bc93585747633"}
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.793864 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnr9c"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.798884 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4q5v5" event={"ID":"ed1bbbdd-aa02-4472-867f-ef6f2c991728","Type":"ContainerStarted","Data":"4ed54ba9d00f26e455e12db7a372c283c92c47bc6164fd062eefcec184ea3a35"}
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.799194 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-4q5v5"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.801683 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k" event={"ID":"f75a6197-9de8-4720-af31-ebc12fe35e48","Type":"ContainerStarted","Data":"324421170ddd789aa79c6e2e6fa02d68ce42a2f4ea248cc93dec7e8ea7a49cab"}
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.801718 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k" event={"ID":"f75a6197-9de8-4720-af31-ebc12fe35e48","Type":"ContainerStarted","Data":"d17f8caae9b53749642a3f8b973a57d05ee6e852b8251047d4dac746297f00df"}
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.802479 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kkx6" event={"ID":"ab8cbc1d-68e1-40c7-a280-4852974cf941","Type":"ContainerStarted","Data":"bdd279faed28630c151134e2f28c801331cde35f056f0b1385b71c057661fcac"}
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.803891 5043 generic.go:334] "Generic (PLEG): container finished" podID="f60df734-c1b3-4b19-9655-5d64097787f7" containerID="760bcac11af5a9ef16b8ac8f34b190b74e58ea8db649d0eb4dbfccb76871df1d" exitCode=0
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.803940 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" event={"ID":"f60df734-c1b3-4b19-9655-5d64097787f7","Type":"ContainerDied","Data":"760bcac11af5a9ef16b8ac8f34b190b74e58ea8db649d0eb4dbfccb76871df1d"}
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.803955 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" event={"ID":"f60df734-c1b3-4b19-9655-5d64097787f7","Type":"ContainerStarted","Data":"82b05fca5f2d9a896ae97c8ec127a11cf2d22d42ebcde70134ed5bd034a56218"}
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.804032 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bsrgx"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.819952 5043 patch_prober.go:28] interesting pod/downloads-7954f5f757-4q5v5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.820018 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4q5v5" podUID="ed1bbbdd-aa02-4472-867f-ef6f2c991728" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.820264 5043 generic.go:334] "Generic (PLEG): container finished" podID="a3195ecd-280e-475f-a3e5-7081b0db65f3" containerID="b62293d47fc97b4178d730c63199ab670103fd16bf600365d5ac01dff857853f" exitCode=0
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.820367 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-66swt" event={"ID":"a3195ecd-280e-475f-a3e5-7081b0db65f3","Type":"ContainerDied","Data":"b62293d47fc97b4178d730c63199ab670103fd16bf600365d5ac01dff857853f"}
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.820399 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-66swt" event={"ID":"a3195ecd-280e-475f-a3e5-7081b0db65f3","Type":"ContainerStarted","Data":"8e9834e0e4de57219949c9118154bd8e87d5ec79815c2ed5b257d75f334a6c41"}
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.821650 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gskgn"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.823501 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cw9mz"]
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.823641 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8w2c6"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.825017 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grmsk"]
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.826321 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-7c46k" event={"ID":"29f8c3a9-2f22-49c3-8423-537a8e3b819d","Type":"ContainerStarted","Data":"9c0114c0bf76a399db7141964cb6f38299aef73eab1ccf12db13ad4e5f0b0164"}
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.827377 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-7c46k"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.829144 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-6b4s4" event={"ID":"eab6e215-cf16-47d9-9049-9f6a0ed1239a","Type":"ContainerStarted","Data":"677e6e90e5dfc0039601421bf38c2a30316dcbb27476cbe68e9651d8f75026f7"}
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.837719 5043 patch_prober.go:28] interesting pod/console-operator-58897d9998-7c46k container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/readyz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.837772 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-7c46k" podUID="29f8c3a9-2f22-49c3-8423-537a8e3b819d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/readyz\": dial tcp 10.217.0.28:8443: connect: connection refused"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.840217 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9rfgk"
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.853890 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-lbz4p"]
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.861950 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 07:18:09 crc kubenswrapper[5043]: E1125 07:18:09.862188 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:10.362155607 +0000 UTC m=+154.530351338 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.862313 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v"
Nov 25 07:18:09 crc kubenswrapper[5043]: E1125 07:18:09.863263 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:10.363253726 +0000 UTC m=+154.531449447 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.911784 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bx8l"
Nov 25 07:18:09 crc kubenswrapper[5043]: W1125 07:18:09.920292 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb18ece39_f2f5_41f9_b2e1_79f9f880791b.slice/crio-607355d516227667d8171f6cf8612dc83a3e65d39d798a730178de8c3f9e0e4a WatchSource:0}: Error finding container 607355d516227667d8171f6cf8612dc83a3e65d39d798a730178de8c3f9e0e4a: Status 404 returned error can't find the container with id 607355d516227667d8171f6cf8612dc83a3e65d39d798a730178de8c3f9e0e4a
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.965210 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 07:18:09 crc kubenswrapper[5043]: E1125 07:18:09.965634 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:10.465598282 +0000 UTC m=+154.633794003 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 07:18:09 crc kubenswrapper[5043]: I1125 07:18:09.966756 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dfsn8"]
Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.066271 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v"
Nov 25 07:18:10 crc kubenswrapper[5043]: E1125 07:18:10.067248 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:10.567195758 +0000 UTC m=+154.735391489 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.120564 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kglrl"] Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.125250 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7zfvk"] Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.144642 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400915-zd9vl"] Nov 25 07:18:10 crc kubenswrapper[5043]: W1125 07:18:10.154313 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc143568_8b91_44a5_9fff_5890b6c29e0c.slice/crio-4509e6f48e7702e1dbb46cd3658bbe099a02a59865a354d37443f3bbb9582121 WatchSource:0}: Error finding container 4509e6f48e7702e1dbb46cd3658bbe099a02a59865a354d37443f3bbb9582121: Status 404 returned error can't find the container with id 4509e6f48e7702e1dbb46cd3658bbe099a02a59865a354d37443f3bbb9582121 Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.177834 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:10 crc kubenswrapper[5043]: 
E1125 07:18:10.178059 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:10.678002616 +0000 UTC m=+154.846198337 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.178186 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:10 crc kubenswrapper[5043]: E1125 07:18:10.178566 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:10.67855239 +0000 UTC m=+154.846748121 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.278970 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:10 crc kubenswrapper[5043]: E1125 07:18:10.284773 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:10.784735917 +0000 UTC m=+154.952931638 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.284879 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:10 crc kubenswrapper[5043]: E1125 07:18:10.285427 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:10.785416376 +0000 UTC m=+154.953612097 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.359124 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrn4l"] Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.390216 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:10 crc kubenswrapper[5043]: E1125 07:18:10.390323 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:10.890298908 +0000 UTC m=+155.058494629 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.390464 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:10 crc kubenswrapper[5043]: E1125 07:18:10.390803 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:10.890792181 +0000 UTC m=+155.058987902 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.491838 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:10 crc kubenswrapper[5043]: E1125 07:18:10.492233 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:10.992214763 +0000 UTC m=+155.160410494 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.509307 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-7c46k" podStartSLOduration=127.509282071 podStartE2EDuration="2m7.509282071s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:10.505099281 +0000 UTC m=+154.673295002" watchObservedRunningTime="2025-11-25 07:18:10.509282071 +0000 UTC m=+154.677477792" Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.558929 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-6b4s4" Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.577808 5043 patch_prober.go:28] interesting pod/router-default-5444994796-6b4s4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 07:18:10 crc kubenswrapper[5043]: [-]has-synced failed: reason withheld Nov 25 07:18:10 crc kubenswrapper[5043]: [+]process-running ok Nov 25 07:18:10 crc kubenswrapper[5043]: healthz check failed Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.577867 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6b4s4" podUID="eab6e215-cf16-47d9-9049-9f6a0ed1239a" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.594112 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:10 crc kubenswrapper[5043]: E1125 07:18:10.594919 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:11.094883638 +0000 UTC m=+155.263079359 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.675479 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-4q5v5" podStartSLOduration=127.674846896 podStartE2EDuration="2m7.674846896s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:10.667682078 +0000 UTC m=+154.835877799" watchObservedRunningTime="2025-11-25 07:18:10.674846896 +0000 UTC m=+154.843042617" Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.695769 5043 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:10 crc kubenswrapper[5043]: E1125 07:18:10.695994 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:11.19595872 +0000 UTC m=+155.364154451 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.696070 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:10 crc kubenswrapper[5043]: E1125 07:18:10.696407 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:11.196388381 +0000 UTC m=+155.364584102 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.797627 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:10 crc kubenswrapper[5043]: E1125 07:18:10.797979 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:11.297961747 +0000 UTC m=+155.466157468 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.857988 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" event={"ID":"4531175b-96a0-44e6-8cb8-c69bf9eb9d27","Type":"ContainerStarted","Data":"76ca41e4848c74159602c7af9da9868dd7427df5a6d32c680291ae1fbba4142c"} Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.860812 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kglrl" event={"ID":"38747ab5-dbdb-4c02-bb57-1a0f6f35f1b9","Type":"ContainerStarted","Data":"bd8d58ba4d34341fdd78bdd773fa801fe7d8f6f76ea6c8a8178d7ecb63ceb5c3"} Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.864317 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-lbz4p" event={"ID":"b18ece39-f2f5-41f9-b2e1-79f9f880791b","Type":"ContainerStarted","Data":"607355d516227667d8171f6cf8612dc83a3e65d39d798a730178de8c3f9e0e4a"} Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.865150 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zfvk" event={"ID":"bece8729-924c-4595-88ee-ddcb1873b643","Type":"ContainerStarted","Data":"2c959eadd6b0fd545cec98c7d295ebe31aa72138930ad744606a1e37ffc1907d"} Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.866305 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-l89zw" 
event={"ID":"d00aa552-e700-4bec-9818-3084ac601a92","Type":"ContainerStarted","Data":"59b44d0e5e82a4eb899376886f53b040ee393c70a9e691a4c7d8be2d6c352178"} Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.867228 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rp8sr" event={"ID":"fc143568-8b91-44a5-9fff-5890b6c29e0c","Type":"ContainerStarted","Data":"4509e6f48e7702e1dbb46cd3658bbe099a02a59865a354d37443f3bbb9582121"} Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.868738 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8" event={"ID":"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc","Type":"ContainerStarted","Data":"81ee5b9c0e3b7721b9c49c79f1fe0089e945ea1fef0f68819458f8ad454c94e4"} Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.881448 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8hqj6" event={"ID":"2b80cda5-b011-4b59-869f-67ed66fe1a5a","Type":"ContainerStarted","Data":"7d8ecf7e99beac35f0bdf212987230630fb7d2bed054b669e4959f5b27eeb439"} Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.898833 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:10 crc kubenswrapper[5043]: E1125 07:18:10.900011 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:11.399996036 +0000 UTC m=+155.568191757 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.905722 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kkx6" event={"ID":"ab8cbc1d-68e1-40c7-a280-4852974cf941","Type":"ContainerStarted","Data":"cd8e9fbc4157c0e880f25e61fc371273a6626679f110eb1e5222e41ce4e29d21"} Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.908676 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ptlq5"] Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.910560 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-zd9vl" event={"ID":"576eeef9-fcf9-4db0-a0cc-4083e03277f6","Type":"ContainerStarted","Data":"abd66b60581dccf7228eb6f9163fb433f4567f0aee3dbc04306395a516355e08"} Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.912079 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wz5hw"] Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.914305 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grmsk" event={"ID":"e72e27f3-de8a-4763-a3ee-cf8ea1531909","Type":"ContainerStarted","Data":"67fe9a4791b20a622dce802ed9760a152f4763ddf4f472c915b24d8747ca9572"} Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.916707 5043 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cchz7" event={"ID":"060543b5-b830-412a-916a-0456db20f1ca","Type":"ContainerStarted","Data":"509f4c7304b94870d52a823bf65b558ec857c71a79c233bfe51d2daafd1b8d57"} Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.920366 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cw9mz" event={"ID":"9e0fe8fc-205b-4d60-8849-f624e26034ab","Type":"ContainerStarted","Data":"7023471f9255e8aef9ac1100d569ba27a0ff7cbd738c69d398c2f1f1a603de3d"} Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.921088 5043 patch_prober.go:28] interesting pod/console-operator-58897d9998-7c46k container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/readyz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.921128 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-7c46k" podUID="29f8c3a9-2f22-49c3-8423-537a8e3b819d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/readyz\": dial tcp 10.217.0.28:8443: connect: connection refused" Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.921975 5043 patch_prober.go:28] interesting pod/downloads-7954f5f757-4q5v5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.922005 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4q5v5" podUID="ed1bbbdd-aa02-4472-867f-ef6f2c991728" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Nov 25 07:18:10 crc kubenswrapper[5043]: I1125 07:18:10.926742 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fhv5q"] Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.001956 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:11 crc kubenswrapper[5043]: E1125 07:18:11.009448 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:11.50611319 +0000 UTC m=+155.674308911 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.103739 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:11 crc kubenswrapper[5043]: E1125 07:18:11.104053 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:11.60403773 +0000 UTC m=+155.772233461 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.110658 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-6b4s4" podStartSLOduration=128.110640004 podStartE2EDuration="2m8.110640004s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:11.108422905 +0000 UTC m=+155.276618626" watchObservedRunningTime="2025-11-25 07:18:11.110640004 +0000 UTC m=+155.278835745" Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.204838 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:11 crc kubenswrapper[5043]: E1125 07:18:11.205127 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:11.705072971 +0000 UTC m=+155.873268702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.205451 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:11 crc kubenswrapper[5043]: E1125 07:18:11.205802 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:11.705790481 +0000 UTC m=+155.873986202 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.242055 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnr9c"] Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.264648 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j7vsb"] Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.292955 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-gskgn"] Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.295230 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd5w5"] Nov 25 07:18:11 crc kubenswrapper[5043]: W1125 07:18:11.304098 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf110819e_9e33_4cf3_85b0_b92eaaaa223b.slice/crio-82c71bf61603864e7f7153bb94530dcbe55190eacac00fec39c17e9bfbcdd727 WatchSource:0}: Error finding container 82c71bf61603864e7f7153bb94530dcbe55190eacac00fec39c17e9bfbcdd727: Status 404 returned error can't find the container with id 82c71bf61603864e7f7153bb94530dcbe55190eacac00fec39c17e9bfbcdd727 Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.307785 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:11 crc kubenswrapper[5043]: E1125 07:18:11.308244 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:11.808229189 +0000 UTC m=+155.976424910 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.311147 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-q4h7x"] Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.311735 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9rfgk"] Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.313763 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-n7xt5"] Nov 25 07:18:11 crc kubenswrapper[5043]: W1125 07:18:11.315345 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5355a57_e5cd_4f37_8a4a_d416e1584c4c.slice/crio-5730f4682d58b25538576186ec06934de09ec631d14cf5e37ebcdd8054b553c5 WatchSource:0}: Error finding container 5730f4682d58b25538576186ec06934de09ec631d14cf5e37ebcdd8054b553c5: Status 
404 returned error can't find the container with id 5730f4682d58b25538576186ec06934de09ec631d14cf5e37ebcdd8054b553c5 Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.324910 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9bx8l"] Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.328583 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lt62z"] Nov 25 07:18:11 crc kubenswrapper[5043]: W1125 07:18:11.330585 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d6fe950_18fc_440b_ad82_014a34669117.slice/crio-61fac1f7480993e8462cde540bf01ae30684a83de2e5339bf40f586aefeb350d WatchSource:0}: Error finding container 61fac1f7480993e8462cde540bf01ae30684a83de2e5339bf40f586aefeb350d: Status 404 returned error can't find the container with id 61fac1f7480993e8462cde540bf01ae30684a83de2e5339bf40f586aefeb350d Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.337689 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7mf86"] Nov 25 07:18:11 crc kubenswrapper[5043]: W1125 07:18:11.338047 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79fa92ce_d201_4f86_b3b8_3311def2e2cf.slice/crio-392be91d2e4d1f5a01127d0ae8df656cca8d02da111120951708a089cca67b8d WatchSource:0}: Error finding container 392be91d2e4d1f5a01127d0ae8df656cca8d02da111120951708a089cca67b8d: Status 404 returned error can't find the container with id 392be91d2e4d1f5a01127d0ae8df656cca8d02da111120951708a089cca67b8d Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.338774 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64rrj"] Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.340867 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-b6j49"] Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.342932 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bsrgx"] Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.348455 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pwrdz"] Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.355165 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-x6762"] Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.357094 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hqtnq"] Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.359151 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8w2c6"] Nov 25 07:18:11 crc kubenswrapper[5043]: W1125 07:18:11.371066 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96a3e8da_2eb3_472d_84f8_819566b5dbcf.slice/crio-0333af229b2d985494ed2307098669b84a971bf7052ab9c23cecbbdaca3b9a9c WatchSource:0}: Error finding container 0333af229b2d985494ed2307098669b84a971bf7052ab9c23cecbbdaca3b9a9c: Status 404 returned error can't find the container with id 0333af229b2d985494ed2307098669b84a971bf7052ab9c23cecbbdaca3b9a9c Nov 25 07:18:11 crc kubenswrapper[5043]: W1125 07:18:11.399975 5043 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47d812b6_dea3_420d_8cc4_68ba78877940.slice/crio-f588d39dd0a13b59384a7e2f5ebcfb6dc0041e60c0c12816bd80f4af571d1e88 WatchSource:0}: Error finding container f588d39dd0a13b59384a7e2f5ebcfb6dc0041e60c0c12816bd80f4af571d1e88: Status 404 returned error can't find the container with id f588d39dd0a13b59384a7e2f5ebcfb6dc0041e60c0c12816bd80f4af571d1e88 Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.409504 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:11 crc kubenswrapper[5043]: E1125 07:18:11.409964 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:11.909945979 +0000 UTC m=+156.078141700 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.507339 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k" podStartSLOduration=128.507319024 podStartE2EDuration="2m8.507319024s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:11.505993529 +0000 UTC m=+155.674189270" watchObservedRunningTime="2025-11-25 07:18:11.507319024 +0000 UTC m=+155.675514755" Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.514900 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:11 crc kubenswrapper[5043]: E1125 07:18:11.515389 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:12.015366786 +0000 UTC m=+156.183562507 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.550626 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8hqj6" podStartSLOduration=128.55059666 podStartE2EDuration="2m8.55059666s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:11.549377138 +0000 UTC m=+155.717572859" watchObservedRunningTime="2025-11-25 07:18:11.55059666 +0000 UTC m=+155.718792381" Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.581343 5043 patch_prober.go:28] interesting pod/router-default-5444994796-6b4s4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 07:18:11 crc kubenswrapper[5043]: [-]has-synced failed: reason withheld Nov 25 07:18:11 crc kubenswrapper[5043]: [+]process-running ok Nov 25 07:18:11 crc kubenswrapper[5043]: healthz check failed Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.581409 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6b4s4" podUID="eab6e215-cf16-47d9-9049-9f6a0ed1239a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.616227 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:11 crc kubenswrapper[5043]: E1125 07:18:11.616526 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:12.1165123 +0000 UTC m=+156.284708021 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.668046 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cchz7" podStartSLOduration=128.668028242 podStartE2EDuration="2m8.668028242s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:11.631597686 +0000 UTC m=+155.799793407" watchObservedRunningTime="2025-11-25 07:18:11.668028242 +0000 UTC m=+155.836223963" Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.717327 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:11 crc kubenswrapper[5043]: E1125 07:18:11.717926 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:12.217904452 +0000 UTC m=+156.386100173 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.718040 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:11 crc kubenswrapper[5043]: E1125 07:18:11.718377 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:12.218362773 +0000 UTC m=+156.386558494 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.819161 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:11 crc kubenswrapper[5043]: E1125 07:18:11.819314 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:12.319288062 +0000 UTC m=+156.487483783 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.819419 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:11 crc kubenswrapper[5043]: E1125 07:18:11.819814 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:12.319800346 +0000 UTC m=+156.487996077 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.920350 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:11 crc kubenswrapper[5043]: E1125 07:18:11.920639 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:12.420614592 +0000 UTC m=+156.588810313 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.920859 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:11 crc kubenswrapper[5043]: E1125 07:18:11.921267 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:12.421250698 +0000 UTC m=+156.589446409 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.934848 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9rfgk" event={"ID":"5d6fe950-18fc-440b-ad82-014a34669117","Type":"ContainerStarted","Data":"61fac1f7480993e8462cde540bf01ae30684a83de2e5339bf40f586aefeb350d"} Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.958955 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cw9mz" event={"ID":"9e0fe8fc-205b-4d60-8849-f624e26034ab","Type":"ContainerStarted","Data":"aa441948cd68f5f9a58b6d817cb2bb54cc52aac23841d5894864e3b7ff133c9b"} Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.963337 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lt62z" event={"ID":"c3b4caa8-cc12-4739-8e10-d88cd9d4137d","Type":"ContainerStarted","Data":"13e550d6ad529fb0ebbcbf7c58d769875c56ee0f27c34d1d8641812c846e4c6c"} Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.970558 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8" event={"ID":"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc","Type":"ContainerStarted","Data":"21c18547d4cc2b8ca1874d3452249d660033deed4fb42e10d00ead1d19e43ab4"} Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.970959 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8" Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.973138 5043 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-dfsn8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.973171 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8" podUID="e0f07e95-4043-41c1-9f91-b79a6f7b9bbc" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.976072 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wz5hw" event={"ID":"dc0571cc-8090-4490-af8a-8199e9f983a9","Type":"ContainerStarted","Data":"a39226193b437ad951b25d73ed2a3b5e24c3176557c0960d42ccc53caadc6f8e"} Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.976118 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wz5hw" event={"ID":"dc0571cc-8090-4490-af8a-8199e9f983a9","Type":"ContainerStarted","Data":"5e125b951745c9f9c6e3c8b2d86ed0ba68c1d09451ab564f7721ca0902311509"} Nov 25 07:18:11 crc kubenswrapper[5043]: I1125 07:18:11.997934 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bx8l" event={"ID":"4685134b-7bce-4398-8492-74670ecda13e","Type":"ContainerStarted","Data":"db639e8ddce6198f420a849e77d8087269cc96026cd04e6fbdb7ca5ce4b26e12"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.002922 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-jxz26" event={"ID":"81f790c4-a6b8-4bb7-8a46-107e7ad04689","Type":"ContainerStarted","Data":"5f51a49fa38724f514b4b37101d9a7af4b1f0623c26ed7df478b723f5fb80c8a"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.005392 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnr9c" event={"ID":"f110819e-9e33-4cf3-85b0-b92eaaaa223b","Type":"ContainerStarted","Data":"f1ac23d0234a0ba2091af336f86787bf53e2c74e835ae76defcd6ace7fce3415"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.005492 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnr9c" event={"ID":"f110819e-9e33-4cf3-85b0-b92eaaaa223b","Type":"ContainerStarted","Data":"82c71bf61603864e7f7153bb94530dcbe55190eacac00fec39c17e9bfbcdd727"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.007008 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8w2c6" event={"ID":"525278dd-e0d0-44bb-ba09-c4e0b82a268f","Type":"ContainerStarted","Data":"71dba3d2970f63629aa7850b82f4230cb7e168c93649c7ebab3e55af218aee5c"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.009791 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-lbz4p" event={"ID":"b18ece39-f2f5-41f9-b2e1-79f9f880791b","Type":"ContainerStarted","Data":"33164da69808a1ed3c9ab89d900bcf6b7cd2e1640f5a2602f4f4c9cc2eaae19a"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.021645 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-fhv5q" event={"ID":"8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb","Type":"ContainerStarted","Data":"3facfc01e3925c842ea7d07aa3b9bb531127882c52f87fa1d1b3f079ab4f0201"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.021695 5043 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-fhv5q" event={"ID":"8a376e0c-0ce8-4b9b-a2d8-0fb246cd10cb","Type":"ContainerStarted","Data":"10ca3505431a726c2640b6efc0c8011dce8ab4b745a5b0c0bec83cba9fe8e184"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.023645 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5" event={"ID":"809f70bc-86b7-4712-b396-79f602e6684d","Type":"ContainerStarted","Data":"1e7a0f222e3dce36f2271936042fd9844e8d7914177439a3ae2076bf311e047b"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.023674 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5" event={"ID":"809f70bc-86b7-4712-b396-79f602e6684d","Type":"ContainerStarted","Data":"dae62c1944363148a12e8fd554fe91f8ebdb2939546a62b82ab0709794737817"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.023969 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:12 crc kubenswrapper[5043]: E1125 07:18:12.024102 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:12.524083077 +0000 UTC m=+156.692278798 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.024317 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.026170 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gskgn" event={"ID":"79fa92ce-d201-4f86-b3b8-3311def2e2cf","Type":"ContainerStarted","Data":"392be91d2e4d1f5a01127d0ae8df656cca8d02da111120951708a089cca67b8d"} Nov 25 07:18:12 crc kubenswrapper[5043]: E1125 07:18:12.028391 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:12.528367419 +0000 UTC m=+156.696563140 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.031203 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-q4h7x" event={"ID":"bf4d2d7a-3f37-4c3c-8895-27aef30652af","Type":"ContainerStarted","Data":"645b18c2f4b83a7ea5002e16c06787e9b36c8af1b552834cb18fc022e87922cb"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.032774 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64rrj" event={"ID":"8a79e2c5-003a-4929-8ba1-568e8ca6bb01","Type":"ContainerStarted","Data":"d85524a39edda0bddf76f93ae815dc3760648526a39ad62ac256e38b1b12b025"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.034197 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rp8sr" event={"ID":"fc143568-8b91-44a5-9fff-5890b6c29e0c","Type":"ContainerStarted","Data":"cf574f9e7eca94a53f1ab421376b5beec30c6ade6afdccb15349accd00974356"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.036329 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-zd9vl" event={"ID":"576eeef9-fcf9-4db0-a0cc-4083e03277f6","Type":"ContainerStarted","Data":"e51fddbfd0a9b8cb2d63a12d137aceda2471ab2b859c1b5387bf24f9a2713be1"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.040085 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrn4l" event={"ID":"881d759e-3077-4c93-b9de-86d8d960d3ca","Type":"ContainerStarted","Data":"1d224fe5e6d0796f58d4815e9378f2741e04e30e1cc9bd22b8d8d69738b7f586"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.040128 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrn4l" event={"ID":"881d759e-3077-4c93-b9de-86d8d960d3ca","Type":"ContainerStarted","Data":"7b7771ec0b9dbfeb9e3963eb8f6f4dd6889671a36aee202913de9b8f8faa7b17"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.042219 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-66swt" event={"ID":"a3195ecd-280e-475f-a3e5-7081b0db65f3","Type":"ContainerStarted","Data":"508f7bd147ab431ca7b6139759d2451bb29af43ca5b01af6bb86765119ed38a8"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.042405 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-66swt" Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.044133 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-l89zw" event={"ID":"d00aa552-e700-4bec-9818-3084ac601a92","Type":"ContainerStarted","Data":"bf82b8a546e01b765613aaf8e7da565f9711455b82f5468a08a3bc2a378b64f5"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.046881 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kglrl" event={"ID":"38747ab5-dbdb-4c02-bb57-1a0f6f35f1b9","Type":"ContainerStarted","Data":"06c3818d983ad61745682597bc06f3c8acb04737889029e2c88ed0a85f7f6292"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.047911 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hqtnq" 
event={"ID":"f3a13dff-3c0c-4151-9514-42c40e8bc83f","Type":"ContainerStarted","Data":"375c5082eca5dca8a989aec2ce8e6c23d9e68a959a6f6706b4bfa7900499c96e"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.050018 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j7vsb" event={"ID":"e5355a57-e5cd-4f37-8a4a-d416e1584c4c","Type":"ContainerStarted","Data":"a09624947ff2d08c10ea5717a892c8c9415dfeee4322a0a03df4207edaac0d7e"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.050069 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j7vsb" event={"ID":"e5355a57-e5cd-4f37-8a4a-d416e1584c4c","Type":"ContainerStarted","Data":"5730f4682d58b25538576186ec06934de09ec631d14cf5e37ebcdd8054b553c5"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.050326 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j7vsb" Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.052593 5043 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-j7vsb container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.052660 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j7vsb" podUID="e5355a57-e5cd-4f37-8a4a-d416e1584c4c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.052867 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" 
event={"ID":"4531175b-96a0-44e6-8cb8-c69bf9eb9d27","Type":"ContainerStarted","Data":"ec796cf82be3ccafb1c8d6103c549d4bd8b62d49ccfca2941e5cfcf7828aab07"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.052908 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.056234 5043 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-j7b8d container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.056259 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" podUID="4531175b-96a0-44e6-8cb8-c69bf9eb9d27" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.060172 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd5w5" event={"ID":"d25605d1-1ee0-4c4b-a282-9986dace43f5","Type":"ContainerStarted","Data":"2ec54f7891f5b1bce3719efdfa9704d757a24ba968d076db5bcdbf43a8e392b6"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.063675 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7mf86" event={"ID":"d1fcc66d-6926-48a3-9473-7539f6e50415","Type":"ContainerStarted","Data":"f1b64d1234877398c330e963a8119f462ddc9973772f8f6f47e512917439c45b"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.073283 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-pwrdz" 
event={"ID":"47d812b6-dea3-420d-8cc4-68ba78877940","Type":"ContainerStarted","Data":"f588d39dd0a13b59384a7e2f5ebcfb6dc0041e60c0c12816bd80f4af571d1e88"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.077055 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x6762" event={"ID":"677ed5f9-0fae-4009-b50e-ede07073d251","Type":"ContainerStarted","Data":"622ec05d8d7b9fe682ea09e0062f9fefd885b3824239ae75d116d2282fd27456"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.130663 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-n7xt5" event={"ID":"96a3e8da-2eb3-472d-84f8-819566b5dbcf","Type":"ContainerStarted","Data":"0333af229b2d985494ed2307098669b84a971bf7052ab9c23cecbbdaca3b9a9c"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.131814 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:12 crc kubenswrapper[5043]: E1125 07:18:12.132850 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:12.632826741 +0000 UTC m=+156.801022472 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.139662 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b6j49" event={"ID":"5e696527-7a38-49ea-8517-9f286a7daff0","Type":"ContainerStarted","Data":"624869b6a734d531d6daa63872dd26b49bc58f1bc9ec165e07458551dd966012"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.149145 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kkx6" event={"ID":"ab8cbc1d-68e1-40c7-a280-4852974cf941","Type":"ContainerStarted","Data":"a6434521680467a142842509ab79b13295d2b097b141829f21ee54a38223b7aa"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.182810 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" event={"ID":"f60df734-c1b3-4b19-9655-5d64097787f7","Type":"ContainerStarted","Data":"6f10bd5559e2539c465db30111b83803575be968e2b93c7cf140793eae44c6fa"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.203710 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grmsk" event={"ID":"e72e27f3-de8a-4763-a3ee-cf8ea1531909","Type":"ContainerStarted","Data":"39863fd9ff92b12813528b4e27249b12e5bf23b4b6443981ca3a0a5eabc77d43"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.209425 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zfvk" 
event={"ID":"bece8729-924c-4595-88ee-ddcb1873b643","Type":"ContainerStarted","Data":"ace22579c7186b16a172515ed8079786caab50afc28c30940a594d7b2735f216"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.209477 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zfvk" event={"ID":"bece8729-924c-4595-88ee-ddcb1873b643","Type":"ContainerStarted","Data":"7214511d71ba5e2329e30ea2cff4c44a937af8fac8a9c9aa5940e65f4775b229"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.212517 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bsrgx" event={"ID":"9c13ef47-e6a6-421d-9ff3-7438aa746faf","Type":"ContainerStarted","Data":"c4dd09b53f196d2abb8d62e818b7720d67fd9e61a4f7ad9013e476782426b6ae"} Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.234247 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:12 crc kubenswrapper[5043]: E1125 07:18:12.235420 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:12.735404613 +0000 UTC m=+156.903600334 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.337106 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:12 crc kubenswrapper[5043]: E1125 07:18:12.337378 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:12.837359089 +0000 UTC m=+157.005554820 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.337470 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:12 crc kubenswrapper[5043]: E1125 07:18:12.338337 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:12.838328885 +0000 UTC m=+157.006524606 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.392941 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8" podStartSLOduration=129.392916388 podStartE2EDuration="2m9.392916388s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:12.354647683 +0000 UTC m=+156.522843414" watchObservedRunningTime="2025-11-25 07:18:12.392916388 +0000 UTC m=+156.561112109" Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.394093 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-ptlq5" podStartSLOduration=130.394082688 podStartE2EDuration="2m10.394082688s" podCreationTimestamp="2025-11-25 07:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:12.389832117 +0000 UTC m=+156.558027838" watchObservedRunningTime="2025-11-25 07:18:12.394082688 +0000 UTC m=+156.562278409" Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.433756 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7zfvk" podStartSLOduration=129.433736859 podStartE2EDuration="2m9.433736859s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:12.432284631 +0000 UTC m=+156.600480362" watchObservedRunningTime="2025-11-25 07:18:12.433736859 +0000 UTC m=+156.601932580" Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.438302 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:12 crc kubenswrapper[5043]: E1125 07:18:12.438552 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:12.938538074 +0000 UTC m=+157.106733795 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.483907 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-7c46k" Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.532736 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" podStartSLOduration=130.532719157 podStartE2EDuration="2m10.532719157s" podCreationTimestamp="2025-11-25 07:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:12.532280665 +0000 UTC m=+156.700476396" watchObservedRunningTime="2025-11-25 07:18:12.532719157 +0000 UTC m=+156.700914878" Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.533114 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j7vsb" podStartSLOduration=129.533108267 podStartE2EDuration="2m9.533108267s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:12.473464011 +0000 UTC m=+156.641659732" watchObservedRunningTime="2025-11-25 07:18:12.533108267 +0000 UTC m=+156.701303988" Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.539789 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:12 crc kubenswrapper[5043]: E1125 07:18:12.540415 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:13.040397568 +0000 UTC m=+157.208593289 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.565468 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" podStartSLOduration=129.565451355 podStartE2EDuration="2m9.565451355s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:12.56256416 +0000 UTC m=+156.730759881" watchObservedRunningTime="2025-11-25 07:18:12.565451355 +0000 UTC m=+156.733647077" Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.566783 5043 patch_prober.go:28] interesting pod/router-default-5444994796-6b4s4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Nov 25 07:18:12 crc kubenswrapper[5043]: [-]has-synced failed: reason withheld Nov 25 07:18:12 crc kubenswrapper[5043]: [+]process-running ok Nov 25 07:18:12 crc kubenswrapper[5043]: healthz check failed Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.566825 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6b4s4" podUID="eab6e215-cf16-47d9-9049-9f6a0ed1239a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.644435 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:12 crc kubenswrapper[5043]: E1125 07:18:12.645069 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:13.145050455 +0000 UTC m=+157.313246176 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.665547 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-fhv5q" podStartSLOduration=129.665530063 podStartE2EDuration="2m9.665530063s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:12.607472428 +0000 UTC m=+156.775668149" watchObservedRunningTime="2025-11-25 07:18:12.665530063 +0000 UTC m=+156.833725784" Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.666672 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kkx6" podStartSLOduration=130.666667492 podStartE2EDuration="2m10.666667492s" podCreationTimestamp="2025-11-25 07:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:12.664420253 +0000 UTC m=+156.832616004" watchObservedRunningTime="2025-11-25 07:18:12.666667492 +0000 UTC m=+156.834863213" Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.737984 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-lbz4p" podStartSLOduration=129.737965113 podStartE2EDuration="2m9.737965113s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:12.736988328 +0000 UTC m=+156.905184049" watchObservedRunningTime="2025-11-25 07:18:12.737965113 +0000 UTC m=+156.906160844" Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.746075 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:12 crc kubenswrapper[5043]: E1125 07:18:12.746410 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:13.246398974 +0000 UTC m=+157.414594685 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.847505 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:12 crc kubenswrapper[5043]: E1125 07:18:12.848098 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:13.348083083 +0000 UTC m=+157.516278804 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.849656 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grmsk" podStartSLOduration=129.849636564 podStartE2EDuration="2m9.849636564s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:12.847245121 +0000 UTC m=+157.015440842" watchObservedRunningTime="2025-11-25 07:18:12.849636564 +0000 UTC m=+157.017832305" Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.874744 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-66swt" podStartSLOduration=129.874729783 podStartE2EDuration="2m9.874729783s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:12.871847007 +0000 UTC m=+157.040042728" watchObservedRunningTime="2025-11-25 07:18:12.874729783 +0000 UTC m=+157.042925504" Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.923115 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-rp8sr" podStartSLOduration=6.923099601 podStartE2EDuration="6.923099601s" podCreationTimestamp="2025-11-25 07:18:06 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:12.901629688 +0000 UTC m=+157.069825409" watchObservedRunningTime="2025-11-25 07:18:12.923099601 +0000 UTC m=+157.091295322" Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.949157 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:12 crc kubenswrapper[5043]: E1125 07:18:12.949459 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:13.449446903 +0000 UTC m=+157.617642624 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.968189 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-zd9vl" podStartSLOduration=129.968171785 podStartE2EDuration="2m9.968171785s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:12.966574914 +0000 UTC m=+157.134770635" watchObservedRunningTime="2025-11-25 07:18:12.968171785 +0000 UTC m=+157.136367506" Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.969087 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-l89zw" podStartSLOduration=129.969081579 podStartE2EDuration="2m9.969081579s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:12.927401185 +0000 UTC m=+157.095596906" watchObservedRunningTime="2025-11-25 07:18:12.969081579 +0000 UTC m=+157.137277300" Nov 25 07:18:12 crc kubenswrapper[5043]: I1125 07:18:12.998301 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cw9mz" podStartSLOduration=130.998280745 podStartE2EDuration="2m10.998280745s" podCreationTimestamp="2025-11-25 07:16:02 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:12.996300123 +0000 UTC m=+157.164495844" watchObservedRunningTime="2025-11-25 07:18:12.998280745 +0000 UTC m=+157.166476466" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.053092 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:13 crc kubenswrapper[5043]: E1125 07:18:13.053499 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:13.553481744 +0000 UTC m=+157.721677465 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.055650 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnr9c" podStartSLOduration=130.055635251 podStartE2EDuration="2m10.055635251s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:13.023586319 +0000 UTC m=+157.191782050" watchObservedRunningTime="2025-11-25 07:18:13.055635251 +0000 UTC m=+157.223830972" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.154985 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:13 crc kubenswrapper[5043]: E1125 07:18:13.155275 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:13.655263915 +0000 UTC m=+157.823459636 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.225412 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd5w5" event={"ID":"d25605d1-1ee0-4c4b-a282-9986dace43f5","Type":"ContainerStarted","Data":"01ee84f8ce1f3c6e57d26e8300d2e4da6cc1115832ef2ac8b8d185872365b05d"} Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.228407 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-pwrdz" event={"ID":"47d812b6-dea3-420d-8cc4-68ba78877940","Type":"ContainerStarted","Data":"d260a0b9ee056133616202a17483e32632e9ac7b6552b502bd935349b9f89246"} Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.230957 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-n7xt5" event={"ID":"96a3e8da-2eb3-472d-84f8-819566b5dbcf","Type":"ContainerStarted","Data":"485538d4023a5cbbc36f981eb39949e378aa71ecadf3536fa8db134fef90b259"} Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.231003 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-n7xt5" event={"ID":"96a3e8da-2eb3-472d-84f8-819566b5dbcf","Type":"ContainerStarted","Data":"9fb1e3918acf9cab8e7cfe9ede41b10f65b00d6d211b3b383ac02b7da0062631"} Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.231644 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-n7xt5" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.233117 5043 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9rfgk" event={"ID":"5d6fe950-18fc-440b-ad82-014a34669117","Type":"ContainerStarted","Data":"6f7474f135ab418d996a83bb36c2ae56cc6826f8aba87b5b65992048b33eb138"} Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.234009 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9rfgk" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.235352 5043 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-9rfgk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.235391 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9rfgk" podUID="5d6fe950-18fc-440b-ad82-014a34669117" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.237066 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jxz26" event={"ID":"81f790c4-a6b8-4bb7-8a46-107e7ad04689","Type":"ContainerStarted","Data":"1292b400a87526010c86091ac30c88b979c3a8562d829edff69fa0dc19e0eb9a"} Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.239948 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrn4l" event={"ID":"881d759e-3077-4c93-b9de-86d8d960d3ca","Type":"ContainerStarted","Data":"1031beda72e1ae61aeaf8bb583f4670dd348d9891066821b271f55dac16573a1"} Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.242160 
5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x6762" event={"ID":"677ed5f9-0fae-4009-b50e-ede07073d251","Type":"ContainerStarted","Data":"0fe0fafbfd6a24085ae6030976657a8a0f03b3fbcdb81e8b1e6da30936ea56a0"} Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.242224 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x6762" event={"ID":"677ed5f9-0fae-4009-b50e-ede07073d251","Type":"ContainerStarted","Data":"7a3e54c1db429715e9baadb6e113490fdff4f8903ed9c2788e61b4867868b662"} Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.243666 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-q4h7x" event={"ID":"bf4d2d7a-3f37-4c3c-8895-27aef30652af","Type":"ContainerStarted","Data":"178ecbd8379424edc3abe4e1cf0f00abdfcb8c82fa20daf02043249308400893"} Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.249302 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wz5hw" event={"ID":"dc0571cc-8090-4490-af8a-8199e9f983a9","Type":"ContainerStarted","Data":"8ac7a4c81a8d519b88b81a0715357697bbdc4eb563ee8c108dc2d6c002c5d847"} Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.252942 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8w2c6" event={"ID":"525278dd-e0d0-44bb-ba09-c4e0b82a268f","Type":"ContainerStarted","Data":"f6f099dfd67d86aa6359ee701cbe57d500c8d6aa2686d057766a67eedfc1876e"} Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.253045 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8w2c6" event={"ID":"525278dd-e0d0-44bb-ba09-c4e0b82a268f","Type":"ContainerStarted","Data":"9854473ac1b4ae16392e106959e0bfd83dd5fada929c683ec8db1a86f13c2877"} Nov 25 07:18:13 crc 
kubenswrapper[5043]: I1125 07:18:13.253064 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8w2c6" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.255125 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kglrl" event={"ID":"38747ab5-dbdb-4c02-bb57-1a0f6f35f1b9","Type":"ContainerStarted","Data":"7c8e4234f077513c86db7b9066786c59dcb99e02fd57ac30e85f0f148645d306"} Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.256795 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:13 crc kubenswrapper[5043]: E1125 07:18:13.257237 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:13.757213671 +0000 UTC m=+157.925409392 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.265887 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64rrj" event={"ID":"8a79e2c5-003a-4929-8ba1-568e8ca6bb01","Type":"ContainerStarted","Data":"cb88ffab6d02a969967cb9118122277fb315e0a5e715fe7ddeffa44997992d7e"} Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.266495 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64rrj" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.267399 5043 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-64rrj container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.267444 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64rrj" podUID="8a79e2c5-003a-4929-8ba1-568e8ca6bb01" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.270638 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd5w5" 
podStartSLOduration=130.270622613 podStartE2EDuration="2m10.270622613s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:13.248238776 +0000 UTC m=+157.416434517" watchObservedRunningTime="2025-11-25 07:18:13.270622613 +0000 UTC m=+157.438818344" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.277013 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bx8l" event={"ID":"4685134b-7bce-4398-8492-74670ecda13e","Type":"ContainerStarted","Data":"5d40b74c363eafa7e5c4ec318f71b9bc32e41263898ab6fdd05e9184d7803183"} Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.279918 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gskgn" event={"ID":"79fa92ce-d201-4f86-b3b8-3311def2e2cf","Type":"ContainerStarted","Data":"7164ce70b3ad6b46a60fb210907e63ff56c57b6c22b0e315809110cd75943131"} Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.279953 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gskgn" event={"ID":"79fa92ce-d201-4f86-b3b8-3311def2e2cf","Type":"ContainerStarted","Data":"74952056a04b1b7e866dfaf7ca75ccac3caa25619fdf06c75e9e8845dc09d026"} Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.283173 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lt62z" event={"ID":"c3b4caa8-cc12-4739-8e10-d88cd9d4137d","Type":"ContainerStarted","Data":"62c7e518bcf58b4622dcd8dc6583a19ddeb4a2a5495d032b629b55386fe658d1"} Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.284832 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7mf86" event={"ID":"d1fcc66d-6926-48a3-9473-7539f6e50415","Type":"ContainerStarted","Data":"83515b622e2920b1e97dd650c82dd88bb385fc065a6788dc597eb189190fbd2f"} Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.286582 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bsrgx" event={"ID":"9c13ef47-e6a6-421d-9ff3-7438aa746faf","Type":"ContainerStarted","Data":"4d160385098b52cd7b07486220e463edc5e922ab8aea4b7060b6c8d1835ca72b"} Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.286621 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bsrgx" event={"ID":"9c13ef47-e6a6-421d-9ff3-7438aa746faf","Type":"ContainerStarted","Data":"47dd2b0ce5cef2a76a4c4083d4447f21a9d3f736383d5d06457ccddbe9592b4f"} Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.300887 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hqtnq" event={"ID":"f3a13dff-3c0c-4151-9514-42c40e8bc83f","Type":"ContainerStarted","Data":"caaf42d6054a0a832bfd7a7eb6487a1897df6360659e66a2c4cf16f8cbab3bb6"} Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.301292 5043 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-j7vsb container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.301339 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j7vsb" podUID="e5355a57-e5cd-4f37-8a4a-d416e1584c4c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: 
connect: connection refused" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.302041 5043 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-j7b8d container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.302093 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" podUID="4531175b-96a0-44e6-8cb8-c69bf9eb9d27" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.302335 5043 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-dfsn8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.302365 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8" podUID="e0f07e95-4043-41c1-9f91-b79a6f7b9bbc" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.302593 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hqtnq" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.305295 5043 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hqtnq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.305341 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hqtnq" podUID="f3a13dff-3c0c-4151-9514-42c40e8bc83f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.310089 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-pwrdz" podStartSLOduration=130.310066968 podStartE2EDuration="2m10.310066968s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:13.272511672 +0000 UTC m=+157.440707393" watchObservedRunningTime="2025-11-25 07:18:13.310066968 +0000 UTC m=+157.478262689" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.310829 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9rfgk" podStartSLOduration=130.310822108 podStartE2EDuration="2m10.310822108s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:13.297264302 +0000 UTC m=+157.465460023" watchObservedRunningTime="2025-11-25 07:18:13.310822108 +0000 UTC m=+157.479017829" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.323486 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-q4h7x" podStartSLOduration=7.323468759 podStartE2EDuration="7.323468759s" podCreationTimestamp="2025-11-25 07:18:06 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:13.320819021 +0000 UTC m=+157.489014762" watchObservedRunningTime="2025-11-25 07:18:13.323468759 +0000 UTC m=+157.491664480" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.339854 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x6762" podStartSLOduration=130.339831849 podStartE2EDuration="2m10.339831849s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:13.338371871 +0000 UTC m=+157.506567592" watchObservedRunningTime="2025-11-25 07:18:13.339831849 +0000 UTC m=+157.508027570" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.362837 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:13 crc kubenswrapper[5043]: E1125 07:18:13.398211 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:13.898195141 +0000 UTC m=+158.066390862 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.400180 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-n7xt5" podStartSLOduration=7.400160232 podStartE2EDuration="7.400160232s" podCreationTimestamp="2025-11-25 07:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:13.398845928 +0000 UTC m=+157.567041659" watchObservedRunningTime="2025-11-25 07:18:13.400160232 +0000 UTC m=+157.568355953" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.421315 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-jxz26" podStartSLOduration=130.421294677 podStartE2EDuration="2m10.421294677s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:13.418305219 +0000 UTC m=+157.586500940" watchObservedRunningTime="2025-11-25 07:18:13.421294677 +0000 UTC m=+157.589490398" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.434502 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrn4l" podStartSLOduration=130.434483784 podStartE2EDuration="2m10.434483784s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:13.429899603 +0000 UTC m=+157.598095344" watchObservedRunningTime="2025-11-25 07:18:13.434483784 +0000 UTC m=+157.602679505" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.450702 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9bx8l" podStartSLOduration=130.450683488 podStartE2EDuration="2m10.450683488s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:13.447360622 +0000 UTC m=+157.615556343" watchObservedRunningTime="2025-11-25 07:18:13.450683488 +0000 UTC m=+157.618879209" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.465448 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-wz5hw" podStartSLOduration=130.465430516 podStartE2EDuration="2m10.465430516s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:13.464211144 +0000 UTC m=+157.632406865" watchObservedRunningTime="2025-11-25 07:18:13.465430516 +0000 UTC m=+157.633626237" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.465943 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:13 crc kubenswrapper[5043]: E1125 07:18:13.466375 5043 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:13.966348569 +0000 UTC m=+158.134544290 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.490321 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lt62z" podStartSLOduration=130.490300059 podStartE2EDuration="2m10.490300059s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:13.488096101 +0000 UTC m=+157.656291822" watchObservedRunningTime="2025-11-25 07:18:13.490300059 +0000 UTC m=+157.658495780" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.520270 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-kglrl" podStartSLOduration=130.520250984 podStartE2EDuration="2m10.520250984s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:13.518650583 +0000 UTC m=+157.686846324" watchObservedRunningTime="2025-11-25 07:18:13.520250984 +0000 UTC m=+157.688446695" Nov 25 07:18:13 crc 
kubenswrapper[5043]: I1125 07:18:13.562565 5043 patch_prober.go:28] interesting pod/router-default-5444994796-6b4s4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 07:18:13 crc kubenswrapper[5043]: [-]has-synced failed: reason withheld Nov 25 07:18:13 crc kubenswrapper[5043]: [+]process-running ok Nov 25 07:18:13 crc kubenswrapper[5043]: healthz check failed Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.562634 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6b4s4" podUID="eab6e215-cf16-47d9-9049-9f6a0ed1239a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.567416 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:13 crc kubenswrapper[5043]: E1125 07:18:13.567858 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:14.067840174 +0000 UTC m=+158.236035955 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.569806 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7mf86" podStartSLOduration=130.569790284 podStartE2EDuration="2m10.569790284s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:13.568309656 +0000 UTC m=+157.736505377" watchObservedRunningTime="2025-11-25 07:18:13.569790284 +0000 UTC m=+157.737986005" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.571358 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64rrj" podStartSLOduration=130.571347645 podStartE2EDuration="2m10.571347645s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:13.546172165 +0000 UTC m=+157.714367896" watchObservedRunningTime="2025-11-25 07:18:13.571347645 +0000 UTC m=+157.739543386" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.593090 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gskgn" podStartSLOduration=130.593074106 podStartE2EDuration="2m10.593074106s" podCreationTimestamp="2025-11-25 07:16:03 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:13.592217433 +0000 UTC m=+157.760413174" watchObservedRunningTime="2025-11-25 07:18:13.593074106 +0000 UTC m=+157.761269827" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.630594 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bsrgx" podStartSLOduration=130.63057545 podStartE2EDuration="2m10.63057545s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:13.63019085 +0000 UTC m=+157.798386571" watchObservedRunningTime="2025-11-25 07:18:13.63057545 +0000 UTC m=+157.798771171" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.658049 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.658093 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.659206 5043 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-mbwfq container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.659274 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" podUID="f60df734-c1b3-4b19-9655-5d64097787f7" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" Nov 25 
07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.668160 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:13 crc kubenswrapper[5043]: E1125 07:18:13.668284 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:14.168262529 +0000 UTC m=+158.336458250 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.668413 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:13 crc kubenswrapper[5043]: E1125 07:18:13.668733 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-25 07:18:14.168721921 +0000 UTC m=+158.336917642 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.686724 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8w2c6" podStartSLOduration=130.686706673 podStartE2EDuration="2m10.686706673s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:13.684965178 +0000 UTC m=+157.853160899" watchObservedRunningTime="2025-11-25 07:18:13.686706673 +0000 UTC m=+157.854902394" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.711970 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hqtnq" podStartSLOduration=130.711953936 podStartE2EDuration="2m10.711953936s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:13.708912506 +0000 UTC m=+157.877108247" watchObservedRunningTime="2025-11-25 07:18:13.711953936 +0000 UTC m=+157.880149657" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.716421 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:13 crc 
kubenswrapper[5043]: I1125 07:18:13.716467 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.717411 5043 patch_prober.go:28] interesting pod/apiserver-76f77b778f-jxz26 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.29:8443/livez\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.717457 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-jxz26" podUID="81f790c4-a6b8-4bb7-8a46-107e7ad04689" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.29:8443/livez\": dial tcp 10.217.0.29:8443: connect: connection refused" Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.769346 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:13 crc kubenswrapper[5043]: E1125 07:18:13.769587 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:14.269555968 +0000 UTC m=+158.437751689 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.770115 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:13 crc kubenswrapper[5043]: E1125 07:18:13.770480 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:14.270469452 +0000 UTC m=+158.438665173 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.870818 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:13 crc kubenswrapper[5043]: E1125 07:18:13.871169 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:14.371155524 +0000 UTC m=+158.539351245 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:13 crc kubenswrapper[5043]: I1125 07:18:13.972713 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:13 crc kubenswrapper[5043]: E1125 07:18:13.973040 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:14.473027267 +0000 UTC m=+158.641222988 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:14 crc kubenswrapper[5043]: I1125 07:18:14.073592 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:14 crc kubenswrapper[5043]: E1125 07:18:14.073921 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:14.573906785 +0000 UTC m=+158.742102506 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:14 crc kubenswrapper[5043]: I1125 07:18:14.175116 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:14 crc kubenswrapper[5043]: E1125 07:18:14.175431 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:14.675418579 +0000 UTC m=+158.843614300 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:14 crc kubenswrapper[5043]: I1125 07:18:14.276561 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:14 crc kubenswrapper[5043]: E1125 07:18:14.276903 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:14.776873522 +0000 UTC m=+158.945069253 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:14 crc kubenswrapper[5043]: I1125 07:18:14.305466 5043 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-9rfgk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Nov 25 07:18:14 crc kubenswrapper[5043]: I1125 07:18:14.305522 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9rfgk" podUID="5d6fe950-18fc-440b-ad82-014a34669117" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Nov 25 07:18:14 crc kubenswrapper[5043]: I1125 07:18:14.305799 5043 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-64rrj container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Nov 25 07:18:14 crc kubenswrapper[5043]: I1125 07:18:14.305856 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64rrj" podUID="8a79e2c5-003a-4929-8ba1-568e8ca6bb01" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Nov 25 07:18:14 crc 
kubenswrapper[5043]: I1125 07:18:14.305886 5043 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hqtnq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Nov 25 07:18:14 crc kubenswrapper[5043]: I1125 07:18:14.305932 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hqtnq" podUID="f3a13dff-3c0c-4151-9514-42c40e8bc83f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Nov 25 07:18:14 crc kubenswrapper[5043]: I1125 07:18:14.378628 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:14 crc kubenswrapper[5043]: E1125 07:18:14.379205 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:14.879159336 +0000 UTC m=+159.047355057 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:14 crc kubenswrapper[5043]: I1125 07:18:14.480242 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:14 crc kubenswrapper[5043]: E1125 07:18:14.480668 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:14.98065249 +0000 UTC m=+159.148848211 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:14 crc kubenswrapper[5043]: I1125 07:18:14.566038 5043 patch_prober.go:28] interesting pod/router-default-5444994796-6b4s4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 07:18:14 crc kubenswrapper[5043]: [-]has-synced failed: reason withheld Nov 25 07:18:14 crc kubenswrapper[5043]: [+]process-running ok Nov 25 07:18:14 crc kubenswrapper[5043]: healthz check failed Nov 25 07:18:14 crc kubenswrapper[5043]: I1125 07:18:14.566112 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6b4s4" podUID="eab6e215-cf16-47d9-9049-9f6a0ed1239a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 07:18:14 crc kubenswrapper[5043]: I1125 07:18:14.581434 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:14 crc kubenswrapper[5043]: E1125 07:18:14.581755 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-25 07:18:15.081740803 +0000 UTC m=+159.249936524 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:14 crc kubenswrapper[5043]: I1125 07:18:14.682760 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:14 crc kubenswrapper[5043]: E1125 07:18:14.683096 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:15.183076482 +0000 UTC m=+159.351272203 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:14 crc kubenswrapper[5043]: I1125 07:18:14.683644 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:14 crc kubenswrapper[5043]: E1125 07:18:14.684214 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:15.184205972 +0000 UTC m=+159.352401693 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:14 crc kubenswrapper[5043]: I1125 07:18:14.784401 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:14 crc kubenswrapper[5043]: E1125 07:18:14.784802 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:15.284780892 +0000 UTC m=+159.452976613 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:14 crc kubenswrapper[5043]: I1125 07:18:14.880779 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-66swt" Nov 25 07:18:14 crc kubenswrapper[5043]: I1125 07:18:14.885513 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:14 crc kubenswrapper[5043]: E1125 07:18:14.885848 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:15.385835914 +0000 UTC m=+159.554031635 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:14 crc kubenswrapper[5043]: I1125 07:18:14.986621 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:14 crc kubenswrapper[5043]: E1125 07:18:14.986725 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:15.486702881 +0000 UTC m=+159.654898612 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:14 crc kubenswrapper[5043]: I1125 07:18:14.986884 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:14 crc kubenswrapper[5043]: E1125 07:18:14.987212 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:15.487201805 +0000 UTC m=+159.655397526 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:15 crc kubenswrapper[5043]: I1125 07:18:15.089043 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:15 crc kubenswrapper[5043]: E1125 07:18:15.089189 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:15.589168611 +0000 UTC m=+159.757364332 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:15 crc kubenswrapper[5043]: I1125 07:18:15.089294 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:15 crc kubenswrapper[5043]: E1125 07:18:15.089708 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:15.589698075 +0000 UTC m=+159.757893806 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:15 crc kubenswrapper[5043]: I1125 07:18:15.190729 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:15 crc kubenswrapper[5043]: E1125 07:18:15.191109 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:15.691089955 +0000 UTC m=+159.859285676 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:15 crc kubenswrapper[5043]: I1125 07:18:15.292325 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:15 crc kubenswrapper[5043]: E1125 07:18:15.292636 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:15.792595 +0000 UTC m=+159.960790721 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:15 crc kubenswrapper[5043]: I1125 07:18:15.310199 5043 generic.go:334] "Generic (PLEG): container finished" podID="576eeef9-fcf9-4db0-a0cc-4083e03277f6" containerID="e51fddbfd0a9b8cb2d63a12d137aceda2471ab2b859c1b5387bf24f9a2713be1" exitCode=0 Nov 25 07:18:15 crc kubenswrapper[5043]: I1125 07:18:15.310253 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-zd9vl" event={"ID":"576eeef9-fcf9-4db0-a0cc-4083e03277f6","Type":"ContainerDied","Data":"e51fddbfd0a9b8cb2d63a12d137aceda2471ab2b859c1b5387bf24f9a2713be1"} Nov 25 07:18:15 crc kubenswrapper[5043]: I1125 07:18:15.312500 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b6j49" event={"ID":"5e696527-7a38-49ea-8517-9f286a7daff0","Type":"ContainerStarted","Data":"ec9f5824b4e8a22e6a55591b79df07d77838f95da7b0ba021278319c40be909f"} Nov 25 07:18:15 crc kubenswrapper[5043]: I1125 07:18:15.314075 5043 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hqtnq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Nov 25 07:18:15 crc kubenswrapper[5043]: I1125 07:18:15.314141 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hqtnq" podUID="f3a13dff-3c0c-4151-9514-42c40e8bc83f" containerName="marketplace-operator" 
probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Nov 25 07:18:15 crc kubenswrapper[5043]: I1125 07:18:15.393858 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:15 crc kubenswrapper[5043]: E1125 07:18:15.394054 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:15.894028641 +0000 UTC m=+160.062224452 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:15 crc kubenswrapper[5043]: I1125 07:18:15.395651 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:15 crc kubenswrapper[5043]: E1125 07:18:15.396161 5043 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:15.896149897 +0000 UTC m=+160.064345628 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:15 crc kubenswrapper[5043]: I1125 07:18:15.497599 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:15 crc kubenswrapper[5043]: E1125 07:18:15.497844 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:15.997814975 +0000 UTC m=+160.166010706 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:15 crc kubenswrapper[5043]: I1125 07:18:15.497949 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:15 crc kubenswrapper[5043]: E1125 07:18:15.498240 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:15.998228117 +0000 UTC m=+160.166423838 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:15 crc kubenswrapper[5043]: I1125 07:18:15.561515 5043 patch_prober.go:28] interesting pod/router-default-5444994796-6b4s4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 07:18:15 crc kubenswrapper[5043]: [-]has-synced failed: reason withheld Nov 25 07:18:15 crc kubenswrapper[5043]: [+]process-running ok Nov 25 07:18:15 crc kubenswrapper[5043]: healthz check failed Nov 25 07:18:15 crc kubenswrapper[5043]: I1125 07:18:15.561581 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6b4s4" podUID="eab6e215-cf16-47d9-9049-9f6a0ed1239a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 07:18:15 crc kubenswrapper[5043]: I1125 07:18:15.599311 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:15 crc kubenswrapper[5043]: E1125 07:18:15.599482 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 07:18:16.099450263 +0000 UTC m=+160.267645984 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:15 crc kubenswrapper[5043]: I1125 07:18:15.599514 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:15 crc kubenswrapper[5043]: E1125 07:18:15.599917 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:16.099905425 +0000 UTC m=+160.268101146 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:15 crc kubenswrapper[5043]: I1125 07:18:15.682931 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9rfgk" Nov 25 07:18:15 crc kubenswrapper[5043]: I1125 07:18:15.701071 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:15 crc kubenswrapper[5043]: E1125 07:18:15.701269 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:16.201239245 +0000 UTC m=+160.369434976 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:15 crc kubenswrapper[5043]: I1125 07:18:15.701414 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:15 crc kubenswrapper[5043]: E1125 07:18:15.701741 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:16.201727927 +0000 UTC m=+160.369923648 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:15 crc kubenswrapper[5043]: I1125 07:18:15.802249 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:15 crc kubenswrapper[5043]: E1125 07:18:15.802540 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:16.302524663 +0000 UTC m=+160.470720374 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:15 crc kubenswrapper[5043]: I1125 07:18:15.903807 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:15 crc kubenswrapper[5043]: E1125 07:18:15.904206 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:16.404187221 +0000 UTC m=+160.572382942 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.005032 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:16 crc kubenswrapper[5043]: E1125 07:18:16.005251 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:16.505223563 +0000 UTC m=+160.673419294 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.005342 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:16 crc kubenswrapper[5043]: E1125 07:18:16.005754 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:16.505743146 +0000 UTC m=+160.673938867 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.106640 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:16 crc kubenswrapper[5043]: E1125 07:18:16.106835 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:16.606799298 +0000 UTC m=+160.774995019 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.106925 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:16 crc kubenswrapper[5043]: E1125 07:18:16.107352 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:16.607335293 +0000 UTC m=+160.775531084 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.207677 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:16 crc kubenswrapper[5043]: E1125 07:18:16.208139 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:16.708119298 +0000 UTC m=+160.876315019 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.283374 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9dxfx"] Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.284794 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9dxfx" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.286982 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.309569 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:16 crc kubenswrapper[5043]: E1125 07:18:16.310025 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:16.810005811 +0000 UTC m=+160.978201532 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.310560 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9dxfx"] Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.416468 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.416762 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gpb9\" (UniqueName: \"kubernetes.io/projected/3032faa6-654a-4e8f-b494-061c7de9688a-kube-api-access-2gpb9\") pod \"community-operators-9dxfx\" (UID: \"3032faa6-654a-4e8f-b494-061c7de9688a\") " pod="openshift-marketplace/community-operators-9dxfx" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.416813 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3032faa6-654a-4e8f-b494-061c7de9688a-catalog-content\") pod \"community-operators-9dxfx\" (UID: \"3032faa6-654a-4e8f-b494-061c7de9688a\") " pod="openshift-marketplace/community-operators-9dxfx" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.416831 5043 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3032faa6-654a-4e8f-b494-061c7de9688a-utilities\") pod \"community-operators-9dxfx\" (UID: \"3032faa6-654a-4e8f-b494-061c7de9688a\") " pod="openshift-marketplace/community-operators-9dxfx" Nov 25 07:18:16 crc kubenswrapper[5043]: E1125 07:18:16.417130 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:16.917115363 +0000 UTC m=+161.085311084 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.475560 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m6pf7"] Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.476516 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m6pf7" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.482873 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.519257 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3032faa6-654a-4e8f-b494-061c7de9688a-catalog-content\") pod \"community-operators-9dxfx\" (UID: \"3032faa6-654a-4e8f-b494-061c7de9688a\") " pod="openshift-marketplace/community-operators-9dxfx" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.519293 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3032faa6-654a-4e8f-b494-061c7de9688a-utilities\") pod \"community-operators-9dxfx\" (UID: \"3032faa6-654a-4e8f-b494-061c7de9688a\") " pod="openshift-marketplace/community-operators-9dxfx" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.519352 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.519380 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gpb9\" (UniqueName: \"kubernetes.io/projected/3032faa6-654a-4e8f-b494-061c7de9688a-kube-api-access-2gpb9\") pod \"community-operators-9dxfx\" (UID: \"3032faa6-654a-4e8f-b494-061c7de9688a\") " pod="openshift-marketplace/community-operators-9dxfx" Nov 25 07:18:16 crc kubenswrapper[5043]: E1125 07:18:16.519863 5043 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:17.019852429 +0000 UTC m=+161.188048150 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.520083 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3032faa6-654a-4e8f-b494-061c7de9688a-utilities\") pod \"community-operators-9dxfx\" (UID: \"3032faa6-654a-4e8f-b494-061c7de9688a\") " pod="openshift-marketplace/community-operators-9dxfx" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.520307 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3032faa6-654a-4e8f-b494-061c7de9688a-catalog-content\") pod \"community-operators-9dxfx\" (UID: \"3032faa6-654a-4e8f-b494-061c7de9688a\") " pod="openshift-marketplace/community-operators-9dxfx" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.562241 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m6pf7"] Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.573293 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.574310 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.581268 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.581785 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.584342 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gpb9\" (UniqueName: \"kubernetes.io/projected/3032faa6-654a-4e8f-b494-061c7de9688a-kube-api-access-2gpb9\") pod \"community-operators-9dxfx\" (UID: \"3032faa6-654a-4e8f-b494-061c7de9688a\") " pod="openshift-marketplace/community-operators-9dxfx" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.591800 5043 patch_prober.go:28] interesting pod/router-default-5444994796-6b4s4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 07:18:16 crc kubenswrapper[5043]: [-]has-synced failed: reason withheld Nov 25 07:18:16 crc kubenswrapper[5043]: [+]process-running ok Nov 25 07:18:16 crc kubenswrapper[5043]: healthz check failed Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.591865 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6b4s4" podUID="eab6e215-cf16-47d9-9049-9f6a0ed1239a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.600587 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.602100 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9dxfx" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.621161 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.621375 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b23210f-b31d-486b-9fe0-25c8b2ed2645-utilities\") pod \"certified-operators-m6pf7\" (UID: \"3b23210f-b31d-486b-9fe0-25c8b2ed2645\") " pod="openshift-marketplace/certified-operators-m6pf7" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.621407 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5vnq\" (UniqueName: \"kubernetes.io/projected/3b23210f-b31d-486b-9fe0-25c8b2ed2645-kube-api-access-c5vnq\") pod \"certified-operators-m6pf7\" (UID: \"3b23210f-b31d-486b-9fe0-25c8b2ed2645\") " pod="openshift-marketplace/certified-operators-m6pf7" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.621451 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b23210f-b31d-486b-9fe0-25c8b2ed2645-catalog-content\") pod \"certified-operators-m6pf7\" (UID: \"3b23210f-b31d-486b-9fe0-25c8b2ed2645\") " pod="openshift-marketplace/certified-operators-m6pf7" Nov 25 07:18:16 crc kubenswrapper[5043]: E1125 07:18:16.621548 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2025-11-25 07:18:17.121534307 +0000 UTC m=+161.289730018 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.710512 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nbpdq"] Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.711362 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nbpdq" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.722287 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b23210f-b31d-486b-9fe0-25c8b2ed2645-utilities\") pod \"certified-operators-m6pf7\" (UID: \"3b23210f-b31d-486b-9fe0-25c8b2ed2645\") " pod="openshift-marketplace/certified-operators-m6pf7" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.722335 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7246505a-06f1-4053-a44c-f0e3770ec755-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7246505a-06f1-4053-a44c-f0e3770ec755\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.722381 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5vnq\" (UniqueName: 
\"kubernetes.io/projected/3b23210f-b31d-486b-9fe0-25c8b2ed2645-kube-api-access-c5vnq\") pod \"certified-operators-m6pf7\" (UID: \"3b23210f-b31d-486b-9fe0-25c8b2ed2645\") " pod="openshift-marketplace/certified-operators-m6pf7" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.722406 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7246505a-06f1-4053-a44c-f0e3770ec755-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7246505a-06f1-4053-a44c-f0e3770ec755\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.722466 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.722492 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b23210f-b31d-486b-9fe0-25c8b2ed2645-catalog-content\") pod \"certified-operators-m6pf7\" (UID: \"3b23210f-b31d-486b-9fe0-25c8b2ed2645\") " pod="openshift-marketplace/certified-operators-m6pf7" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.723395 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b23210f-b31d-486b-9fe0-25c8b2ed2645-catalog-content\") pod \"certified-operators-m6pf7\" (UID: \"3b23210f-b31d-486b-9fe0-25c8b2ed2645\") " pod="openshift-marketplace/certified-operators-m6pf7" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.723708 5043 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b23210f-b31d-486b-9fe0-25c8b2ed2645-utilities\") pod \"certified-operators-m6pf7\" (UID: \"3b23210f-b31d-486b-9fe0-25c8b2ed2645\") " pod="openshift-marketplace/certified-operators-m6pf7" Nov 25 07:18:16 crc kubenswrapper[5043]: E1125 07:18:16.724460 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:17.224446818 +0000 UTC m=+161.392642549 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.728872 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nbpdq"] Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.761623 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5vnq\" (UniqueName: \"kubernetes.io/projected/3b23210f-b31d-486b-9fe0-25c8b2ed2645-kube-api-access-c5vnq\") pod \"certified-operators-m6pf7\" (UID: \"3b23210f-b31d-486b-9fe0-25c8b2ed2645\") " pod="openshift-marketplace/certified-operators-m6pf7" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.798083 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m6pf7" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.824463 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.824712 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51d29444-594f-4078-a3bc-9fc83f17e4cf-utilities\") pod \"community-operators-nbpdq\" (UID: \"51d29444-594f-4078-a3bc-9fc83f17e4cf\") " pod="openshift-marketplace/community-operators-nbpdq" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.824752 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7246505a-06f1-4053-a44c-f0e3770ec755-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7246505a-06f1-4053-a44c-f0e3770ec755\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.824860 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87ntg\" (UniqueName: \"kubernetes.io/projected/51d29444-594f-4078-a3bc-9fc83f17e4cf-kube-api-access-87ntg\") pod \"community-operators-nbpdq\" (UID: \"51d29444-594f-4078-a3bc-9fc83f17e4cf\") " pod="openshift-marketplace/community-operators-nbpdq" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.824901 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7246505a-06f1-4053-a44c-f0e3770ec755-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"7246505a-06f1-4053-a44c-f0e3770ec755\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.824924 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51d29444-594f-4078-a3bc-9fc83f17e4cf-catalog-content\") pod \"community-operators-nbpdq\" (UID: \"51d29444-594f-4078-a3bc-9fc83f17e4cf\") " pod="openshift-marketplace/community-operators-nbpdq" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.825024 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7246505a-06f1-4053-a44c-f0e3770ec755-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7246505a-06f1-4053-a44c-f0e3770ec755\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 07:18:16 crc kubenswrapper[5043]: E1125 07:18:16.825155 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:17.325130751 +0000 UTC m=+161.493326472 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.852072 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-zd9vl" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.871846 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7246505a-06f1-4053-a44c-f0e3770ec755-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7246505a-06f1-4053-a44c-f0e3770ec755\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.881838 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xzgnv"] Nov 25 07:18:16 crc kubenswrapper[5043]: E1125 07:18:16.882027 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="576eeef9-fcf9-4db0-a0cc-4083e03277f6" containerName="collect-profiles" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.882039 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="576eeef9-fcf9-4db0-a0cc-4083e03277f6" containerName="collect-profiles" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.882132 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="576eeef9-fcf9-4db0-a0cc-4083e03277f6" containerName="collect-profiles" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.882769 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xzgnv" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.922904 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xzgnv"] Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.925985 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/576eeef9-fcf9-4db0-a0cc-4083e03277f6-secret-volume\") pod \"576eeef9-fcf9-4db0-a0cc-4083e03277f6\" (UID: \"576eeef9-fcf9-4db0-a0cc-4083e03277f6\") " Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.926007 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.926040 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78z6p\" (UniqueName: \"kubernetes.io/projected/576eeef9-fcf9-4db0-a0cc-4083e03277f6-kube-api-access-78z6p\") pod \"576eeef9-fcf9-4db0-a0cc-4083e03277f6\" (UID: \"576eeef9-fcf9-4db0-a0cc-4083e03277f6\") " Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.926258 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/576eeef9-fcf9-4db0-a0cc-4083e03277f6-config-volume\") pod \"576eeef9-fcf9-4db0-a0cc-4083e03277f6\" (UID: \"576eeef9-fcf9-4db0-a0cc-4083e03277f6\") " Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.926393 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/184c9204-08a1-4df5-9451-6902efca14e2-utilities\") pod \"certified-operators-xzgnv\" (UID: \"184c9204-08a1-4df5-9451-6902efca14e2\") " pod="openshift-marketplace/certified-operators-xzgnv" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.926440 5043 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87ntg\" (UniqueName: \"kubernetes.io/projected/51d29444-594f-4078-a3bc-9fc83f17e4cf-kube-api-access-87ntg\") pod \"community-operators-nbpdq\" (UID: \"51d29444-594f-4078-a3bc-9fc83f17e4cf\") " pod="openshift-marketplace/community-operators-nbpdq" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.926462 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/184c9204-08a1-4df5-9451-6902efca14e2-catalog-content\") pod \"certified-operators-xzgnv\" (UID: \"184c9204-08a1-4df5-9451-6902efca14e2\") " pod="openshift-marketplace/certified-operators-xzgnv" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.926483 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51d29444-594f-4078-a3bc-9fc83f17e4cf-catalog-content\") pod \"community-operators-nbpdq\" (UID: \"51d29444-594f-4078-a3bc-9fc83f17e4cf\") " pod="openshift-marketplace/community-operators-nbpdq" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.926511 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51d29444-594f-4078-a3bc-9fc83f17e4cf-utilities\") pod \"community-operators-nbpdq\" (UID: \"51d29444-594f-4078-a3bc-9fc83f17e4cf\") " pod="openshift-marketplace/community-operators-nbpdq" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.926531 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clf68\" (UniqueName: \"kubernetes.io/projected/184c9204-08a1-4df5-9451-6902efca14e2-kube-api-access-clf68\") pod \"certified-operators-xzgnv\" (UID: \"184c9204-08a1-4df5-9451-6902efca14e2\") " pod="openshift-marketplace/certified-operators-xzgnv" Nov 25 07:18:16 crc 
kubenswrapper[5043]: I1125 07:18:16.926573 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:16 crc kubenswrapper[5043]: E1125 07:18:16.926853 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:17.42684214 +0000 UTC m=+161.595037861 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.927703 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/576eeef9-fcf9-4db0-a0cc-4083e03277f6-config-volume" (OuterVolumeSpecName: "config-volume") pod "576eeef9-fcf9-4db0-a0cc-4083e03277f6" (UID: "576eeef9-fcf9-4db0-a0cc-4083e03277f6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.928045 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51d29444-594f-4078-a3bc-9fc83f17e4cf-utilities\") pod \"community-operators-nbpdq\" (UID: \"51d29444-594f-4078-a3bc-9fc83f17e4cf\") " pod="openshift-marketplace/community-operators-nbpdq" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.928256 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51d29444-594f-4078-a3bc-9fc83f17e4cf-catalog-content\") pod \"community-operators-nbpdq\" (UID: \"51d29444-594f-4078-a3bc-9fc83f17e4cf\") " pod="openshift-marketplace/community-operators-nbpdq" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.930337 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/576eeef9-fcf9-4db0-a0cc-4083e03277f6-kube-api-access-78z6p" (OuterVolumeSpecName: "kube-api-access-78z6p") pod "576eeef9-fcf9-4db0-a0cc-4083e03277f6" (UID: "576eeef9-fcf9-4db0-a0cc-4083e03277f6"). InnerVolumeSpecName "kube-api-access-78z6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.936293 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/576eeef9-fcf9-4db0-a0cc-4083e03277f6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "576eeef9-fcf9-4db0-a0cc-4083e03277f6" (UID: "576eeef9-fcf9-4db0-a0cc-4083e03277f6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:18:16 crc kubenswrapper[5043]: I1125 07:18:16.963470 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87ntg\" (UniqueName: \"kubernetes.io/projected/51d29444-594f-4078-a3bc-9fc83f17e4cf-kube-api-access-87ntg\") pod \"community-operators-nbpdq\" (UID: \"51d29444-594f-4078-a3bc-9fc83f17e4cf\") " pod="openshift-marketplace/community-operators-nbpdq" Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.027583 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.028069 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/184c9204-08a1-4df5-9451-6902efca14e2-utilities\") pod \"certified-operators-xzgnv\" (UID: \"184c9204-08a1-4df5-9451-6902efca14e2\") " pod="openshift-marketplace/certified-operators-xzgnv" Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.028135 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/184c9204-08a1-4df5-9451-6902efca14e2-catalog-content\") pod \"certified-operators-xzgnv\" (UID: \"184c9204-08a1-4df5-9451-6902efca14e2\") " pod="openshift-marketplace/certified-operators-xzgnv" Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.028190 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clf68\" (UniqueName: \"kubernetes.io/projected/184c9204-08a1-4df5-9451-6902efca14e2-kube-api-access-clf68\") pod \"certified-operators-xzgnv\" (UID: \"184c9204-08a1-4df5-9451-6902efca14e2\") " 
pod="openshift-marketplace/certified-operators-xzgnv" Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.028271 5043 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/576eeef9-fcf9-4db0-a0cc-4083e03277f6-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.028286 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78z6p\" (UniqueName: \"kubernetes.io/projected/576eeef9-fcf9-4db0-a0cc-4083e03277f6-kube-api-access-78z6p\") on node \"crc\" DevicePath \"\"" Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.028299 5043 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/576eeef9-fcf9-4db0-a0cc-4083e03277f6-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 07:18:17 crc kubenswrapper[5043]: E1125 07:18:17.028718 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:17.528700154 +0000 UTC m=+161.696895885 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.029111 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/184c9204-08a1-4df5-9451-6902efca14e2-utilities\") pod \"certified-operators-xzgnv\" (UID: \"184c9204-08a1-4df5-9451-6902efca14e2\") " pod="openshift-marketplace/certified-operators-xzgnv" Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.029369 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/184c9204-08a1-4df5-9451-6902efca14e2-catalog-content\") pod \"certified-operators-xzgnv\" (UID: \"184c9204-08a1-4df5-9451-6902efca14e2\") " pod="openshift-marketplace/certified-operators-xzgnv" Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.058402 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nbpdq" Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.069726 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clf68\" (UniqueName: \"kubernetes.io/projected/184c9204-08a1-4df5-9451-6902efca14e2-kube-api-access-clf68\") pod \"certified-operators-xzgnv\" (UID: \"184c9204-08a1-4df5-9451-6902efca14e2\") " pod="openshift-marketplace/certified-operators-xzgnv" Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.075775 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9dxfx"] Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.130549 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:17 crc kubenswrapper[5043]: E1125 07:18:17.130912 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:17.630897276 +0000 UTC m=+161.799093007 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.232100 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:17 crc kubenswrapper[5043]: E1125 07:18:17.234868 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:17.734840394 +0000 UTC m=+161.903036115 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.251678 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xzgnv" Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.261085 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m6pf7"] Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.276301 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.276347 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:18:17 crc kubenswrapper[5043]: W1125 07:18:17.301144 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b23210f_b31d_486b_9fe0_25c8b2ed2645.slice/crio-b91d02d25de14442482eb86d6116d24cee0e08bb2b6caa4e6f2f5c3bcf1f669e WatchSource:0}: Error finding container b91d02d25de14442482eb86d6116d24cee0e08bb2b6caa4e6f2f5c3bcf1f669e: Status 404 returned error can't find the container with id b91d02d25de14442482eb86d6116d24cee0e08bb2b6caa4e6f2f5c3bcf1f669e Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.342156 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 
07:18:17 crc kubenswrapper[5043]: E1125 07:18:17.342479 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:17.842467769 +0000 UTC m=+162.010663490 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.351229 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-zd9vl" event={"ID":"576eeef9-fcf9-4db0-a0cc-4083e03277f6","Type":"ContainerDied","Data":"abd66b60581dccf7228eb6f9163fb433f4567f0aee3dbc04306395a516355e08"} Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.351253 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abd66b60581dccf7228eb6f9163fb433f4567f0aee3dbc04306395a516355e08" Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.351324 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400915-zd9vl" Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.356733 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b6j49" event={"ID":"5e696527-7a38-49ea-8517-9f286a7daff0","Type":"ContainerStarted","Data":"53c0744979803819470213177ace11bb1109a8af77e9e5c6c4b7f6ff85979ca3"} Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.359567 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dxfx" event={"ID":"3032faa6-654a-4e8f-b494-061c7de9688a","Type":"ContainerStarted","Data":"7534ff3cb3f76382d27c163483ea5127b44edaf9bb8a07f428c1b25462ff30c1"} Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.360421 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6pf7" event={"ID":"3b23210f-b31d-486b-9fe0-25c8b2ed2645","Type":"ContainerStarted","Data":"b91d02d25de14442482eb86d6116d24cee0e08bb2b6caa4e6f2f5c3bcf1f669e"} Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.444135 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:17 crc kubenswrapper[5043]: E1125 07:18:17.444457 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:17.944442925 +0000 UTC m=+162.112638636 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.481335 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.545278 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:17 crc kubenswrapper[5043]: E1125 07:18:17.545633 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:18.045620661 +0000 UTC m=+162.213816382 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.563043 5043 patch_prober.go:28] interesting pod/router-default-5444994796-6b4s4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 07:18:17 crc kubenswrapper[5043]: [-]has-synced failed: reason withheld Nov 25 07:18:17 crc kubenswrapper[5043]: [+]process-running ok Nov 25 07:18:17 crc kubenswrapper[5043]: healthz check failed Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.563087 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6b4s4" podUID="eab6e215-cf16-47d9-9049-9f6a0ed1239a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.570637 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nbpdq"] Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.646766 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:17 crc kubenswrapper[5043]: E1125 07:18:17.647219 5043 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:18.147199117 +0000 UTC m=+162.315394848 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.665414 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xzgnv"] Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.748752 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:17 crc kubenswrapper[5043]: E1125 07:18:17.749159 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:18.249146612 +0000 UTC m=+162.417342333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.849845 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:17 crc kubenswrapper[5043]: E1125 07:18:17.850113 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:18.350098221 +0000 UTC m=+162.518293942 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.875412 5043 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 25 07:18:17 crc kubenswrapper[5043]: I1125 07:18:17.950969 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:17 crc kubenswrapper[5043]: E1125 07:18:17.951360 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:18.451341569 +0000 UTC m=+162.619537290 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.054164 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:18 crc kubenswrapper[5043]: E1125 07:18:18.054462 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 07:18:18.554444664 +0000 UTC m=+162.722640385 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.156015 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:18 crc kubenswrapper[5043]: E1125 07:18:18.156416 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 07:18:18.65639882 +0000 UTC m=+162.824594541 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jj8v" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.168563 5043 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-25T07:18:17.875742505Z","Handler":null,"Name":""} Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.176866 5043 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.176908 5043 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.256753 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.260627 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: 
"8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.358741 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.369481 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xzgnv" event={"ID":"184c9204-08a1-4df5-9451-6902efca14e2","Type":"ContainerStarted","Data":"4acdd111cfa89817c8aaf4301d97cb6b78e9c548162632d0644b8cee7f02d61e"} Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.369545 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xzgnv" event={"ID":"184c9204-08a1-4df5-9451-6902efca14e2","Type":"ContainerStarted","Data":"1bf40cb87f4f94973bf795cc73eae4c4fba529bca39e7ec8f3d7f17c20a950f0"} Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.371588 5043 generic.go:334] "Generic (PLEG): container finished" podID="51d29444-594f-4078-a3bc-9fc83f17e4cf" containerID="dd0c32e41eef6d9abdbd3a448a70948a36bc6fd30884bb43ade1e37e6d813eba" exitCode=0 Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.371672 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbpdq" event={"ID":"51d29444-594f-4078-a3bc-9fc83f17e4cf","Type":"ContainerDied","Data":"dd0c32e41eef6d9abdbd3a448a70948a36bc6fd30884bb43ade1e37e6d813eba"} Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.371694 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-nbpdq" event={"ID":"51d29444-594f-4078-a3bc-9fc83f17e4cf","Type":"ContainerStarted","Data":"d7a9cc2cd3b278637da510db28fe04cff03270e13db6ec25f1f36e16498944a5"} Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.373854 5043 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.374014 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b6j49" event={"ID":"5e696527-7a38-49ea-8517-9f286a7daff0","Type":"ContainerStarted","Data":"26151eebe02af5f9ba7805db35c0aed42d21a47da2e33fd0c10a49f3e73dfe1d"} Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.374053 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b6j49" event={"ID":"5e696527-7a38-49ea-8517-9f286a7daff0","Type":"ContainerStarted","Data":"17cdd195453435081a9134d139995efbe5aa6580f6ea5342716c8a540308c50e"} Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.377424 5043 generic.go:334] "Generic (PLEG): container finished" podID="3032faa6-654a-4e8f-b494-061c7de9688a" containerID="63004c517244ba6424564ac790e1e5c6a5b5073187e863e639380213cbadddf0" exitCode=0 Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.377506 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dxfx" event={"ID":"3032faa6-654a-4e8f-b494-061c7de9688a","Type":"ContainerDied","Data":"63004c517244ba6424564ac790e1e5c6a5b5073187e863e639380213cbadddf0"} Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.379887 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7246505a-06f1-4053-a44c-f0e3770ec755","Type":"ContainerStarted","Data":"f09e4e7726fae4fc6eb9c75799b2bf87ef79b6ef8fc8d6a770e4e9de232fdb2f"} Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.379928 5043 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7246505a-06f1-4053-a44c-f0e3770ec755","Type":"ContainerStarted","Data":"27a8e63b78db56843269c0075bcbd87d895365ba86483d510f38f379f6ef6927"} Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.382178 5043 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.382228 5043 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.385334 5043 generic.go:334] "Generic (PLEG): container finished" podID="3b23210f-b31d-486b-9fe0-25c8b2ed2645" containerID="53e8c8b663a0acc09cf5e854230794cdfe9e37acf99791d918e983a38ed33662" exitCode=0 Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.385390 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6pf7" event={"ID":"3b23210f-b31d-486b-9fe0-25c8b2ed2645","Type":"ContainerDied","Data":"53e8c8b663a0acc09cf5e854230794cdfe9e37acf99791d918e983a38ed33662"} Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.428666 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jj8v\" (UID: 
\"56d9ce8c-65f4-4482-860f-a7009c96e356\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.465404 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-56hks"] Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.466379 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-56hks" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.467534 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.473770 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-56hks"] Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.509405 5043 patch_prober.go:28] interesting pod/downloads-7954f5f757-4q5v5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.509802 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4q5v5" podUID="ed1bbbdd-aa02-4472-867f-ef6f2c991728" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.509419 5043 patch_prober.go:28] interesting pod/downloads-7954f5f757-4q5v5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.509884 5043 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-4q5v5" podUID="ed1bbbdd-aa02-4472-867f-ef6f2c991728" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.557958 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-6b4s4" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.561460 5043 patch_prober.go:28] interesting pod/router-default-5444994796-6b4s4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 07:18:18 crc kubenswrapper[5043]: [-]has-synced failed: reason withheld Nov 25 07:18:18 crc kubenswrapper[5043]: [+]process-running ok Nov 25 07:18:18 crc kubenswrapper[5043]: healthz check failed Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.561511 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6b4s4" podUID="eab6e215-cf16-47d9-9049-9f6a0ed1239a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.561584 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f-utilities\") pod \"redhat-marketplace-56hks\" (UID: \"ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f\") " pod="openshift-marketplace/redhat-marketplace-56hks" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.561648 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f-catalog-content\") pod \"redhat-marketplace-56hks\" (UID: 
\"ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f\") " pod="openshift-marketplace/redhat-marketplace-56hks" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.561725 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs9l7\" (UniqueName: \"kubernetes.io/projected/ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f-kube-api-access-cs9l7\") pod \"redhat-marketplace-56hks\" (UID: \"ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f\") " pod="openshift-marketplace/redhat-marketplace-56hks" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.663485 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.664454 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f-utilities\") pod \"redhat-marketplace-56hks\" (UID: \"ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f\") " pod="openshift-marketplace/redhat-marketplace-56hks" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.664553 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f-catalog-content\") pod \"redhat-marketplace-56hks\" (UID: \"ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f\") " pod="openshift-marketplace/redhat-marketplace-56hks" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.664663 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs9l7\" (UniqueName: \"kubernetes.io/projected/ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f-kube-api-access-cs9l7\") pod \"redhat-marketplace-56hks\" (UID: \"ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f\") " pod="openshift-marketplace/redhat-marketplace-56hks" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.665984 5043 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f-catalog-content\") pod \"redhat-marketplace-56hks\" (UID: \"ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f\") " pod="openshift-marketplace/redhat-marketplace-56hks" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.666045 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f-utilities\") pod \"redhat-marketplace-56hks\" (UID: \"ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f\") " pod="openshift-marketplace/redhat-marketplace-56hks" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.675922 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mbwfq" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.696922 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs9l7\" (UniqueName: \"kubernetes.io/projected/ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f-kube-api-access-cs9l7\") pod \"redhat-marketplace-56hks\" (UID: \"ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f\") " pod="openshift-marketplace/redhat-marketplace-56hks" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.725955 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.729231 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.739980 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-jxz26" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.802175 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-56hks" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.898689 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-88657"] Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.902434 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-88657" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.922289 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-88657"] Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.973376 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2d09ac1-0cde-4f72-894c-07d74837ef3c-utilities\") pod \"redhat-marketplace-88657\" (UID: \"e2d09ac1-0cde-4f72-894c-07d74837ef3c\") " pod="openshift-marketplace/redhat-marketplace-88657" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.973492 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2d09ac1-0cde-4f72-894c-07d74837ef3c-catalog-content\") pod \"redhat-marketplace-88657\" (UID: \"e2d09ac1-0cde-4f72-894c-07d74837ef3c\") " pod="openshift-marketplace/redhat-marketplace-88657" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.973542 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj2td\" (UniqueName: \"kubernetes.io/projected/e2d09ac1-0cde-4f72-894c-07d74837ef3c-kube-api-access-sj2td\") pod \"redhat-marketplace-88657\" (UID: \"e2d09ac1-0cde-4f72-894c-07d74837ef3c\") " pod="openshift-marketplace/redhat-marketplace-88657" Nov 25 07:18:18 crc kubenswrapper[5043]: I1125 07:18:18.977708 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.037619 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9jj8v"] Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.046354 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.057212 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.074707 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2d09ac1-0cde-4f72-894c-07d74837ef3c-utilities\") pod \"redhat-marketplace-88657\" (UID: \"e2d09ac1-0cde-4f72-894c-07d74837ef3c\") " pod="openshift-marketplace/redhat-marketplace-88657" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.074749 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2d09ac1-0cde-4f72-894c-07d74837ef3c-catalog-content\") pod \"redhat-marketplace-88657\" (UID: \"e2d09ac1-0cde-4f72-894c-07d74837ef3c\") " pod="openshift-marketplace/redhat-marketplace-88657" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.074796 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj2td\" (UniqueName: \"kubernetes.io/projected/e2d09ac1-0cde-4f72-894c-07d74837ef3c-kube-api-access-sj2td\") pod \"redhat-marketplace-88657\" (UID: \"e2d09ac1-0cde-4f72-894c-07d74837ef3c\") " pod="openshift-marketplace/redhat-marketplace-88657" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.076448 5043 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2d09ac1-0cde-4f72-894c-07d74837ef3c-catalog-content\") pod \"redhat-marketplace-88657\" (UID: \"e2d09ac1-0cde-4f72-894c-07d74837ef3c\") " pod="openshift-marketplace/redhat-marketplace-88657" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.076443 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2d09ac1-0cde-4f72-894c-07d74837ef3c-utilities\") pod \"redhat-marketplace-88657\" (UID: \"e2d09ac1-0cde-4f72-894c-07d74837ef3c\") " pod="openshift-marketplace/redhat-marketplace-88657" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.095538 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj2td\" (UniqueName: \"kubernetes.io/projected/e2d09ac1-0cde-4f72-894c-07d74837ef3c-kube-api-access-sj2td\") pod \"redhat-marketplace-88657\" (UID: \"e2d09ac1-0cde-4f72-894c-07d74837ef3c\") " pod="openshift-marketplace/redhat-marketplace-88657" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.238304 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-88657" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.291219 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-56hks"] Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.357334 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.365407 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-lbz4p" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.365459 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-lbz4p" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.366624 5043 patch_prober.go:28] interesting pod/console-f9d7485db-lbz4p container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.366688 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-lbz4p" podUID="b18ece39-f2f5-41f9-b2e1-79f9f880791b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.391263 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.393376 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56hks" 
event={"ID":"ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f","Type":"ContainerStarted","Data":"5ea05f5b068e9f65d354819aa43ef0d54d8f15c8ccf3a46ec1536088f0f8a981"} Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.394591 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" event={"ID":"56d9ce8c-65f4-4482-860f-a7009c96e356","Type":"ContainerStarted","Data":"653f9fefc7466b200299197a43f58438b80638f8df660e63ad01d3b69975be90"} Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.401648 5043 generic.go:334] "Generic (PLEG): container finished" podID="184c9204-08a1-4df5-9451-6902efca14e2" containerID="4acdd111cfa89817c8aaf4301d97cb6b78e9c548162632d0644b8cee7f02d61e" exitCode=0 Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.402656 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xzgnv" event={"ID":"184c9204-08a1-4df5-9451-6902efca14e2","Type":"ContainerDied","Data":"4acdd111cfa89817c8aaf4301d97cb6b78e9c548162632d0644b8cee7f02d61e"} Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.453837 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.453817941 podStartE2EDuration="3.453817941s" podCreationTimestamp="2025-11-25 07:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:19.433720473 +0000 UTC m=+163.601916194" watchObservedRunningTime="2025-11-25 07:18:19.453817941 +0000 UTC m=+163.622013662" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.455700 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-b6j49" podStartSLOduration=13.45568503 podStartE2EDuration="13.45568503s" podCreationTimestamp="2025-11-25 07:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:19.452000694 +0000 UTC m=+163.620196415" watchObservedRunningTime="2025-11-25 07:18:19.45568503 +0000 UTC m=+163.623880751" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.471928 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-88657"] Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.475062 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r9cgm"] Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.478378 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r9cgm" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.481885 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.486914 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r9cgm"] Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.563843 5043 patch_prober.go:28] interesting pod/router-default-5444994796-6b4s4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 07:18:19 crc kubenswrapper[5043]: [-]has-synced failed: reason withheld Nov 25 07:18:19 crc kubenswrapper[5043]: [+]process-running ok Nov 25 07:18:19 crc kubenswrapper[5043]: healthz check failed Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.563918 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6b4s4" podUID="eab6e215-cf16-47d9-9049-9f6a0ed1239a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 
07:18:19.581135 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-665kk\" (UniqueName: \"kubernetes.io/projected/bc2101f3-91d7-43e8-b118-7966b9633c1e-kube-api-access-665kk\") pod \"redhat-operators-r9cgm\" (UID: \"bc2101f3-91d7-43e8-b118-7966b9633c1e\") " pod="openshift-marketplace/redhat-operators-r9cgm" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.581200 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc2101f3-91d7-43e8-b118-7966b9633c1e-utilities\") pod \"redhat-operators-r9cgm\" (UID: \"bc2101f3-91d7-43e8-b118-7966b9633c1e\") " pod="openshift-marketplace/redhat-operators-r9cgm" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.581246 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc2101f3-91d7-43e8-b118-7966b9633c1e-catalog-content\") pod \"redhat-operators-r9cgm\" (UID: \"bc2101f3-91d7-43e8-b118-7966b9633c1e\") " pod="openshift-marketplace/redhat-operators-r9cgm" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.597817 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-64rrj" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.600304 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j7vsb" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.607401 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hqtnq" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.682371 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-665kk\" (UniqueName: 
\"kubernetes.io/projected/bc2101f3-91d7-43e8-b118-7966b9633c1e-kube-api-access-665kk\") pod \"redhat-operators-r9cgm\" (UID: \"bc2101f3-91d7-43e8-b118-7966b9633c1e\") " pod="openshift-marketplace/redhat-operators-r9cgm" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.682520 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc2101f3-91d7-43e8-b118-7966b9633c1e-utilities\") pod \"redhat-operators-r9cgm\" (UID: \"bc2101f3-91d7-43e8-b118-7966b9633c1e\") " pod="openshift-marketplace/redhat-operators-r9cgm" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.682593 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc2101f3-91d7-43e8-b118-7966b9633c1e-catalog-content\") pod \"redhat-operators-r9cgm\" (UID: \"bc2101f3-91d7-43e8-b118-7966b9633c1e\") " pod="openshift-marketplace/redhat-operators-r9cgm" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.683185 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc2101f3-91d7-43e8-b118-7966b9633c1e-catalog-content\") pod \"redhat-operators-r9cgm\" (UID: \"bc2101f3-91d7-43e8-b118-7966b9633c1e\") " pod="openshift-marketplace/redhat-operators-r9cgm" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.683541 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc2101f3-91d7-43e8-b118-7966b9633c1e-utilities\") pod \"redhat-operators-r9cgm\" (UID: \"bc2101f3-91d7-43e8-b118-7966b9633c1e\") " pod="openshift-marketplace/redhat-operators-r9cgm" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.716219 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-665kk\" (UniqueName: 
\"kubernetes.io/projected/bc2101f3-91d7-43e8-b118-7966b9633c1e-kube-api-access-665kk\") pod \"redhat-operators-r9cgm\" (UID: \"bc2101f3-91d7-43e8-b118-7966b9633c1e\") " pod="openshift-marketplace/redhat-operators-r9cgm" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.800767 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r9cgm" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.867391 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nww2p"] Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.868596 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nww2p" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.880171 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nww2p"] Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.887299 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31041d64-91fd-40b6-a970-8a0ec1fa7aff-catalog-content\") pod \"redhat-operators-nww2p\" (UID: \"31041d64-91fd-40b6-a970-8a0ec1fa7aff\") " pod="openshift-marketplace/redhat-operators-nww2p" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.887379 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31041d64-91fd-40b6-a970-8a0ec1fa7aff-utilities\") pod \"redhat-operators-nww2p\" (UID: \"31041d64-91fd-40b6-a970-8a0ec1fa7aff\") " pod="openshift-marketplace/redhat-operators-nww2p" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.887419 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdsw2\" (UniqueName: 
\"kubernetes.io/projected/31041d64-91fd-40b6-a970-8a0ec1fa7aff-kube-api-access-qdsw2\") pod \"redhat-operators-nww2p\" (UID: \"31041d64-91fd-40b6-a970-8a0ec1fa7aff\") " pod="openshift-marketplace/redhat-operators-nww2p" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.989701 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31041d64-91fd-40b6-a970-8a0ec1fa7aff-catalog-content\") pod \"redhat-operators-nww2p\" (UID: \"31041d64-91fd-40b6-a970-8a0ec1fa7aff\") " pod="openshift-marketplace/redhat-operators-nww2p" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.990074 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31041d64-91fd-40b6-a970-8a0ec1fa7aff-utilities\") pod \"redhat-operators-nww2p\" (UID: \"31041d64-91fd-40b6-a970-8a0ec1fa7aff\") " pod="openshift-marketplace/redhat-operators-nww2p" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.990121 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdsw2\" (UniqueName: \"kubernetes.io/projected/31041d64-91fd-40b6-a970-8a0ec1fa7aff-kube-api-access-qdsw2\") pod \"redhat-operators-nww2p\" (UID: \"31041d64-91fd-40b6-a970-8a0ec1fa7aff\") " pod="openshift-marketplace/redhat-operators-nww2p" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.990779 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31041d64-91fd-40b6-a970-8a0ec1fa7aff-catalog-content\") pod \"redhat-operators-nww2p\" (UID: \"31041d64-91fd-40b6-a970-8a0ec1fa7aff\") " pod="openshift-marketplace/redhat-operators-nww2p" Nov 25 07:18:19 crc kubenswrapper[5043]: I1125 07:18:19.991007 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/31041d64-91fd-40b6-a970-8a0ec1fa7aff-utilities\") pod \"redhat-operators-nww2p\" (UID: \"31041d64-91fd-40b6-a970-8a0ec1fa7aff\") " pod="openshift-marketplace/redhat-operators-nww2p" Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.013287 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdsw2\" (UniqueName: \"kubernetes.io/projected/31041d64-91fd-40b6-a970-8a0ec1fa7aff-kube-api-access-qdsw2\") pod \"redhat-operators-nww2p\" (UID: \"31041d64-91fd-40b6-a970-8a0ec1fa7aff\") " pod="openshift-marketplace/redhat-operators-nww2p" Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.034434 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r9cgm"] Nov 25 07:18:20 crc kubenswrapper[5043]: W1125 07:18:20.041465 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc2101f3_91d7_43e8_b118_7966b9633c1e.slice/crio-489ab6daedeed1c774541255b417debe0ca1c0d6e68be3244a1dc9aeb76abebc WatchSource:0}: Error finding container 489ab6daedeed1c774541255b417debe0ca1c0d6e68be3244a1dc9aeb76abebc: Status 404 returned error can't find the container with id 489ab6daedeed1c774541255b417debe0ca1c0d6e68be3244a1dc9aeb76abebc Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.214132 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nww2p" Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.414622 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nww2p"] Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.417573 5043 generic.go:334] "Generic (PLEG): container finished" podID="ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f" containerID="783730da296c84abfe826e0292c226b327278a5191f6851bef956297e4970a0c" exitCode=0 Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.417652 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56hks" event={"ID":"ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f","Type":"ContainerDied","Data":"783730da296c84abfe826e0292c226b327278a5191f6851bef956297e4970a0c"} Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.423279 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" event={"ID":"56d9ce8c-65f4-4482-860f-a7009c96e356","Type":"ContainerStarted","Data":"70ce26b185999d12a68450b62d1f21bb2dff3ffd001f976d7916d026d7899360"} Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.423444 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.424708 5043 generic.go:334] "Generic (PLEG): container finished" podID="bc2101f3-91d7-43e8-b118-7966b9633c1e" containerID="c6f2b105b8bff9db0b9df22b029967d977683bc2a72bbdba17911089d9dc7169" exitCode=0 Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.424750 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9cgm" event={"ID":"bc2101f3-91d7-43e8-b118-7966b9633c1e","Type":"ContainerDied","Data":"c6f2b105b8bff9db0b9df22b029967d977683bc2a72bbdba17911089d9dc7169"} Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.424770 5043 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9cgm" event={"ID":"bc2101f3-91d7-43e8-b118-7966b9633c1e","Type":"ContainerStarted","Data":"489ab6daedeed1c774541255b417debe0ca1c0d6e68be3244a1dc9aeb76abebc"} Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.459806 5043 generic.go:334] "Generic (PLEG): container finished" podID="7246505a-06f1-4053-a44c-f0e3770ec755" containerID="f09e4e7726fae4fc6eb9c75799b2bf87ef79b6ef8fc8d6a770e4e9de232fdb2f" exitCode=0 Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.459922 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7246505a-06f1-4053-a44c-f0e3770ec755","Type":"ContainerDied","Data":"f09e4e7726fae4fc6eb9c75799b2bf87ef79b6ef8fc8d6a770e4e9de232fdb2f"} Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.468351 5043 generic.go:334] "Generic (PLEG): container finished" podID="e2d09ac1-0cde-4f72-894c-07d74837ef3c" containerID="e72fb1c1b061d23388227a2995cfd6624c8deac65c8f4da3b03534165fb53403" exitCode=0 Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.468399 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88657" event={"ID":"e2d09ac1-0cde-4f72-894c-07d74837ef3c","Type":"ContainerDied","Data":"e72fb1c1b061d23388227a2995cfd6624c8deac65c8f4da3b03534165fb53403"} Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.468429 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88657" event={"ID":"e2d09ac1-0cde-4f72-894c-07d74837ef3c","Type":"ContainerStarted","Data":"b8d05512a2dbb56966d55afee01ed1dcee7fdd1a60cfd6508dca20e48cfa267d"} Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.482149 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" podStartSLOduration=137.482128909 
podStartE2EDuration="2m17.482128909s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:20.478192435 +0000 UTC m=+164.646388176" watchObservedRunningTime="2025-11-25 07:18:20.482128909 +0000 UTC m=+164.650324640" Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.560983 5043 patch_prober.go:28] interesting pod/router-default-5444994796-6b4s4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 07:18:20 crc kubenswrapper[5043]: [-]has-synced failed: reason withheld Nov 25 07:18:20 crc kubenswrapper[5043]: [+]process-running ok Nov 25 07:18:20 crc kubenswrapper[5043]: healthz check failed Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.561031 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6b4s4" podUID="eab6e215-cf16-47d9-9049-9f6a0ed1239a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.722695 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.726464 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.728953 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.729465 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.748924 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.803405 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9b762ed-9a7c-443f-ba4d-794f57349e1d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c9b762ed-9a7c-443f-ba4d-794f57349e1d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.803469 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9b762ed-9a7c-443f-ba4d-794f57349e1d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c9b762ed-9a7c-443f-ba4d-794f57349e1d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.917271 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9b762ed-9a7c-443f-ba4d-794f57349e1d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c9b762ed-9a7c-443f-ba4d-794f57349e1d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.917724 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/c9b762ed-9a7c-443f-ba4d-794f57349e1d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c9b762ed-9a7c-443f-ba4d-794f57349e1d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.917392 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9b762ed-9a7c-443f-ba4d-794f57349e1d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c9b762ed-9a7c-443f-ba4d-794f57349e1d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 07:18:20 crc kubenswrapper[5043]: I1125 07:18:20.955012 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9b762ed-9a7c-443f-ba4d-794f57349e1d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c9b762ed-9a7c-443f-ba4d-794f57349e1d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 07:18:21 crc kubenswrapper[5043]: I1125 07:18:21.093882 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 07:18:21 crc kubenswrapper[5043]: I1125 07:18:21.466286 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 07:18:21 crc kubenswrapper[5043]: I1125 07:18:21.476752 5043 generic.go:334] "Generic (PLEG): container finished" podID="31041d64-91fd-40b6-a970-8a0ec1fa7aff" containerID="837a0ba578c7ded82b830e998631af21ac6f5a2aa99c898a20a51587d6340ad0" exitCode=0 Nov 25 07:18:21 crc kubenswrapper[5043]: I1125 07:18:21.477556 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nww2p" event={"ID":"31041d64-91fd-40b6-a970-8a0ec1fa7aff","Type":"ContainerDied","Data":"837a0ba578c7ded82b830e998631af21ac6f5a2aa99c898a20a51587d6340ad0"} Nov 25 07:18:21 crc kubenswrapper[5043]: I1125 07:18:21.477593 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nww2p" event={"ID":"31041d64-91fd-40b6-a970-8a0ec1fa7aff","Type":"ContainerStarted","Data":"49d2faa0a8ca834f16625a019428b316a10a182acc49c7d390eb335c6740c34f"} Nov 25 07:18:21 crc kubenswrapper[5043]: W1125 07:18:21.497227 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc9b762ed_9a7c_443f_ba4d_794f57349e1d.slice/crio-7f676c6cb4ae92107879682a6ce23e0e43f7aee10ddd1ae89a4f65c2f7e15207 WatchSource:0}: Error finding container 7f676c6cb4ae92107879682a6ce23e0e43f7aee10ddd1ae89a4f65c2f7e15207: Status 404 returned error can't find the container with id 7f676c6cb4ae92107879682a6ce23e0e43f7aee10ddd1ae89a4f65c2f7e15207 Nov 25 07:18:21 crc kubenswrapper[5043]: I1125 07:18:21.561825 5043 patch_prober.go:28] interesting pod/router-default-5444994796-6b4s4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 07:18:21 crc 
kubenswrapper[5043]: [-]has-synced failed: reason withheld Nov 25 07:18:21 crc kubenswrapper[5043]: [+]process-running ok Nov 25 07:18:21 crc kubenswrapper[5043]: healthz check failed Nov 25 07:18:21 crc kubenswrapper[5043]: I1125 07:18:21.561870 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6b4s4" podUID="eab6e215-cf16-47d9-9049-9f6a0ed1239a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 07:18:21 crc kubenswrapper[5043]: I1125 07:18:21.816992 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 07:18:21 crc kubenswrapper[5043]: I1125 07:18:21.946446 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7246505a-06f1-4053-a44c-f0e3770ec755-kubelet-dir\") pod \"7246505a-06f1-4053-a44c-f0e3770ec755\" (UID: \"7246505a-06f1-4053-a44c-f0e3770ec755\") " Nov 25 07:18:21 crc kubenswrapper[5043]: I1125 07:18:21.946803 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7246505a-06f1-4053-a44c-f0e3770ec755-kube-api-access\") pod \"7246505a-06f1-4053-a44c-f0e3770ec755\" (UID: \"7246505a-06f1-4053-a44c-f0e3770ec755\") " Nov 25 07:18:21 crc kubenswrapper[5043]: I1125 07:18:21.947001 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7246505a-06f1-4053-a44c-f0e3770ec755-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7246505a-06f1-4053-a44c-f0e3770ec755" (UID: "7246505a-06f1-4053-a44c-f0e3770ec755"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:18:21 crc kubenswrapper[5043]: I1125 07:18:21.947590 5043 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7246505a-06f1-4053-a44c-f0e3770ec755-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 07:18:21 crc kubenswrapper[5043]: I1125 07:18:21.969960 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7246505a-06f1-4053-a44c-f0e3770ec755-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7246505a-06f1-4053-a44c-f0e3770ec755" (UID: "7246505a-06f1-4053-a44c-f0e3770ec755"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:18:22 crc kubenswrapper[5043]: I1125 07:18:22.049244 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7246505a-06f1-4053-a44c-f0e3770ec755-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 07:18:22 crc kubenswrapper[5043]: I1125 07:18:22.487142 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7246505a-06f1-4053-a44c-f0e3770ec755","Type":"ContainerDied","Data":"27a8e63b78db56843269c0075bcbd87d895365ba86483d510f38f379f6ef6927"} Nov 25 07:18:22 crc kubenswrapper[5043]: I1125 07:18:22.487189 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27a8e63b78db56843269c0075bcbd87d895365ba86483d510f38f379f6ef6927" Nov 25 07:18:22 crc kubenswrapper[5043]: I1125 07:18:22.487244 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 07:18:22 crc kubenswrapper[5043]: I1125 07:18:22.495869 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c9b762ed-9a7c-443f-ba4d-794f57349e1d","Type":"ContainerStarted","Data":"d8cae3c50b12fd9ec5a7a441c9546da87809879eebe555a693ecf66967cd2061"} Nov 25 07:18:22 crc kubenswrapper[5043]: I1125 07:18:22.495929 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c9b762ed-9a7c-443f-ba4d-794f57349e1d","Type":"ContainerStarted","Data":"7f676c6cb4ae92107879682a6ce23e0e43f7aee10ddd1ae89a4f65c2f7e15207"} Nov 25 07:18:22 crc kubenswrapper[5043]: I1125 07:18:22.517379 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.517362844 podStartE2EDuration="2.517362844s" podCreationTimestamp="2025-11-25 07:18:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:18:22.515169576 +0000 UTC m=+166.683365317" watchObservedRunningTime="2025-11-25 07:18:22.517362844 +0000 UTC m=+166.685558555" Nov 25 07:18:22 crc kubenswrapper[5043]: I1125 07:18:22.560649 5043 patch_prober.go:28] interesting pod/router-default-5444994796-6b4s4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 07:18:22 crc kubenswrapper[5043]: [-]has-synced failed: reason withheld Nov 25 07:18:22 crc kubenswrapper[5043]: [+]process-running ok Nov 25 07:18:22 crc kubenswrapper[5043]: healthz check failed Nov 25 07:18:22 crc kubenswrapper[5043]: I1125 07:18:22.560714 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6b4s4" 
podUID="eab6e215-cf16-47d9-9049-9f6a0ed1239a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 07:18:23 crc kubenswrapper[5043]: I1125 07:18:23.519806 5043 generic.go:334] "Generic (PLEG): container finished" podID="c9b762ed-9a7c-443f-ba4d-794f57349e1d" containerID="d8cae3c50b12fd9ec5a7a441c9546da87809879eebe555a693ecf66967cd2061" exitCode=0 Nov 25 07:18:23 crc kubenswrapper[5043]: I1125 07:18:23.519868 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c9b762ed-9a7c-443f-ba4d-794f57349e1d","Type":"ContainerDied","Data":"d8cae3c50b12fd9ec5a7a441c9546da87809879eebe555a693ecf66967cd2061"} Nov 25 07:18:23 crc kubenswrapper[5043]: I1125 07:18:23.560487 5043 patch_prober.go:28] interesting pod/router-default-5444994796-6b4s4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 07:18:23 crc kubenswrapper[5043]: [-]has-synced failed: reason withheld Nov 25 07:18:23 crc kubenswrapper[5043]: [+]process-running ok Nov 25 07:18:23 crc kubenswrapper[5043]: healthz check failed Nov 25 07:18:23 crc kubenswrapper[5043]: I1125 07:18:23.560547 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6b4s4" podUID="eab6e215-cf16-47d9-9049-9f6a0ed1239a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 07:18:24 crc kubenswrapper[5043]: I1125 07:18:24.560058 5043 patch_prober.go:28] interesting pod/router-default-5444994796-6b4s4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 07:18:24 crc kubenswrapper[5043]: [-]has-synced failed: reason withheld Nov 25 07:18:24 crc kubenswrapper[5043]: [+]process-running ok 
Nov 25 07:18:24 crc kubenswrapper[5043]: healthz check failed Nov 25 07:18:24 crc kubenswrapper[5043]: I1125 07:18:24.560123 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6b4s4" podUID="eab6e215-cf16-47d9-9049-9f6a0ed1239a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 07:18:24 crc kubenswrapper[5043]: I1125 07:18:24.623657 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-n7xt5" Nov 25 07:18:24 crc kubenswrapper[5043]: I1125 07:18:24.874994 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 07:18:24 crc kubenswrapper[5043]: I1125 07:18:24.993793 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9b762ed-9a7c-443f-ba4d-794f57349e1d-kube-api-access\") pod \"c9b762ed-9a7c-443f-ba4d-794f57349e1d\" (UID: \"c9b762ed-9a7c-443f-ba4d-794f57349e1d\") " Nov 25 07:18:24 crc kubenswrapper[5043]: I1125 07:18:24.993847 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9b762ed-9a7c-443f-ba4d-794f57349e1d-kubelet-dir\") pod \"c9b762ed-9a7c-443f-ba4d-794f57349e1d\" (UID: \"c9b762ed-9a7c-443f-ba4d-794f57349e1d\") " Nov 25 07:18:24 crc kubenswrapper[5043]: I1125 07:18:24.994151 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9b762ed-9a7c-443f-ba4d-794f57349e1d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c9b762ed-9a7c-443f-ba4d-794f57349e1d" (UID: "c9b762ed-9a7c-443f-ba4d-794f57349e1d"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:18:25 crc kubenswrapper[5043]: I1125 07:18:25.001822 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b762ed-9a7c-443f-ba4d-794f57349e1d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c9b762ed-9a7c-443f-ba4d-794f57349e1d" (UID: "c9b762ed-9a7c-443f-ba4d-794f57349e1d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:18:25 crc kubenswrapper[5043]: I1125 07:18:25.098385 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9b762ed-9a7c-443f-ba4d-794f57349e1d-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 07:18:25 crc kubenswrapper[5043]: I1125 07:18:25.098431 5043 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9b762ed-9a7c-443f-ba4d-794f57349e1d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 07:18:25 crc kubenswrapper[5043]: I1125 07:18:25.549864 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c9b762ed-9a7c-443f-ba4d-794f57349e1d","Type":"ContainerDied","Data":"7f676c6cb4ae92107879682a6ce23e0e43f7aee10ddd1ae89a4f65c2f7e15207"} Nov 25 07:18:25 crc kubenswrapper[5043]: I1125 07:18:25.549949 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f676c6cb4ae92107879682a6ce23e0e43f7aee10ddd1ae89a4f65c2f7e15207" Nov 25 07:18:25 crc kubenswrapper[5043]: I1125 07:18:25.550004 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 07:18:25 crc kubenswrapper[5043]: I1125 07:18:25.563947 5043 patch_prober.go:28] interesting pod/router-default-5444994796-6b4s4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 07:18:25 crc kubenswrapper[5043]: [-]has-synced failed: reason withheld Nov 25 07:18:25 crc kubenswrapper[5043]: [+]process-running ok Nov 25 07:18:25 crc kubenswrapper[5043]: healthz check failed Nov 25 07:18:25 crc kubenswrapper[5043]: I1125 07:18:25.564078 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6b4s4" podUID="eab6e215-cf16-47d9-9049-9f6a0ed1239a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 07:18:25 crc kubenswrapper[5043]: I1125 07:18:25.706864 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs\") pod \"network-metrics-daemon-xqj4m\" (UID: \"e26eab68-d56e-4c83-9888-0a866e549524\") " pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:18:25 crc kubenswrapper[5043]: I1125 07:18:25.725159 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e26eab68-d56e-4c83-9888-0a866e549524-metrics-certs\") pod \"network-metrics-daemon-xqj4m\" (UID: \"e26eab68-d56e-4c83-9888-0a866e549524\") " pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:18:25 crc kubenswrapper[5043]: I1125 07:18:25.883991 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xqj4m" Nov 25 07:18:26 crc kubenswrapper[5043]: I1125 07:18:26.559869 5043 patch_prober.go:28] interesting pod/router-default-5444994796-6b4s4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 07:18:26 crc kubenswrapper[5043]: [-]has-synced failed: reason withheld Nov 25 07:18:26 crc kubenswrapper[5043]: [+]process-running ok Nov 25 07:18:26 crc kubenswrapper[5043]: healthz check failed Nov 25 07:18:26 crc kubenswrapper[5043]: I1125 07:18:26.559927 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6b4s4" podUID="eab6e215-cf16-47d9-9049-9f6a0ed1239a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 07:18:27 crc kubenswrapper[5043]: I1125 07:18:27.559428 5043 patch_prober.go:28] interesting pod/router-default-5444994796-6b4s4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 07:18:27 crc kubenswrapper[5043]: [-]has-synced failed: reason withheld Nov 25 07:18:27 crc kubenswrapper[5043]: [+]process-running ok Nov 25 07:18:27 crc kubenswrapper[5043]: healthz check failed Nov 25 07:18:27 crc kubenswrapper[5043]: I1125 07:18:27.559494 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6b4s4" podUID="eab6e215-cf16-47d9-9049-9f6a0ed1239a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 07:18:28 crc kubenswrapper[5043]: I1125 07:18:28.509731 5043 patch_prober.go:28] interesting pod/downloads-7954f5f757-4q5v5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": 
dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Nov 25 07:18:28 crc kubenswrapper[5043]: I1125 07:18:28.509783 5043 patch_prober.go:28] interesting pod/downloads-7954f5f757-4q5v5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Nov 25 07:18:28 crc kubenswrapper[5043]: I1125 07:18:28.509819 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4q5v5" podUID="ed1bbbdd-aa02-4472-867f-ef6f2c991728" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Nov 25 07:18:28 crc kubenswrapper[5043]: I1125 07:18:28.509841 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4q5v5" podUID="ed1bbbdd-aa02-4472-867f-ef6f2c991728" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Nov 25 07:18:28 crc kubenswrapper[5043]: I1125 07:18:28.561122 5043 patch_prober.go:28] interesting pod/router-default-5444994796-6b4s4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 07:18:28 crc kubenswrapper[5043]: [-]has-synced failed: reason withheld Nov 25 07:18:28 crc kubenswrapper[5043]: [+]process-running ok Nov 25 07:18:28 crc kubenswrapper[5043]: healthz check failed Nov 25 07:18:28 crc kubenswrapper[5043]: I1125 07:18:28.561193 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6b4s4" podUID="eab6e215-cf16-47d9-9049-9f6a0ed1239a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 07:18:29 crc 
kubenswrapper[5043]: I1125 07:18:29.366457 5043 patch_prober.go:28] interesting pod/console-f9d7485db-lbz4p container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Nov 25 07:18:29 crc kubenswrapper[5043]: I1125 07:18:29.366860 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-lbz4p" podUID="b18ece39-f2f5-41f9-b2e1-79f9f880791b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Nov 25 07:18:29 crc kubenswrapper[5043]: I1125 07:18:29.560753 5043 patch_prober.go:28] interesting pod/router-default-5444994796-6b4s4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 07:18:29 crc kubenswrapper[5043]: [-]has-synced failed: reason withheld Nov 25 07:18:29 crc kubenswrapper[5043]: [+]process-running ok Nov 25 07:18:29 crc kubenswrapper[5043]: healthz check failed Nov 25 07:18:29 crc kubenswrapper[5043]: I1125 07:18:29.560836 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6b4s4" podUID="eab6e215-cf16-47d9-9049-9f6a0ed1239a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 07:18:30 crc kubenswrapper[5043]: I1125 07:18:30.560802 5043 patch_prober.go:28] interesting pod/router-default-5444994796-6b4s4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 07:18:30 crc kubenswrapper[5043]: [+]has-synced ok Nov 25 07:18:30 crc kubenswrapper[5043]: [+]process-running ok Nov 25 07:18:30 crc kubenswrapper[5043]: healthz check failed Nov 25 
07:18:30 crc kubenswrapper[5043]: I1125 07:18:30.560876 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6b4s4" podUID="eab6e215-cf16-47d9-9049-9f6a0ed1239a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 07:18:31 crc kubenswrapper[5043]: I1125 07:18:31.562551 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-6b4s4" Nov 25 07:18:31 crc kubenswrapper[5043]: I1125 07:18:31.565898 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-6b4s4" Nov 25 07:18:35 crc kubenswrapper[5043]: I1125 07:18:35.094869 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 07:18:38 crc kubenswrapper[5043]: I1125 07:18:38.522005 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-4q5v5" Nov 25 07:18:38 crc kubenswrapper[5043]: I1125 07:18:38.736915 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:18:39 crc kubenswrapper[5043]: I1125 07:18:39.889933 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-lbz4p" Nov 25 07:18:39 crc kubenswrapper[5043]: I1125 07:18:39.896767 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-lbz4p" Nov 25 07:18:47 crc kubenswrapper[5043]: I1125 07:18:47.276112 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 25 07:18:47 crc kubenswrapper[5043]: I1125 07:18:47.276436 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:18:49 crc kubenswrapper[5043]: I1125 07:18:49.828667 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8w2c6" Nov 25 07:18:57 crc kubenswrapper[5043]: E1125 07:18:57.882656 5043 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 25 07:18:57 crc kubenswrapper[5043]: E1125 07:18:57.883178 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2gpb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9dxfx_openshift-marketplace(3032faa6-654a-4e8f-b494-061c7de9688a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 07:18:57 crc kubenswrapper[5043]: E1125 07:18:57.884400 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-9dxfx" podUID="3032faa6-654a-4e8f-b494-061c7de9688a" Nov 25 07:19:05 crc 
kubenswrapper[5043]: E1125 07:19:05.912130 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9dxfx" podUID="3032faa6-654a-4e8f-b494-061c7de9688a" Nov 25 07:19:10 crc kubenswrapper[5043]: E1125 07:19:10.458965 5043 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 25 07:19:10 crc kubenswrapper[5043]: E1125 07:19:10.459564 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cs9l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-56hks_openshift-marketplace(ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 07:19:10 crc kubenswrapper[5043]: E1125 07:19:10.460937 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-56hks" podUID="ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f" Nov 25 07:19:17 crc 
kubenswrapper[5043]: I1125 07:19:17.276642 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:19:17 crc kubenswrapper[5043]: I1125 07:19:17.277066 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:19:17 crc kubenswrapper[5043]: I1125 07:19:17.277148 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 07:19:17 crc kubenswrapper[5043]: I1125 07:19:17.278110 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ce73e562772077e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf"} pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 07:19:17 crc kubenswrapper[5043]: I1125 07:19:17.278279 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://ce73e562772077e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf" gracePeriod=600 Nov 25 07:19:19 crc kubenswrapper[5043]: E1125 07:19:19.268355 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-56hks" podUID="ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f" Nov 25 07:19:19 crc kubenswrapper[5043]: E1125 07:19:19.377887 5043 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 25 07:19:19 crc kubenswrapper[5043]: E1125 07:19:19.378280 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sj2td,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolic
y:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-88657_openshift-marketplace(e2d09ac1-0cde-4f72-894c-07d74837ef3c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 07:19:19 crc kubenswrapper[5043]: E1125 07:19:19.379456 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-88657" podUID="e2d09ac1-0cde-4f72-894c-07d74837ef3c" Nov 25 07:19:19 crc kubenswrapper[5043]: E1125 07:19:19.398892 5043 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 25 07:19:19 crc kubenswrapper[5043]: E1125 07:19:19.399275 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-87ntg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-nbpdq_openshift-marketplace(51d29444-594f-4078-a3bc-9fc83f17e4cf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 07:19:19 crc kubenswrapper[5043]: E1125 07:19:19.400485 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-nbpdq" podUID="51d29444-594f-4078-a3bc-9fc83f17e4cf" Nov 25 07:19:19 crc 
kubenswrapper[5043]: I1125 07:19:19.902089 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="ce73e562772077e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf" exitCode=0 Nov 25 07:19:19 crc kubenswrapper[5043]: I1125 07:19:19.902283 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"ce73e562772077e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf"} Nov 25 07:19:21 crc kubenswrapper[5043]: E1125 07:19:21.162318 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-nbpdq" podUID="51d29444-594f-4078-a3bc-9fc83f17e4cf" Nov 25 07:19:21 crc kubenswrapper[5043]: E1125 07:19:21.162339 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-88657" podUID="e2d09ac1-0cde-4f72-894c-07d74837ef3c" Nov 25 07:19:21 crc kubenswrapper[5043]: E1125 07:19:21.326398 5043 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 25 07:19:21 crc kubenswrapper[5043]: E1125 07:19:21.328490 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-clf68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-xzgnv_openshift-marketplace(184c9204-08a1-4df5-9451-6902efca14e2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 07:19:21 crc kubenswrapper[5043]: E1125 07:19:21.329880 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-xzgnv" podUID="184c9204-08a1-4df5-9451-6902efca14e2" 
Nov 25 07:19:21 crc kubenswrapper[5043]: E1125 07:19:21.334286 5043 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 25 07:19:21 crc kubenswrapper[5043]: E1125 07:19:21.334408 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5vnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-m6pf7_openshift-marketplace(3b23210f-b31d-486b-9fe0-25c8b2ed2645): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 07:19:21 crc kubenswrapper[5043]: E1125 07:19:21.335748 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-m6pf7" podUID="3b23210f-b31d-486b-9fe0-25c8b2ed2645" Nov 25 07:19:21 crc kubenswrapper[5043]: I1125 07:19:21.454129 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xqj4m"] Nov 25 07:19:21 crc kubenswrapper[5043]: I1125 07:19:21.914802 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"d7a9fbecb16e1fce85f482605fc100165adf10eff85021444c5351acd6dfb457"} Nov 25 07:19:21 crc kubenswrapper[5043]: I1125 07:19:21.916952 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9cgm" event={"ID":"bc2101f3-91d7-43e8-b118-7966b9633c1e","Type":"ContainerStarted","Data":"2bf45a3bb3fdbe066c2b165d8aab721c4ff936c4bcbc340e8e46c04a2d29272e"} Nov 25 07:19:22 crc kubenswrapper[5043]: E1125 07:19:22.640769 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m6pf7" podUID="3b23210f-b31d-486b-9fe0-25c8b2ed2645" Nov 25 07:19:22 crc kubenswrapper[5043]: W1125 07:19:22.641165 5043 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode26eab68_d56e_4c83_9888_0a866e549524.slice/crio-349c518100be5077d6409f95cc1ebd2b9b9b3971ecf727ad00e60c32c971f076 WatchSource:0}: Error finding container 349c518100be5077d6409f95cc1ebd2b9b9b3971ecf727ad00e60c32c971f076: Status 404 returned error can't find the container with id 349c518100be5077d6409f95cc1ebd2b9b9b3971ecf727ad00e60c32c971f076 Nov 25 07:19:22 crc kubenswrapper[5043]: E1125 07:19:22.641335 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-xzgnv" podUID="184c9204-08a1-4df5-9451-6902efca14e2" Nov 25 07:19:22 crc kubenswrapper[5043]: I1125 07:19:22.922756 5043 generic.go:334] "Generic (PLEG): container finished" podID="bc2101f3-91d7-43e8-b118-7966b9633c1e" containerID="2bf45a3bb3fdbe066c2b165d8aab721c4ff936c4bcbc340e8e46c04a2d29272e" exitCode=0 Nov 25 07:19:22 crc kubenswrapper[5043]: I1125 07:19:22.922826 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9cgm" event={"ID":"bc2101f3-91d7-43e8-b118-7966b9633c1e","Type":"ContainerDied","Data":"2bf45a3bb3fdbe066c2b165d8aab721c4ff936c4bcbc340e8e46c04a2d29272e"} Nov 25 07:19:22 crc kubenswrapper[5043]: I1125 07:19:22.925927 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xqj4m" event={"ID":"e26eab68-d56e-4c83-9888-0a866e549524","Type":"ContainerStarted","Data":"349c518100be5077d6409f95cc1ebd2b9b9b3971ecf727ad00e60c32c971f076"} Nov 25 07:19:23 crc kubenswrapper[5043]: I1125 07:19:23.936708 5043 generic.go:334] "Generic (PLEG): container finished" podID="3032faa6-654a-4e8f-b494-061c7de9688a" containerID="1ee6b489f22484a63182b78def2d8aacbb0e875d365495a9ac91336357f3dd16" exitCode=0 Nov 25 07:19:23 crc kubenswrapper[5043]: 
I1125 07:19:23.936791 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dxfx" event={"ID":"3032faa6-654a-4e8f-b494-061c7de9688a","Type":"ContainerDied","Data":"1ee6b489f22484a63182b78def2d8aacbb0e875d365495a9ac91336357f3dd16"} Nov 25 07:19:23 crc kubenswrapper[5043]: I1125 07:19:23.939395 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xqj4m" event={"ID":"e26eab68-d56e-4c83-9888-0a866e549524","Type":"ContainerStarted","Data":"eb88f92b0803d8b16403027b5fa46c7d7b2c1ca354f98e56f0333bf121c9d025"} Nov 25 07:19:23 crc kubenswrapper[5043]: I1125 07:19:23.941819 5043 generic.go:334] "Generic (PLEG): container finished" podID="31041d64-91fd-40b6-a970-8a0ec1fa7aff" containerID="638189e821b83e04a0e6f2d49d7d71e835fd93722f550347b7ac9009c6752b04" exitCode=0 Nov 25 07:19:23 crc kubenswrapper[5043]: I1125 07:19:23.941849 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nww2p" event={"ID":"31041d64-91fd-40b6-a970-8a0ec1fa7aff","Type":"ContainerDied","Data":"638189e821b83e04a0e6f2d49d7d71e835fd93722f550347b7ac9009c6752b04"} Nov 25 07:19:24 crc kubenswrapper[5043]: I1125 07:19:24.974760 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9cgm" event={"ID":"bc2101f3-91d7-43e8-b118-7966b9633c1e","Type":"ContainerStarted","Data":"efa46bedbe8860d98b41bdb9736ce761065fbdef3c75b96611a66e01ba4a0eea"} Nov 25 07:19:24 crc kubenswrapper[5043]: I1125 07:19:24.976421 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xqj4m" event={"ID":"e26eab68-d56e-4c83-9888-0a866e549524","Type":"ContainerStarted","Data":"0d8e601e470ebac4dcfe3eb3c30a683317bc344817c6f642c7e5e995ed35a9bc"} Nov 25 07:19:24 crc kubenswrapper[5043]: I1125 07:19:24.992254 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-r9cgm" podStartSLOduration=2.08397816 podStartE2EDuration="1m5.992226484s" podCreationTimestamp="2025-11-25 07:18:19 +0000 UTC" firstStartedPulling="2025-11-25 07:18:20.42614657 +0000 UTC m=+164.594342291" lastFinishedPulling="2025-11-25 07:19:24.334394884 +0000 UTC m=+228.502590615" observedRunningTime="2025-11-25 07:19:24.991163885 +0000 UTC m=+229.159359626" watchObservedRunningTime="2025-11-25 07:19:24.992226484 +0000 UTC m=+229.160422285" Nov 25 07:19:25 crc kubenswrapper[5043]: I1125 07:19:25.980078 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dxfx" event={"ID":"3032faa6-654a-4e8f-b494-061c7de9688a","Type":"ContainerStarted","Data":"c6d399424468eadc1b28b72c10e06d574b626b04ce9cb0d6ed556bc0e4848568"} Nov 25 07:19:25 crc kubenswrapper[5043]: I1125 07:19:25.983160 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nww2p" event={"ID":"31041d64-91fd-40b6-a970-8a0ec1fa7aff","Type":"ContainerStarted","Data":"e31a1c853de42ef01e83416d6bd4512d069583a43cb7ac41bd38f00ec7e29dac"} Nov 25 07:19:26 crc kubenswrapper[5043]: I1125 07:19:26.005805 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-xqj4m" podStartSLOduration=203.005779768 podStartE2EDuration="3m23.005779768s" podCreationTimestamp="2025-11-25 07:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:19:25.011200307 +0000 UTC m=+229.179396068" watchObservedRunningTime="2025-11-25 07:19:26.005779768 +0000 UTC m=+230.173975509" Nov 25 07:19:26 crc kubenswrapper[5043]: I1125 07:19:26.009493 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nww2p" podStartSLOduration=2.916831251 podStartE2EDuration="1m7.009478979s" 
podCreationTimestamp="2025-11-25 07:18:19 +0000 UTC" firstStartedPulling="2025-11-25 07:18:21.49722772 +0000 UTC m=+165.665423441" lastFinishedPulling="2025-11-25 07:19:25.589875438 +0000 UTC m=+229.758071169" observedRunningTime="2025-11-25 07:19:26.005002187 +0000 UTC m=+230.173197938" watchObservedRunningTime="2025-11-25 07:19:26.009478979 +0000 UTC m=+230.177674700" Nov 25 07:19:27 crc kubenswrapper[5043]: I1125 07:19:27.020646 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9dxfx" podStartSLOduration=3.657369284 podStartE2EDuration="1m11.020583787s" podCreationTimestamp="2025-11-25 07:18:16 +0000 UTC" firstStartedPulling="2025-11-25 07:18:18.378830978 +0000 UTC m=+162.547026709" lastFinishedPulling="2025-11-25 07:19:25.742045491 +0000 UTC m=+229.910241212" observedRunningTime="2025-11-25 07:19:27.018644344 +0000 UTC m=+231.186840135" watchObservedRunningTime="2025-11-25 07:19:27.020583787 +0000 UTC m=+231.188779548" Nov 25 07:19:29 crc kubenswrapper[5043]: I1125 07:19:29.801330 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r9cgm" Nov 25 07:19:29 crc kubenswrapper[5043]: I1125 07:19:29.801408 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r9cgm" Nov 25 07:19:30 crc kubenswrapper[5043]: I1125 07:19:30.215039 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nww2p" Nov 25 07:19:30 crc kubenswrapper[5043]: I1125 07:19:30.215129 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nww2p" Nov 25 07:19:34 crc kubenswrapper[5043]: I1125 07:19:34.766041 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r9cgm" podUID="bc2101f3-91d7-43e8-b118-7966b9633c1e" containerName="registry-server" 
probeResult="failure" output="command timed out" Nov 25 07:19:35 crc kubenswrapper[5043]: I1125 07:19:35.765806 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nww2p" podUID="31041d64-91fd-40b6-a970-8a0ec1fa7aff" containerName="registry-server" probeResult="failure" output="command timed out" Nov 25 07:19:36 crc kubenswrapper[5043]: I1125 07:19:36.603595 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9dxfx" Nov 25 07:19:36 crc kubenswrapper[5043]: I1125 07:19:36.603977 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9dxfx" Nov 25 07:19:38 crc kubenswrapper[5043]: I1125 07:19:38.631731 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9dxfx" podUID="3032faa6-654a-4e8f-b494-061c7de9688a" containerName="registry-server" probeResult="failure" output=< Nov 25 07:19:38 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s Nov 25 07:19:38 crc kubenswrapper[5043]: > Nov 25 07:19:40 crc kubenswrapper[5043]: I1125 07:19:40.840916 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r9cgm" podUID="bc2101f3-91d7-43e8-b118-7966b9633c1e" containerName="registry-server" probeResult="failure" output=< Nov 25 07:19:40 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s Nov 25 07:19:40 crc kubenswrapper[5043]: > Nov 25 07:19:41 crc kubenswrapper[5043]: I1125 07:19:41.271216 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nww2p" podUID="31041d64-91fd-40b6-a970-8a0ec1fa7aff" containerName="registry-server" probeResult="failure" output=< Nov 25 07:19:41 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s Nov 25 07:19:41 crc kubenswrapper[5043]: > Nov 25 07:19:43 crc 
kubenswrapper[5043]: I1125 07:19:43.089368 5043 generic.go:334] "Generic (PLEG): container finished" podID="ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f" containerID="e58b3b7bfbfb1d670ca7bfc5fe13f6b9c49bde9f2f5161f44c7a23dfeeb40656" exitCode=0 Nov 25 07:19:43 crc kubenswrapper[5043]: I1125 07:19:43.089455 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56hks" event={"ID":"ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f","Type":"ContainerDied","Data":"e58b3b7bfbfb1d670ca7bfc5fe13f6b9c49bde9f2f5161f44c7a23dfeeb40656"} Nov 25 07:19:43 crc kubenswrapper[5043]: I1125 07:19:43.093272 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88657" event={"ID":"e2d09ac1-0cde-4f72-894c-07d74837ef3c","Type":"ContainerStarted","Data":"83f2a00ee855afd1d09ac8c2087d456f97b08fdf792149e3768a1e40640aed33"} Nov 25 07:19:43 crc kubenswrapper[5043]: I1125 07:19:43.095009 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6pf7" event={"ID":"3b23210f-b31d-486b-9fe0-25c8b2ed2645","Type":"ContainerStarted","Data":"67584be40db7ee00ec683c59b9686bff28b8273329bc2868b7b66fba4f817423"} Nov 25 07:19:44 crc kubenswrapper[5043]: I1125 07:19:44.116699 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xzgnv" event={"ID":"184c9204-08a1-4df5-9451-6902efca14e2","Type":"ContainerStarted","Data":"ef097b5fdaeed258cc13a804d89a5b9ecd139556d371d4790e7c68cb9bfc95e2"} Nov 25 07:19:44 crc kubenswrapper[5043]: I1125 07:19:44.119362 5043 generic.go:334] "Generic (PLEG): container finished" podID="51d29444-594f-4078-a3bc-9fc83f17e4cf" containerID="4607679df7578f798e60f808d2aeb19bcb44fdb3622d80985a3e30fddbf367b4" exitCode=0 Nov 25 07:19:44 crc kubenswrapper[5043]: I1125 07:19:44.119451 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbpdq" 
event={"ID":"51d29444-594f-4078-a3bc-9fc83f17e4cf","Type":"ContainerDied","Data":"4607679df7578f798e60f808d2aeb19bcb44fdb3622d80985a3e30fddbf367b4"} Nov 25 07:19:44 crc kubenswrapper[5043]: I1125 07:19:44.133402 5043 generic.go:334] "Generic (PLEG): container finished" podID="e2d09ac1-0cde-4f72-894c-07d74837ef3c" containerID="83f2a00ee855afd1d09ac8c2087d456f97b08fdf792149e3768a1e40640aed33" exitCode=0 Nov 25 07:19:44 crc kubenswrapper[5043]: I1125 07:19:44.133766 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88657" event={"ID":"e2d09ac1-0cde-4f72-894c-07d74837ef3c","Type":"ContainerDied","Data":"83f2a00ee855afd1d09ac8c2087d456f97b08fdf792149e3768a1e40640aed33"} Nov 25 07:19:44 crc kubenswrapper[5043]: I1125 07:19:44.152095 5043 generic.go:334] "Generic (PLEG): container finished" podID="3b23210f-b31d-486b-9fe0-25c8b2ed2645" containerID="67584be40db7ee00ec683c59b9686bff28b8273329bc2868b7b66fba4f817423" exitCode=0 Nov 25 07:19:44 crc kubenswrapper[5043]: I1125 07:19:44.152175 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6pf7" event={"ID":"3b23210f-b31d-486b-9fe0-25c8b2ed2645","Type":"ContainerDied","Data":"67584be40db7ee00ec683c59b9686bff28b8273329bc2868b7b66fba4f817423"} Nov 25 07:19:45 crc kubenswrapper[5043]: I1125 07:19:45.160889 5043 generic.go:334] "Generic (PLEG): container finished" podID="184c9204-08a1-4df5-9451-6902efca14e2" containerID="ef097b5fdaeed258cc13a804d89a5b9ecd139556d371d4790e7c68cb9bfc95e2" exitCode=0 Nov 25 07:19:45 crc kubenswrapper[5043]: I1125 07:19:45.160946 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xzgnv" event={"ID":"184c9204-08a1-4df5-9451-6902efca14e2","Type":"ContainerDied","Data":"ef097b5fdaeed258cc13a804d89a5b9ecd139556d371d4790e7c68cb9bfc95e2"} Nov 25 07:19:46 crc kubenswrapper[5043]: I1125 07:19:46.792426 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-9dxfx" Nov 25 07:19:46 crc kubenswrapper[5043]: I1125 07:19:46.852379 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9dxfx" Nov 25 07:19:48 crc kubenswrapper[5043]: I1125 07:19:48.186861 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbpdq" event={"ID":"51d29444-594f-4078-a3bc-9fc83f17e4cf","Type":"ContainerStarted","Data":"ce682e41ccda075468c062075df1c1efc76c673f31d76f3252b8fa53f0df2f63"} Nov 25 07:19:49 crc kubenswrapper[5043]: I1125 07:19:49.228286 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nbpdq" podStartSLOduration=4.243711866 podStartE2EDuration="1m33.228267548s" podCreationTimestamp="2025-11-25 07:18:16 +0000 UTC" firstStartedPulling="2025-11-25 07:18:18.373563529 +0000 UTC m=+162.541759240" lastFinishedPulling="2025-11-25 07:19:47.358119201 +0000 UTC m=+251.526314922" observedRunningTime="2025-11-25 07:19:49.225006779 +0000 UTC m=+253.393202510" watchObservedRunningTime="2025-11-25 07:19:49.228267548 +0000 UTC m=+253.396463279" Nov 25 07:19:49 crc kubenswrapper[5043]: I1125 07:19:49.835393 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r9cgm" Nov 25 07:19:49 crc kubenswrapper[5043]: I1125 07:19:49.882804 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r9cgm" Nov 25 07:19:50 crc kubenswrapper[5043]: I1125 07:19:50.204016 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6pf7" event={"ID":"3b23210f-b31d-486b-9fe0-25c8b2ed2645","Type":"ContainerStarted","Data":"d81155e7e7f9c6490687001cfcd382ad897e2c211092eb94e34f9f86eda9b854"} Nov 25 07:19:50 crc kubenswrapper[5043]: I1125 07:19:50.226551 5043 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m6pf7" podStartSLOduration=3.274110208 podStartE2EDuration="1m34.226524618s" podCreationTimestamp="2025-11-25 07:18:16 +0000 UTC" firstStartedPulling="2025-11-25 07:18:18.388499382 +0000 UTC m=+162.556695103" lastFinishedPulling="2025-11-25 07:19:49.340913792 +0000 UTC m=+253.509109513" observedRunningTime="2025-11-25 07:19:50.222053638 +0000 UTC m=+254.390249359" watchObservedRunningTime="2025-11-25 07:19:50.226524618 +0000 UTC m=+254.394720379" Nov 25 07:19:50 crc kubenswrapper[5043]: I1125 07:19:50.257175 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nww2p" Nov 25 07:19:50 crc kubenswrapper[5043]: I1125 07:19:50.325776 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nww2p" Nov 25 07:19:52 crc kubenswrapper[5043]: I1125 07:19:52.219106 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88657" event={"ID":"e2d09ac1-0cde-4f72-894c-07d74837ef3c","Type":"ContainerStarted","Data":"b03d95cb2e8dd66f10430ce2f87a4e49c70623986aef7ccf6cd4720cbf27b4a6"} Nov 25 07:19:52 crc kubenswrapper[5043]: I1125 07:19:52.224883 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56hks" event={"ID":"ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f","Type":"ContainerStarted","Data":"42e478b69328ce7dcbf6addd63e741f8d46d100e70f1e84f478f68aa1a574c9f"} Nov 25 07:19:52 crc kubenswrapper[5043]: I1125 07:19:52.228025 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xzgnv" event={"ID":"184c9204-08a1-4df5-9451-6902efca14e2","Type":"ContainerStarted","Data":"b226bb9aedb591fc3597ae44e690199f80246fd471df8b2d68759fa96de61d55"} Nov 25 07:19:52 crc kubenswrapper[5043]: I1125 07:19:52.236879 5043 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-88657" podStartSLOduration=3.430772199 podStartE2EDuration="1m34.236859263s" podCreationTimestamp="2025-11-25 07:18:18 +0000 UTC" firstStartedPulling="2025-11-25 07:18:20.469882148 +0000 UTC m=+164.638077869" lastFinishedPulling="2025-11-25 07:19:51.275969222 +0000 UTC m=+255.444164933" observedRunningTime="2025-11-25 07:19:52.235086315 +0000 UTC m=+256.403282036" watchObservedRunningTime="2025-11-25 07:19:52.236859263 +0000 UTC m=+256.405054984" Nov 25 07:19:52 crc kubenswrapper[5043]: I1125 07:19:52.256840 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xzgnv" podStartSLOduration=4.205923728 podStartE2EDuration="1m36.256823283s" podCreationTimestamp="2025-11-25 07:18:16 +0000 UTC" firstStartedPulling="2025-11-25 07:18:19.403671685 +0000 UTC m=+163.571867396" lastFinishedPulling="2025-11-25 07:19:51.45457124 +0000 UTC m=+255.622766951" observedRunningTime="2025-11-25 07:19:52.252849575 +0000 UTC m=+256.421045306" watchObservedRunningTime="2025-11-25 07:19:52.256823283 +0000 UTC m=+256.425019004" Nov 25 07:19:52 crc kubenswrapper[5043]: I1125 07:19:52.273238 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-56hks" podStartSLOduration=3.262352015 podStartE2EDuration="1m34.273222096s" podCreationTimestamp="2025-11-25 07:18:18 +0000 UTC" firstStartedPulling="2025-11-25 07:18:20.421229061 +0000 UTC m=+164.589424782" lastFinishedPulling="2025-11-25 07:19:51.432099102 +0000 UTC m=+255.600294863" observedRunningTime="2025-11-25 07:19:52.272003953 +0000 UTC m=+256.440199694" watchObservedRunningTime="2025-11-25 07:19:52.273222096 +0000 UTC m=+256.441417817" Nov 25 07:19:52 crc kubenswrapper[5043]: I1125 07:19:52.962243 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-nww2p"] Nov 25 07:19:52 crc kubenswrapper[5043]: I1125 07:19:52.962986 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nww2p" podUID="31041d64-91fd-40b6-a970-8a0ec1fa7aff" containerName="registry-server" containerID="cri-o://e31a1c853de42ef01e83416d6bd4512d069583a43cb7ac41bd38f00ec7e29dac" gracePeriod=2 Nov 25 07:19:53 crc kubenswrapper[5043]: I1125 07:19:53.236839 5043 generic.go:334] "Generic (PLEG): container finished" podID="31041d64-91fd-40b6-a970-8a0ec1fa7aff" containerID="e31a1c853de42ef01e83416d6bd4512d069583a43cb7ac41bd38f00ec7e29dac" exitCode=0 Nov 25 07:19:53 crc kubenswrapper[5043]: I1125 07:19:53.236888 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nww2p" event={"ID":"31041d64-91fd-40b6-a970-8a0ec1fa7aff","Type":"ContainerDied","Data":"e31a1c853de42ef01e83416d6bd4512d069583a43cb7ac41bd38f00ec7e29dac"} Nov 25 07:19:53 crc kubenswrapper[5043]: I1125 07:19:53.284140 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nww2p" Nov 25 07:19:53 crc kubenswrapper[5043]: I1125 07:19:53.397910 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31041d64-91fd-40b6-a970-8a0ec1fa7aff-utilities\") pod \"31041d64-91fd-40b6-a970-8a0ec1fa7aff\" (UID: \"31041d64-91fd-40b6-a970-8a0ec1fa7aff\") " Nov 25 07:19:53 crc kubenswrapper[5043]: I1125 07:19:53.398038 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdsw2\" (UniqueName: \"kubernetes.io/projected/31041d64-91fd-40b6-a970-8a0ec1fa7aff-kube-api-access-qdsw2\") pod \"31041d64-91fd-40b6-a970-8a0ec1fa7aff\" (UID: \"31041d64-91fd-40b6-a970-8a0ec1fa7aff\") " Nov 25 07:19:53 crc kubenswrapper[5043]: I1125 07:19:53.398066 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31041d64-91fd-40b6-a970-8a0ec1fa7aff-catalog-content\") pod \"31041d64-91fd-40b6-a970-8a0ec1fa7aff\" (UID: \"31041d64-91fd-40b6-a970-8a0ec1fa7aff\") " Nov 25 07:19:53 crc kubenswrapper[5043]: I1125 07:19:53.398629 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31041d64-91fd-40b6-a970-8a0ec1fa7aff-utilities" (OuterVolumeSpecName: "utilities") pod "31041d64-91fd-40b6-a970-8a0ec1fa7aff" (UID: "31041d64-91fd-40b6-a970-8a0ec1fa7aff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:19:53 crc kubenswrapper[5043]: I1125 07:19:53.404409 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31041d64-91fd-40b6-a970-8a0ec1fa7aff-kube-api-access-qdsw2" (OuterVolumeSpecName: "kube-api-access-qdsw2") pod "31041d64-91fd-40b6-a970-8a0ec1fa7aff" (UID: "31041d64-91fd-40b6-a970-8a0ec1fa7aff"). InnerVolumeSpecName "kube-api-access-qdsw2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:19:53 crc kubenswrapper[5043]: I1125 07:19:53.479903 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31041d64-91fd-40b6-a970-8a0ec1fa7aff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31041d64-91fd-40b6-a970-8a0ec1fa7aff" (UID: "31041d64-91fd-40b6-a970-8a0ec1fa7aff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:19:53 crc kubenswrapper[5043]: I1125 07:19:53.499207 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31041d64-91fd-40b6-a970-8a0ec1fa7aff-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:19:53 crc kubenswrapper[5043]: I1125 07:19:53.499230 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdsw2\" (UniqueName: \"kubernetes.io/projected/31041d64-91fd-40b6-a970-8a0ec1fa7aff-kube-api-access-qdsw2\") on node \"crc\" DevicePath \"\"" Nov 25 07:19:53 crc kubenswrapper[5043]: I1125 07:19:53.499243 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31041d64-91fd-40b6-a970-8a0ec1fa7aff-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:19:54 crc kubenswrapper[5043]: I1125 07:19:54.257273 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nww2p" event={"ID":"31041d64-91fd-40b6-a970-8a0ec1fa7aff","Type":"ContainerDied","Data":"49d2faa0a8ca834f16625a019428b316a10a182acc49c7d390eb335c6740c34f"} Nov 25 07:19:54 crc kubenswrapper[5043]: I1125 07:19:54.257577 5043 scope.go:117] "RemoveContainer" containerID="e31a1c853de42ef01e83416d6bd4512d069583a43cb7ac41bd38f00ec7e29dac" Nov 25 07:19:54 crc kubenswrapper[5043]: I1125 07:19:54.257718 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nww2p" Nov 25 07:19:54 crc kubenswrapper[5043]: I1125 07:19:54.276574 5043 scope.go:117] "RemoveContainer" containerID="638189e821b83e04a0e6f2d49d7d71e835fd93722f550347b7ac9009c6752b04" Nov 25 07:19:54 crc kubenswrapper[5043]: I1125 07:19:54.288421 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nww2p"] Nov 25 07:19:54 crc kubenswrapper[5043]: I1125 07:19:54.293562 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nww2p"] Nov 25 07:19:54 crc kubenswrapper[5043]: I1125 07:19:54.296653 5043 scope.go:117] "RemoveContainer" containerID="837a0ba578c7ded82b830e998631af21ac6f5a2aa99c898a20a51587d6340ad0" Nov 25 07:19:54 crc kubenswrapper[5043]: I1125 07:19:54.969917 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31041d64-91fd-40b6-a970-8a0ec1fa7aff" path="/var/lib/kubelet/pods/31041d64-91fd-40b6-a970-8a0ec1fa7aff/volumes" Nov 25 07:19:56 crc kubenswrapper[5043]: I1125 07:19:56.799209 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m6pf7" Nov 25 07:19:56 crc kubenswrapper[5043]: I1125 07:19:56.799543 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m6pf7" Nov 25 07:19:56 crc kubenswrapper[5043]: I1125 07:19:56.837689 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m6pf7" Nov 25 07:19:57 crc kubenswrapper[5043]: I1125 07:19:57.060194 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nbpdq" Nov 25 07:19:57 crc kubenswrapper[5043]: I1125 07:19:57.060274 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nbpdq" Nov 25 07:19:57 crc 
kubenswrapper[5043]: I1125 07:19:57.098266 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nbpdq" Nov 25 07:19:57 crc kubenswrapper[5043]: I1125 07:19:57.252516 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xzgnv" Nov 25 07:19:57 crc kubenswrapper[5043]: I1125 07:19:57.252554 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xzgnv" Nov 25 07:19:57 crc kubenswrapper[5043]: I1125 07:19:57.297324 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xzgnv" Nov 25 07:19:57 crc kubenswrapper[5043]: I1125 07:19:57.330164 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nbpdq" Nov 25 07:19:57 crc kubenswrapper[5043]: I1125 07:19:57.330227 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m6pf7" Nov 25 07:19:57 crc kubenswrapper[5043]: I1125 07:19:57.335734 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xzgnv" Nov 25 07:19:57 crc kubenswrapper[5043]: I1125 07:19:57.961467 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xzgnv"] Nov 25 07:19:58 crc kubenswrapper[5043]: I1125 07:19:58.738379 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-j7b8d"] Nov 25 07:19:58 crc kubenswrapper[5043]: I1125 07:19:58.803356 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-56hks" Nov 25 07:19:58 crc kubenswrapper[5043]: I1125 07:19:58.803649 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-56hks" Nov 25 07:19:58 crc kubenswrapper[5043]: I1125 07:19:58.843138 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-56hks" Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.238777 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-88657" Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.238826 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-88657" Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.289936 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xzgnv" podUID="184c9204-08a1-4df5-9451-6902efca14e2" containerName="registry-server" containerID="cri-o://b226bb9aedb591fc3597ae44e690199f80246fd471df8b2d68759fa96de61d55" gracePeriod=2 Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.299755 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-88657" Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.336899 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-56hks" Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.347099 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-88657" Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.365071 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nbpdq"] Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.365377 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nbpdq" podUID="51d29444-594f-4078-a3bc-9fc83f17e4cf" 
containerName="registry-server" containerID="cri-o://ce682e41ccda075468c062075df1c1efc76c673f31d76f3252b8fa53f0df2f63" gracePeriod=2 Nov 25 07:19:59 crc kubenswrapper[5043]: E1125 07:19:59.462825 5043 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51d29444_594f_4078_a3bc_9fc83f17e4cf.slice/crio-conmon-ce682e41ccda075468c062075df1c1efc76c673f31d76f3252b8fa53f0df2f63.scope\": RecentStats: unable to find data in memory cache]" Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.616923 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xzgnv" Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.686194 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nbpdq" Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.777321 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/184c9204-08a1-4df5-9451-6902efca14e2-utilities\") pod \"184c9204-08a1-4df5-9451-6902efca14e2\" (UID: \"184c9204-08a1-4df5-9451-6902efca14e2\") " Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.777416 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clf68\" (UniqueName: \"kubernetes.io/projected/184c9204-08a1-4df5-9451-6902efca14e2-kube-api-access-clf68\") pod \"184c9204-08a1-4df5-9451-6902efca14e2\" (UID: \"184c9204-08a1-4df5-9451-6902efca14e2\") " Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.777450 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/184c9204-08a1-4df5-9451-6902efca14e2-catalog-content\") pod \"184c9204-08a1-4df5-9451-6902efca14e2\" (UID: 
\"184c9204-08a1-4df5-9451-6902efca14e2\") " Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.778628 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/184c9204-08a1-4df5-9451-6902efca14e2-utilities" (OuterVolumeSpecName: "utilities") pod "184c9204-08a1-4df5-9451-6902efca14e2" (UID: "184c9204-08a1-4df5-9451-6902efca14e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.783220 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/184c9204-08a1-4df5-9451-6902efca14e2-kube-api-access-clf68" (OuterVolumeSpecName: "kube-api-access-clf68") pod "184c9204-08a1-4df5-9451-6902efca14e2" (UID: "184c9204-08a1-4df5-9451-6902efca14e2"). InnerVolumeSpecName "kube-api-access-clf68". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.822721 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/184c9204-08a1-4df5-9451-6902efca14e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "184c9204-08a1-4df5-9451-6902efca14e2" (UID: "184c9204-08a1-4df5-9451-6902efca14e2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.878896 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87ntg\" (UniqueName: \"kubernetes.io/projected/51d29444-594f-4078-a3bc-9fc83f17e4cf-kube-api-access-87ntg\") pod \"51d29444-594f-4078-a3bc-9fc83f17e4cf\" (UID: \"51d29444-594f-4078-a3bc-9fc83f17e4cf\") " Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.878992 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51d29444-594f-4078-a3bc-9fc83f17e4cf-utilities\") pod \"51d29444-594f-4078-a3bc-9fc83f17e4cf\" (UID: \"51d29444-594f-4078-a3bc-9fc83f17e4cf\") " Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.879033 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51d29444-594f-4078-a3bc-9fc83f17e4cf-catalog-content\") pod \"51d29444-594f-4078-a3bc-9fc83f17e4cf\" (UID: \"51d29444-594f-4078-a3bc-9fc83f17e4cf\") " Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.879297 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/184c9204-08a1-4df5-9451-6902efca14e2-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.879316 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clf68\" (UniqueName: \"kubernetes.io/projected/184c9204-08a1-4df5-9451-6902efca14e2-kube-api-access-clf68\") on node \"crc\" DevicePath \"\"" Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.879330 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/184c9204-08a1-4df5-9451-6902efca14e2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.879955 
5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51d29444-594f-4078-a3bc-9fc83f17e4cf-utilities" (OuterVolumeSpecName: "utilities") pod "51d29444-594f-4078-a3bc-9fc83f17e4cf" (UID: "51d29444-594f-4078-a3bc-9fc83f17e4cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.881439 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51d29444-594f-4078-a3bc-9fc83f17e4cf-kube-api-access-87ntg" (OuterVolumeSpecName: "kube-api-access-87ntg") pod "51d29444-594f-4078-a3bc-9fc83f17e4cf" (UID: "51d29444-594f-4078-a3bc-9fc83f17e4cf"). InnerVolumeSpecName "kube-api-access-87ntg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.930434 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51d29444-594f-4078-a3bc-9fc83f17e4cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51d29444-594f-4078-a3bc-9fc83f17e4cf" (UID: "51d29444-594f-4078-a3bc-9fc83f17e4cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.980413 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51d29444-594f-4078-a3bc-9fc83f17e4cf-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.980473 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87ntg\" (UniqueName: \"kubernetes.io/projected/51d29444-594f-4078-a3bc-9fc83f17e4cf-kube-api-access-87ntg\") on node \"crc\" DevicePath \"\"" Nov 25 07:19:59 crc kubenswrapper[5043]: I1125 07:19:59.980489 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51d29444-594f-4078-a3bc-9fc83f17e4cf-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.297367 5043 generic.go:334] "Generic (PLEG): container finished" podID="184c9204-08a1-4df5-9451-6902efca14e2" containerID="b226bb9aedb591fc3597ae44e690199f80246fd471df8b2d68759fa96de61d55" exitCode=0 Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.297481 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xzgnv" event={"ID":"184c9204-08a1-4df5-9451-6902efca14e2","Type":"ContainerDied","Data":"b226bb9aedb591fc3597ae44e690199f80246fd471df8b2d68759fa96de61d55"} Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.297517 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xzgnv" event={"ID":"184c9204-08a1-4df5-9451-6902efca14e2","Type":"ContainerDied","Data":"1bf40cb87f4f94973bf795cc73eae4c4fba529bca39e7ec8f3d7f17c20a950f0"} Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.297541 5043 scope.go:117] "RemoveContainer" containerID="b226bb9aedb591fc3597ae44e690199f80246fd471df8b2d68759fa96de61d55" Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 
07:20:00.297699 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xzgnv" Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.304746 5043 generic.go:334] "Generic (PLEG): container finished" podID="51d29444-594f-4078-a3bc-9fc83f17e4cf" containerID="ce682e41ccda075468c062075df1c1efc76c673f31d76f3252b8fa53f0df2f63" exitCode=0 Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.305472 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nbpdq" Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.308790 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbpdq" event={"ID":"51d29444-594f-4078-a3bc-9fc83f17e4cf","Type":"ContainerDied","Data":"ce682e41ccda075468c062075df1c1efc76c673f31d76f3252b8fa53f0df2f63"} Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.308990 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbpdq" event={"ID":"51d29444-594f-4078-a3bc-9fc83f17e4cf","Type":"ContainerDied","Data":"d7a9cc2cd3b278637da510db28fe04cff03270e13db6ec25f1f36e16498944a5"} Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.320209 5043 scope.go:117] "RemoveContainer" containerID="ef097b5fdaeed258cc13a804d89a5b9ecd139556d371d4790e7c68cb9bfc95e2" Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.331306 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xzgnv"] Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.335139 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xzgnv"] Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.343075 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nbpdq"] Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 
07:20:00.352901 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nbpdq"] Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.361527 5043 scope.go:117] "RemoveContainer" containerID="4acdd111cfa89817c8aaf4301d97cb6b78e9c548162632d0644b8cee7f02d61e" Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.372409 5043 scope.go:117] "RemoveContainer" containerID="b226bb9aedb591fc3597ae44e690199f80246fd471df8b2d68759fa96de61d55" Nov 25 07:20:00 crc kubenswrapper[5043]: E1125 07:20:00.372792 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b226bb9aedb591fc3597ae44e690199f80246fd471df8b2d68759fa96de61d55\": container with ID starting with b226bb9aedb591fc3597ae44e690199f80246fd471df8b2d68759fa96de61d55 not found: ID does not exist" containerID="b226bb9aedb591fc3597ae44e690199f80246fd471df8b2d68759fa96de61d55" Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.372839 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b226bb9aedb591fc3597ae44e690199f80246fd471df8b2d68759fa96de61d55"} err="failed to get container status \"b226bb9aedb591fc3597ae44e690199f80246fd471df8b2d68759fa96de61d55\": rpc error: code = NotFound desc = could not find container \"b226bb9aedb591fc3597ae44e690199f80246fd471df8b2d68759fa96de61d55\": container with ID starting with b226bb9aedb591fc3597ae44e690199f80246fd471df8b2d68759fa96de61d55 not found: ID does not exist" Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.372871 5043 scope.go:117] "RemoveContainer" containerID="ef097b5fdaeed258cc13a804d89a5b9ecd139556d371d4790e7c68cb9bfc95e2" Nov 25 07:20:00 crc kubenswrapper[5043]: E1125 07:20:00.373191 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef097b5fdaeed258cc13a804d89a5b9ecd139556d371d4790e7c68cb9bfc95e2\": container with ID 
starting with ef097b5fdaeed258cc13a804d89a5b9ecd139556d371d4790e7c68cb9bfc95e2 not found: ID does not exist" containerID="ef097b5fdaeed258cc13a804d89a5b9ecd139556d371d4790e7c68cb9bfc95e2" Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.373231 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef097b5fdaeed258cc13a804d89a5b9ecd139556d371d4790e7c68cb9bfc95e2"} err="failed to get container status \"ef097b5fdaeed258cc13a804d89a5b9ecd139556d371d4790e7c68cb9bfc95e2\": rpc error: code = NotFound desc = could not find container \"ef097b5fdaeed258cc13a804d89a5b9ecd139556d371d4790e7c68cb9bfc95e2\": container with ID starting with ef097b5fdaeed258cc13a804d89a5b9ecd139556d371d4790e7c68cb9bfc95e2 not found: ID does not exist" Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.373257 5043 scope.go:117] "RemoveContainer" containerID="4acdd111cfa89817c8aaf4301d97cb6b78e9c548162632d0644b8cee7f02d61e" Nov 25 07:20:00 crc kubenswrapper[5043]: E1125 07:20:00.373541 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4acdd111cfa89817c8aaf4301d97cb6b78e9c548162632d0644b8cee7f02d61e\": container with ID starting with 4acdd111cfa89817c8aaf4301d97cb6b78e9c548162632d0644b8cee7f02d61e not found: ID does not exist" containerID="4acdd111cfa89817c8aaf4301d97cb6b78e9c548162632d0644b8cee7f02d61e" Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.373570 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4acdd111cfa89817c8aaf4301d97cb6b78e9c548162632d0644b8cee7f02d61e"} err="failed to get container status \"4acdd111cfa89817c8aaf4301d97cb6b78e9c548162632d0644b8cee7f02d61e\": rpc error: code = NotFound desc = could not find container \"4acdd111cfa89817c8aaf4301d97cb6b78e9c548162632d0644b8cee7f02d61e\": container with ID starting with 4acdd111cfa89817c8aaf4301d97cb6b78e9c548162632d0644b8cee7f02d61e not found: 
ID does not exist" Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.373586 5043 scope.go:117] "RemoveContainer" containerID="ce682e41ccda075468c062075df1c1efc76c673f31d76f3252b8fa53f0df2f63" Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.385149 5043 scope.go:117] "RemoveContainer" containerID="4607679df7578f798e60f808d2aeb19bcb44fdb3622d80985a3e30fddbf367b4" Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.446789 5043 scope.go:117] "RemoveContainer" containerID="dd0c32e41eef6d9abdbd3a448a70948a36bc6fd30884bb43ade1e37e6d813eba" Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.458593 5043 scope.go:117] "RemoveContainer" containerID="ce682e41ccda075468c062075df1c1efc76c673f31d76f3252b8fa53f0df2f63" Nov 25 07:20:00 crc kubenswrapper[5043]: E1125 07:20:00.459035 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce682e41ccda075468c062075df1c1efc76c673f31d76f3252b8fa53f0df2f63\": container with ID starting with ce682e41ccda075468c062075df1c1efc76c673f31d76f3252b8fa53f0df2f63 not found: ID does not exist" containerID="ce682e41ccda075468c062075df1c1efc76c673f31d76f3252b8fa53f0df2f63" Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.459079 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce682e41ccda075468c062075df1c1efc76c673f31d76f3252b8fa53f0df2f63"} err="failed to get container status \"ce682e41ccda075468c062075df1c1efc76c673f31d76f3252b8fa53f0df2f63\": rpc error: code = NotFound desc = could not find container \"ce682e41ccda075468c062075df1c1efc76c673f31d76f3252b8fa53f0df2f63\": container with ID starting with ce682e41ccda075468c062075df1c1efc76c673f31d76f3252b8fa53f0df2f63 not found: ID does not exist" Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.459112 5043 scope.go:117] "RemoveContainer" containerID="4607679df7578f798e60f808d2aeb19bcb44fdb3622d80985a3e30fddbf367b4" Nov 25 07:20:00 crc 
kubenswrapper[5043]: E1125 07:20:00.459536 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4607679df7578f798e60f808d2aeb19bcb44fdb3622d80985a3e30fddbf367b4\": container with ID starting with 4607679df7578f798e60f808d2aeb19bcb44fdb3622d80985a3e30fddbf367b4 not found: ID does not exist" containerID="4607679df7578f798e60f808d2aeb19bcb44fdb3622d80985a3e30fddbf367b4" Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.459581 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4607679df7578f798e60f808d2aeb19bcb44fdb3622d80985a3e30fddbf367b4"} err="failed to get container status \"4607679df7578f798e60f808d2aeb19bcb44fdb3622d80985a3e30fddbf367b4\": rpc error: code = NotFound desc = could not find container \"4607679df7578f798e60f808d2aeb19bcb44fdb3622d80985a3e30fddbf367b4\": container with ID starting with 4607679df7578f798e60f808d2aeb19bcb44fdb3622d80985a3e30fddbf367b4 not found: ID does not exist" Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.459632 5043 scope.go:117] "RemoveContainer" containerID="dd0c32e41eef6d9abdbd3a448a70948a36bc6fd30884bb43ade1e37e6d813eba" Nov 25 07:20:00 crc kubenswrapper[5043]: E1125 07:20:00.459997 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd0c32e41eef6d9abdbd3a448a70948a36bc6fd30884bb43ade1e37e6d813eba\": container with ID starting with dd0c32e41eef6d9abdbd3a448a70948a36bc6fd30884bb43ade1e37e6d813eba not found: ID does not exist" containerID="dd0c32e41eef6d9abdbd3a448a70948a36bc6fd30884bb43ade1e37e6d813eba" Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.460036 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd0c32e41eef6d9abdbd3a448a70948a36bc6fd30884bb43ade1e37e6d813eba"} err="failed to get container status 
\"dd0c32e41eef6d9abdbd3a448a70948a36bc6fd30884bb43ade1e37e6d813eba\": rpc error: code = NotFound desc = could not find container \"dd0c32e41eef6d9abdbd3a448a70948a36bc6fd30884bb43ade1e37e6d813eba\": container with ID starting with dd0c32e41eef6d9abdbd3a448a70948a36bc6fd30884bb43ade1e37e6d813eba not found: ID does not exist" Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.969380 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="184c9204-08a1-4df5-9451-6902efca14e2" path="/var/lib/kubelet/pods/184c9204-08a1-4df5-9451-6902efca14e2/volumes" Nov 25 07:20:00 crc kubenswrapper[5043]: I1125 07:20:00.969966 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51d29444-594f-4078-a3bc-9fc83f17e4cf" path="/var/lib/kubelet/pods/51d29444-594f-4078-a3bc-9fc83f17e4cf/volumes" Nov 25 07:20:01 crc kubenswrapper[5043]: I1125 07:20:01.760355 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-88657"] Nov 25 07:20:01 crc kubenswrapper[5043]: I1125 07:20:01.760822 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-88657" podUID="e2d09ac1-0cde-4f72-894c-07d74837ef3c" containerName="registry-server" containerID="cri-o://b03d95cb2e8dd66f10430ce2f87a4e49c70623986aef7ccf6cd4720cbf27b4a6" gracePeriod=2 Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.073292 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-88657" Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.207791 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2d09ac1-0cde-4f72-894c-07d74837ef3c-utilities\") pod \"e2d09ac1-0cde-4f72-894c-07d74837ef3c\" (UID: \"e2d09ac1-0cde-4f72-894c-07d74837ef3c\") " Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.207845 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2d09ac1-0cde-4f72-894c-07d74837ef3c-catalog-content\") pod \"e2d09ac1-0cde-4f72-894c-07d74837ef3c\" (UID: \"e2d09ac1-0cde-4f72-894c-07d74837ef3c\") " Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.207887 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj2td\" (UniqueName: \"kubernetes.io/projected/e2d09ac1-0cde-4f72-894c-07d74837ef3c-kube-api-access-sj2td\") pod \"e2d09ac1-0cde-4f72-894c-07d74837ef3c\" (UID: \"e2d09ac1-0cde-4f72-894c-07d74837ef3c\") " Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.209056 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2d09ac1-0cde-4f72-894c-07d74837ef3c-utilities" (OuterVolumeSpecName: "utilities") pod "e2d09ac1-0cde-4f72-894c-07d74837ef3c" (UID: "e2d09ac1-0cde-4f72-894c-07d74837ef3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.217657 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d09ac1-0cde-4f72-894c-07d74837ef3c-kube-api-access-sj2td" (OuterVolumeSpecName: "kube-api-access-sj2td") pod "e2d09ac1-0cde-4f72-894c-07d74837ef3c" (UID: "e2d09ac1-0cde-4f72-894c-07d74837ef3c"). InnerVolumeSpecName "kube-api-access-sj2td". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.229127 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2d09ac1-0cde-4f72-894c-07d74837ef3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2d09ac1-0cde-4f72-894c-07d74837ef3c" (UID: "e2d09ac1-0cde-4f72-894c-07d74837ef3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.308974 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2d09ac1-0cde-4f72-894c-07d74837ef3c-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.309239 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2d09ac1-0cde-4f72-894c-07d74837ef3c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.309252 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj2td\" (UniqueName: \"kubernetes.io/projected/e2d09ac1-0cde-4f72-894c-07d74837ef3c-kube-api-access-sj2td\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.319363 5043 generic.go:334] "Generic (PLEG): container finished" podID="e2d09ac1-0cde-4f72-894c-07d74837ef3c" containerID="b03d95cb2e8dd66f10430ce2f87a4e49c70623986aef7ccf6cd4720cbf27b4a6" exitCode=0 Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.319408 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88657" event={"ID":"e2d09ac1-0cde-4f72-894c-07d74837ef3c","Type":"ContainerDied","Data":"b03d95cb2e8dd66f10430ce2f87a4e49c70623986aef7ccf6cd4720cbf27b4a6"} Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.319438 5043 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-88657" Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.319448 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88657" event={"ID":"e2d09ac1-0cde-4f72-894c-07d74837ef3c","Type":"ContainerDied","Data":"b8d05512a2dbb56966d55afee01ed1dcee7fdd1a60cfd6508dca20e48cfa267d"} Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.319468 5043 scope.go:117] "RemoveContainer" containerID="b03d95cb2e8dd66f10430ce2f87a4e49c70623986aef7ccf6cd4720cbf27b4a6" Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.339714 5043 scope.go:117] "RemoveContainer" containerID="83f2a00ee855afd1d09ac8c2087d456f97b08fdf792149e3768a1e40640aed33" Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.347276 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-88657"] Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.351365 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-88657"] Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.369653 5043 scope.go:117] "RemoveContainer" containerID="e72fb1c1b061d23388227a2995cfd6624c8deac65c8f4da3b03534165fb53403" Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.381939 5043 scope.go:117] "RemoveContainer" containerID="b03d95cb2e8dd66f10430ce2f87a4e49c70623986aef7ccf6cd4720cbf27b4a6" Nov 25 07:20:02 crc kubenswrapper[5043]: E1125 07:20:02.382271 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b03d95cb2e8dd66f10430ce2f87a4e49c70623986aef7ccf6cd4720cbf27b4a6\": container with ID starting with b03d95cb2e8dd66f10430ce2f87a4e49c70623986aef7ccf6cd4720cbf27b4a6 not found: ID does not exist" containerID="b03d95cb2e8dd66f10430ce2f87a4e49c70623986aef7ccf6cd4720cbf27b4a6" Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.382316 5043 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b03d95cb2e8dd66f10430ce2f87a4e49c70623986aef7ccf6cd4720cbf27b4a6"} err="failed to get container status \"b03d95cb2e8dd66f10430ce2f87a4e49c70623986aef7ccf6cd4720cbf27b4a6\": rpc error: code = NotFound desc = could not find container \"b03d95cb2e8dd66f10430ce2f87a4e49c70623986aef7ccf6cd4720cbf27b4a6\": container with ID starting with b03d95cb2e8dd66f10430ce2f87a4e49c70623986aef7ccf6cd4720cbf27b4a6 not found: ID does not exist" Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.382343 5043 scope.go:117] "RemoveContainer" containerID="83f2a00ee855afd1d09ac8c2087d456f97b08fdf792149e3768a1e40640aed33" Nov 25 07:20:02 crc kubenswrapper[5043]: E1125 07:20:02.382677 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83f2a00ee855afd1d09ac8c2087d456f97b08fdf792149e3768a1e40640aed33\": container with ID starting with 83f2a00ee855afd1d09ac8c2087d456f97b08fdf792149e3768a1e40640aed33 not found: ID does not exist" containerID="83f2a00ee855afd1d09ac8c2087d456f97b08fdf792149e3768a1e40640aed33" Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.382733 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83f2a00ee855afd1d09ac8c2087d456f97b08fdf792149e3768a1e40640aed33"} err="failed to get container status \"83f2a00ee855afd1d09ac8c2087d456f97b08fdf792149e3768a1e40640aed33\": rpc error: code = NotFound desc = could not find container \"83f2a00ee855afd1d09ac8c2087d456f97b08fdf792149e3768a1e40640aed33\": container with ID starting with 83f2a00ee855afd1d09ac8c2087d456f97b08fdf792149e3768a1e40640aed33 not found: ID does not exist" Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.382763 5043 scope.go:117] "RemoveContainer" containerID="e72fb1c1b061d23388227a2995cfd6624c8deac65c8f4da3b03534165fb53403" Nov 25 07:20:02 crc kubenswrapper[5043]: E1125 
07:20:02.383124 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e72fb1c1b061d23388227a2995cfd6624c8deac65c8f4da3b03534165fb53403\": container with ID starting with e72fb1c1b061d23388227a2995cfd6624c8deac65c8f4da3b03534165fb53403 not found: ID does not exist" containerID="e72fb1c1b061d23388227a2995cfd6624c8deac65c8f4da3b03534165fb53403" Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.383168 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e72fb1c1b061d23388227a2995cfd6624c8deac65c8f4da3b03534165fb53403"} err="failed to get container status \"e72fb1c1b061d23388227a2995cfd6624c8deac65c8f4da3b03534165fb53403\": rpc error: code = NotFound desc = could not find container \"e72fb1c1b061d23388227a2995cfd6624c8deac65c8f4da3b03534165fb53403\": container with ID starting with e72fb1c1b061d23388227a2995cfd6624c8deac65c8f4da3b03534165fb53403 not found: ID does not exist" Nov 25 07:20:02 crc kubenswrapper[5043]: I1125 07:20:02.969244 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2d09ac1-0cde-4f72-894c-07d74837ef3c" path="/var/lib/kubelet/pods/e2d09ac1-0cde-4f72-894c-07d74837ef3c/volumes" Nov 25 07:20:23 crc kubenswrapper[5043]: I1125 07:20:23.778659 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" podUID="4531175b-96a0-44e6-8cb8-c69bf9eb9d27" containerName="oauth-openshift" containerID="cri-o://ec796cf82be3ccafb1c8d6103c549d4bd8b62d49ccfca2941e5cfcf7828aab07" gracePeriod=15 Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.201534 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.253696 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk"] Nov 25 07:20:24 crc kubenswrapper[5043]: E1125 07:20:24.254069 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31041d64-91fd-40b6-a970-8a0ec1fa7aff" containerName="registry-server" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.254093 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="31041d64-91fd-40b6-a970-8a0ec1fa7aff" containerName="registry-server" Nov 25 07:20:24 crc kubenswrapper[5043]: E1125 07:20:24.254116 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="184c9204-08a1-4df5-9451-6902efca14e2" containerName="extract-utilities" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.254132 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="184c9204-08a1-4df5-9451-6902efca14e2" containerName="extract-utilities" Nov 25 07:20:24 crc kubenswrapper[5043]: E1125 07:20:24.254153 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d09ac1-0cde-4f72-894c-07d74837ef3c" containerName="extract-utilities" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.254168 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d09ac1-0cde-4f72-894c-07d74837ef3c" containerName="extract-utilities" Nov 25 07:20:24 crc kubenswrapper[5043]: E1125 07:20:24.254185 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d09ac1-0cde-4f72-894c-07d74837ef3c" containerName="extract-content" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.254197 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d09ac1-0cde-4f72-894c-07d74837ef3c" containerName="extract-content" Nov 25 07:20:24 crc kubenswrapper[5043]: E1125 07:20:24.254212 5043 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4531175b-96a0-44e6-8cb8-c69bf9eb9d27" containerName="oauth-openshift" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.254226 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4531175b-96a0-44e6-8cb8-c69bf9eb9d27" containerName="oauth-openshift" Nov 25 07:20:24 crc kubenswrapper[5043]: E1125 07:20:24.254252 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="184c9204-08a1-4df5-9451-6902efca14e2" containerName="extract-content" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.254265 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="184c9204-08a1-4df5-9451-6902efca14e2" containerName="extract-content" Nov 25 07:20:24 crc kubenswrapper[5043]: E1125 07:20:24.254280 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b762ed-9a7c-443f-ba4d-794f57349e1d" containerName="pruner" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.254293 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b762ed-9a7c-443f-ba4d-794f57349e1d" containerName="pruner" Nov 25 07:20:24 crc kubenswrapper[5043]: E1125 07:20:24.254321 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31041d64-91fd-40b6-a970-8a0ec1fa7aff" containerName="extract-utilities" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.254334 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="31041d64-91fd-40b6-a970-8a0ec1fa7aff" containerName="extract-utilities" Nov 25 07:20:24 crc kubenswrapper[5043]: E1125 07:20:24.254353 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d29444-594f-4078-a3bc-9fc83f17e4cf" containerName="extract-content" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.254366 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d29444-594f-4078-a3bc-9fc83f17e4cf" containerName="extract-content" Nov 25 07:20:24 crc kubenswrapper[5043]: E1125 07:20:24.254386 5043 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="51d29444-594f-4078-a3bc-9fc83f17e4cf" containerName="extract-utilities" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.254399 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d29444-594f-4078-a3bc-9fc83f17e4cf" containerName="extract-utilities" Nov 25 07:20:24 crc kubenswrapper[5043]: E1125 07:20:24.254420 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7246505a-06f1-4053-a44c-f0e3770ec755" containerName="pruner" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.254433 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="7246505a-06f1-4053-a44c-f0e3770ec755" containerName="pruner" Nov 25 07:20:24 crc kubenswrapper[5043]: E1125 07:20:24.254451 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d09ac1-0cde-4f72-894c-07d74837ef3c" containerName="registry-server" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.254464 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d09ac1-0cde-4f72-894c-07d74837ef3c" containerName="registry-server" Nov 25 07:20:24 crc kubenswrapper[5043]: E1125 07:20:24.254484 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="184c9204-08a1-4df5-9451-6902efca14e2" containerName="registry-server" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.254496 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="184c9204-08a1-4df5-9451-6902efca14e2" containerName="registry-server" Nov 25 07:20:24 crc kubenswrapper[5043]: E1125 07:20:24.254519 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d29444-594f-4078-a3bc-9fc83f17e4cf" containerName="registry-server" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.254532 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d29444-594f-4078-a3bc-9fc83f17e4cf" containerName="registry-server" Nov 25 07:20:24 crc kubenswrapper[5043]: E1125 07:20:24.254549 5043 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="31041d64-91fd-40b6-a970-8a0ec1fa7aff" containerName="extract-content" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.254561 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="31041d64-91fd-40b6-a970-8a0ec1fa7aff" containerName="extract-content" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.254802 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="51d29444-594f-4078-a3bc-9fc83f17e4cf" containerName="registry-server" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.254825 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="7246505a-06f1-4053-a44c-f0e3770ec755" containerName="pruner" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.254847 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9b762ed-9a7c-443f-ba4d-794f57349e1d" containerName="pruner" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.254877 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d09ac1-0cde-4f72-894c-07d74837ef3c" containerName="registry-server" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.254892 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="184c9204-08a1-4df5-9451-6902efca14e2" containerName="registry-server" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.254909 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="4531175b-96a0-44e6-8cb8-c69bf9eb9d27" containerName="oauth-openshift" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.254925 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="31041d64-91fd-40b6-a970-8a0ec1fa7aff" containerName="registry-server" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.255495 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk"] Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.255652 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.300708 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-serving-cert\") pod \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.300795 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-trusted-ca-bundle\") pod \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.300885 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-audit-policies\") pod \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.300952 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-cliconfig\") pod \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.301002 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-audit-dir\") pod \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 
07:20:24.301051 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-user-template-provider-selection\") pod \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.301093 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-user-template-error\") pod \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.301137 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-service-ca\") pod \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.301188 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-ocp-branding-template\") pod \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.301248 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxgxl\" (UniqueName: \"kubernetes.io/projected/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-kube-api-access-fxgxl\") pod \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.301287 5043 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-session\") pod \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.301339 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-user-idp-0-file-data\") pod \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.301379 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-router-certs\") pod \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.301411 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-user-template-login\") pod \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\" (UID: \"4531175b-96a0-44e6-8cb8-c69bf9eb9d27\") " Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.301714 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.301775 5043 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.301831 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-user-template-error\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.301889 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-system-router-certs\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.301935 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.301978 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-system-service-ca\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.302027 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.302085 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-system-session\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.302149 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6614c275-4589-4080-8658-7b9879e68de3-audit-policies\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.302204 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-user-template-login\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: 
\"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.302257 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6614c275-4589-4080-8658-7b9879e68de3-audit-dir\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.302319 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.302391 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vljj\" (UniqueName: \"kubernetes.io/projected/6614c275-4589-4080-8658-7b9879e68de3-kube-api-access-6vljj\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.302451 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.302564 5043 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "4531175b-96a0-44e6-8cb8-c69bf9eb9d27" (UID: "4531175b-96a0-44e6-8cb8-c69bf9eb9d27"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.302849 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "4531175b-96a0-44e6-8cb8-c69bf9eb9d27" (UID: "4531175b-96a0-44e6-8cb8-c69bf9eb9d27"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.303229 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "4531175b-96a0-44e6-8cb8-c69bf9eb9d27" (UID: "4531175b-96a0-44e6-8cb8-c69bf9eb9d27"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.304148 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "4531175b-96a0-44e6-8cb8-c69bf9eb9d27" (UID: "4531175b-96a0-44e6-8cb8-c69bf9eb9d27"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.304522 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "4531175b-96a0-44e6-8cb8-c69bf9eb9d27" (UID: "4531175b-96a0-44e6-8cb8-c69bf9eb9d27"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.307929 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "4531175b-96a0-44e6-8cb8-c69bf9eb9d27" (UID: "4531175b-96a0-44e6-8cb8-c69bf9eb9d27"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.308342 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "4531175b-96a0-44e6-8cb8-c69bf9eb9d27" (UID: "4531175b-96a0-44e6-8cb8-c69bf9eb9d27"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.309344 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "4531175b-96a0-44e6-8cb8-c69bf9eb9d27" (UID: "4531175b-96a0-44e6-8cb8-c69bf9eb9d27"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.309677 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "4531175b-96a0-44e6-8cb8-c69bf9eb9d27" (UID: "4531175b-96a0-44e6-8cb8-c69bf9eb9d27"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.317908 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-kube-api-access-fxgxl" (OuterVolumeSpecName: "kube-api-access-fxgxl") pod "4531175b-96a0-44e6-8cb8-c69bf9eb9d27" (UID: "4531175b-96a0-44e6-8cb8-c69bf9eb9d27"). InnerVolumeSpecName "kube-api-access-fxgxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.318584 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "4531175b-96a0-44e6-8cb8-c69bf9eb9d27" (UID: "4531175b-96a0-44e6-8cb8-c69bf9eb9d27"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.318962 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "4531175b-96a0-44e6-8cb8-c69bf9eb9d27" (UID: "4531175b-96a0-44e6-8cb8-c69bf9eb9d27"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.319356 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "4531175b-96a0-44e6-8cb8-c69bf9eb9d27" (UID: "4531175b-96a0-44e6-8cb8-c69bf9eb9d27"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.319592 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "4531175b-96a0-44e6-8cb8-c69bf9eb9d27" (UID: "4531175b-96a0-44e6-8cb8-c69bf9eb9d27"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.403596 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.403665 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.403694 5043 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-user-template-error\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.403720 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-system-router-certs\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.403742 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.403758 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-system-service-ca\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.403782 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.403809 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-system-session\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.403837 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6614c275-4589-4080-8658-7b9879e68de3-audit-policies\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.403860 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-user-template-login\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.403882 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6614c275-4589-4080-8658-7b9879e68de3-audit-dir\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 
07:20:24.403908 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.403938 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vljj\" (UniqueName: \"kubernetes.io/projected/6614c275-4589-4080-8658-7b9879e68de3-kube-api-access-6vljj\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.403973 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.404026 5043 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.404043 5043 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.404056 5043 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.404071 5043 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.404083 5043 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.404095 5043 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.404107 5043 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.404121 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxgxl\" (UniqueName: \"kubernetes.io/projected/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-kube-api-access-fxgxl\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.404132 5043 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.404144 5043 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.404155 5043 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.404169 5043 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.404181 5043 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.404194 5043 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4531175b-96a0-44e6-8cb8-c69bf9eb9d27-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.405364 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6614c275-4589-4080-8658-7b9879e68de3-audit-dir\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.406557 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.406789 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.406978 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6614c275-4589-4080-8658-7b9879e68de3-audit-policies\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.407275 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-system-service-ca\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.408124 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " 
pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.408485 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-user-template-error\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.409672 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.410476 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-system-router-certs\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.411841 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.414317 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.422015 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-system-session\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.424967 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6614c275-4589-4080-8658-7b9879e68de3-v4-0-config-user-template-login\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.427912 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vljj\" (UniqueName: \"kubernetes.io/projected/6614c275-4589-4080-8658-7b9879e68de3-kube-api-access-6vljj\") pod \"oauth-openshift-764f9b7cd5-vs2nk\" (UID: \"6614c275-4589-4080-8658-7b9879e68de3\") " pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.441287 5043 generic.go:334] "Generic (PLEG): container finished" podID="4531175b-96a0-44e6-8cb8-c69bf9eb9d27" containerID="ec796cf82be3ccafb1c8d6103c549d4bd8b62d49ccfca2941e5cfcf7828aab07" exitCode=0 Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.441366 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" event={"ID":"4531175b-96a0-44e6-8cb8-c69bf9eb9d27","Type":"ContainerDied","Data":"ec796cf82be3ccafb1c8d6103c549d4bd8b62d49ccfca2941e5cfcf7828aab07"} Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.441399 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.441424 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-j7b8d" event={"ID":"4531175b-96a0-44e6-8cb8-c69bf9eb9d27","Type":"ContainerDied","Data":"76ca41e4848c74159602c7af9da9868dd7427df5a6d32c680291ae1fbba4142c"} Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.441466 5043 scope.go:117] "RemoveContainer" containerID="ec796cf82be3ccafb1c8d6103c549d4bd8b62d49ccfca2941e5cfcf7828aab07" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.467260 5043 scope.go:117] "RemoveContainer" containerID="ec796cf82be3ccafb1c8d6103c549d4bd8b62d49ccfca2941e5cfcf7828aab07" Nov 25 07:20:24 crc kubenswrapper[5043]: E1125 07:20:24.467957 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec796cf82be3ccafb1c8d6103c549d4bd8b62d49ccfca2941e5cfcf7828aab07\": container with ID starting with ec796cf82be3ccafb1c8d6103c549d4bd8b62d49ccfca2941e5cfcf7828aab07 not found: ID does not exist" containerID="ec796cf82be3ccafb1c8d6103c549d4bd8b62d49ccfca2941e5cfcf7828aab07" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.468014 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec796cf82be3ccafb1c8d6103c549d4bd8b62d49ccfca2941e5cfcf7828aab07"} err="failed to get container status \"ec796cf82be3ccafb1c8d6103c549d4bd8b62d49ccfca2941e5cfcf7828aab07\": rpc error: code = NotFound desc = could not find container 
\"ec796cf82be3ccafb1c8d6103c549d4bd8b62d49ccfca2941e5cfcf7828aab07\": container with ID starting with ec796cf82be3ccafb1c8d6103c549d4bd8b62d49ccfca2941e5cfcf7828aab07 not found: ID does not exist" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.469251 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-j7b8d"] Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.472084 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-j7b8d"] Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.578885 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.771186 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk"] Nov 25 07:20:24 crc kubenswrapper[5043]: I1125 07:20:24.972678 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4531175b-96a0-44e6-8cb8-c69bf9eb9d27" path="/var/lib/kubelet/pods/4531175b-96a0-44e6-8cb8-c69bf9eb9d27/volumes" Nov 25 07:20:25 crc kubenswrapper[5043]: I1125 07:20:25.451858 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" event={"ID":"6614c275-4589-4080-8658-7b9879e68de3","Type":"ContainerStarted","Data":"ce7c04f0ec5aea221486a3b3333b71956c9fc412a4a1c993294d9e9d23d7f6e7"} Nov 25 07:20:25 crc kubenswrapper[5043]: I1125 07:20:25.452757 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:25 crc kubenswrapper[5043]: I1125 07:20:25.453030 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" 
event={"ID":"6614c275-4589-4080-8658-7b9879e68de3","Type":"ContainerStarted","Data":"98f9c9498935983f2da684109cfa6c3cca6378cf994870d9c38f04f6a2891376"} Nov 25 07:20:25 crc kubenswrapper[5043]: I1125 07:20:25.462283 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" Nov 25 07:20:25 crc kubenswrapper[5043]: I1125 07:20:25.484792 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-764f9b7cd5-vs2nk" podStartSLOduration=27.484766901 podStartE2EDuration="27.484766901s" podCreationTimestamp="2025-11-25 07:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:20:25.483317022 +0000 UTC m=+289.651512823" watchObservedRunningTime="2025-11-25 07:20:25.484766901 +0000 UTC m=+289.652962642" Nov 25 07:20:36 crc kubenswrapper[5043]: I1125 07:20:36.779112 5043 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Nov 25 07:20:51 crc kubenswrapper[5043]: I1125 07:20:51.678785 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m6pf7"] Nov 25 07:20:51 crc kubenswrapper[5043]: I1125 07:20:51.685120 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m6pf7" podUID="3b23210f-b31d-486b-9fe0-25c8b2ed2645" containerName="registry-server" containerID="cri-o://d81155e7e7f9c6490687001cfcd382ad897e2c211092eb94e34f9f86eda9b854" gracePeriod=30 Nov 25 07:20:51 crc kubenswrapper[5043]: I1125 07:20:51.690219 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9dxfx"] Nov 25 07:20:51 crc kubenswrapper[5043]: I1125 07:20:51.690441 5043 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-9dxfx" podUID="3032faa6-654a-4e8f-b494-061c7de9688a" containerName="registry-server" containerID="cri-o://c6d399424468eadc1b28b72c10e06d574b626b04ce9cb0d6ed556bc0e4848568" gracePeriod=30 Nov 25 07:20:51 crc kubenswrapper[5043]: I1125 07:20:51.705669 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hqtnq"] Nov 25 07:20:51 crc kubenswrapper[5043]: I1125 07:20:51.705916 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-hqtnq" podUID="f3a13dff-3c0c-4151-9514-42c40e8bc83f" containerName="marketplace-operator" containerID="cri-o://caaf42d6054a0a832bfd7a7eb6487a1897df6360659e66a2c4cf16f8cbab3bb6" gracePeriod=30 Nov 25 07:20:51 crc kubenswrapper[5043]: I1125 07:20:51.712011 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-56hks"] Nov 25 07:20:51 crc kubenswrapper[5043]: I1125 07:20:51.712283 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-56hks" podUID="ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f" containerName="registry-server" containerID="cri-o://42e478b69328ce7dcbf6addd63e741f8d46d100e70f1e84f478f68aa1a574c9f" gracePeriod=30 Nov 25 07:20:51 crc kubenswrapper[5043]: I1125 07:20:51.723127 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r9cgm"] Nov 25 07:20:51 crc kubenswrapper[5043]: I1125 07:20:51.723517 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r9cgm" podUID="bc2101f3-91d7-43e8-b118-7966b9633c1e" containerName="registry-server" containerID="cri-o://efa46bedbe8860d98b41bdb9736ce761065fbdef3c75b96611a66e01ba4a0eea" gracePeriod=30 Nov 25 07:20:51 crc kubenswrapper[5043]: I1125 07:20:51.734667 5043 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-86pjb"] Nov 25 07:20:51 crc kubenswrapper[5043]: I1125 07:20:51.735785 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-86pjb" Nov 25 07:20:51 crc kubenswrapper[5043]: I1125 07:20:51.742085 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-86pjb"] Nov 25 07:20:51 crc kubenswrapper[5043]: I1125 07:20:51.781498 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wftwl\" (UniqueName: \"kubernetes.io/projected/b3387cfc-ac92-4b17-b153-e30513638741-kube-api-access-wftwl\") pod \"marketplace-operator-79b997595-86pjb\" (UID: \"b3387cfc-ac92-4b17-b153-e30513638741\") " pod="openshift-marketplace/marketplace-operator-79b997595-86pjb" Nov 25 07:20:51 crc kubenswrapper[5043]: I1125 07:20:51.781574 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3387cfc-ac92-4b17-b153-e30513638741-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-86pjb\" (UID: \"b3387cfc-ac92-4b17-b153-e30513638741\") " pod="openshift-marketplace/marketplace-operator-79b997595-86pjb" Nov 25 07:20:51 crc kubenswrapper[5043]: I1125 07:20:51.781718 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b3387cfc-ac92-4b17-b153-e30513638741-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-86pjb\" (UID: \"b3387cfc-ac92-4b17-b153-e30513638741\") " pod="openshift-marketplace/marketplace-operator-79b997595-86pjb" Nov 25 07:20:51 crc kubenswrapper[5043]: I1125 07:20:51.883479 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3387cfc-ac92-4b17-b153-e30513638741-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-86pjb\" (UID: \"b3387cfc-ac92-4b17-b153-e30513638741\") " pod="openshift-marketplace/marketplace-operator-79b997595-86pjb" Nov 25 07:20:51 crc kubenswrapper[5043]: I1125 07:20:51.883782 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b3387cfc-ac92-4b17-b153-e30513638741-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-86pjb\" (UID: \"b3387cfc-ac92-4b17-b153-e30513638741\") " pod="openshift-marketplace/marketplace-operator-79b997595-86pjb" Nov 25 07:20:51 crc kubenswrapper[5043]: I1125 07:20:51.883821 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wftwl\" (UniqueName: \"kubernetes.io/projected/b3387cfc-ac92-4b17-b153-e30513638741-kube-api-access-wftwl\") pod \"marketplace-operator-79b997595-86pjb\" (UID: \"b3387cfc-ac92-4b17-b153-e30513638741\") " pod="openshift-marketplace/marketplace-operator-79b997595-86pjb" Nov 25 07:20:51 crc kubenswrapper[5043]: I1125 07:20:51.884919 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3387cfc-ac92-4b17-b153-e30513638741-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-86pjb\" (UID: \"b3387cfc-ac92-4b17-b153-e30513638741\") " pod="openshift-marketplace/marketplace-operator-79b997595-86pjb" Nov 25 07:20:51 crc kubenswrapper[5043]: I1125 07:20:51.890316 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b3387cfc-ac92-4b17-b153-e30513638741-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-86pjb\" (UID: \"b3387cfc-ac92-4b17-b153-e30513638741\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-86pjb" Nov 25 07:20:51 crc kubenswrapper[5043]: I1125 07:20:51.900397 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wftwl\" (UniqueName: \"kubernetes.io/projected/b3387cfc-ac92-4b17-b153-e30513638741-kube-api-access-wftwl\") pod \"marketplace-operator-79b997595-86pjb\" (UID: \"b3387cfc-ac92-4b17-b153-e30513638741\") " pod="openshift-marketplace/marketplace-operator-79b997595-86pjb" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.020766 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-86pjb" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.129034 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9dxfx" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.184534 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m6pf7" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.189662 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3032faa6-654a-4e8f-b494-061c7de9688a-utilities\") pod \"3032faa6-654a-4e8f-b494-061c7de9688a\" (UID: \"3032faa6-654a-4e8f-b494-061c7de9688a\") " Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.189737 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gpb9\" (UniqueName: \"kubernetes.io/projected/3032faa6-654a-4e8f-b494-061c7de9688a-kube-api-access-2gpb9\") pod \"3032faa6-654a-4e8f-b494-061c7de9688a\" (UID: \"3032faa6-654a-4e8f-b494-061c7de9688a\") " Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.189786 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3032faa6-654a-4e8f-b494-061c7de9688a-catalog-content\") pod \"3032faa6-654a-4e8f-b494-061c7de9688a\" (UID: \"3032faa6-654a-4e8f-b494-061c7de9688a\") " Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.190317 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3032faa6-654a-4e8f-b494-061c7de9688a-utilities" (OuterVolumeSpecName: "utilities") pod "3032faa6-654a-4e8f-b494-061c7de9688a" (UID: "3032faa6-654a-4e8f-b494-061c7de9688a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.195698 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3032faa6-654a-4e8f-b494-061c7de9688a-kube-api-access-2gpb9" (OuterVolumeSpecName: "kube-api-access-2gpb9") pod "3032faa6-654a-4e8f-b494-061c7de9688a" (UID: "3032faa6-654a-4e8f-b494-061c7de9688a"). InnerVolumeSpecName "kube-api-access-2gpb9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.202875 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-56hks" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.211320 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hqtnq" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.219167 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r9cgm" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.240636 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3032faa6-654a-4e8f-b494-061c7de9688a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3032faa6-654a-4e8f-b494-061c7de9688a" (UID: "3032faa6-654a-4e8f-b494-061c7de9688a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.275055 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-86pjb"] Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.290907 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f3a13dff-3c0c-4151-9514-42c40e8bc83f-marketplace-operator-metrics\") pod \"f3a13dff-3c0c-4151-9514-42c40e8bc83f\" (UID: \"f3a13dff-3c0c-4151-9514-42c40e8bc83f\") " Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.290949 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f-utilities\") pod \"ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f\" (UID: \"ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f\") " Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.290976 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b23210f-b31d-486b-9fe0-25c8b2ed2645-utilities\") pod \"3b23210f-b31d-486b-9fe0-25c8b2ed2645\" (UID: \"3b23210f-b31d-486b-9fe0-25c8b2ed2645\") " Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.291012 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5vnq\" (UniqueName: \"kubernetes.io/projected/3b23210f-b31d-486b-9fe0-25c8b2ed2645-kube-api-access-c5vnq\") pod \"3b23210f-b31d-486b-9fe0-25c8b2ed2645\" (UID: \"3b23210f-b31d-486b-9fe0-25c8b2ed2645\") " Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.291042 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-665kk\" (UniqueName: \"kubernetes.io/projected/bc2101f3-91d7-43e8-b118-7966b9633c1e-kube-api-access-665kk\") pod \"bc2101f3-91d7-43e8-b118-7966b9633c1e\" (UID: 
\"bc2101f3-91d7-43e8-b118-7966b9633c1e\") " Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.291060 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b23210f-b31d-486b-9fe0-25c8b2ed2645-catalog-content\") pod \"3b23210f-b31d-486b-9fe0-25c8b2ed2645\" (UID: \"3b23210f-b31d-486b-9fe0-25c8b2ed2645\") " Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.291081 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3a13dff-3c0c-4151-9514-42c40e8bc83f-marketplace-trusted-ca\") pod \"f3a13dff-3c0c-4151-9514-42c40e8bc83f\" (UID: \"f3a13dff-3c0c-4151-9514-42c40e8bc83f\") " Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.291097 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srvzv\" (UniqueName: \"kubernetes.io/projected/f3a13dff-3c0c-4151-9514-42c40e8bc83f-kube-api-access-srvzv\") pod \"f3a13dff-3c0c-4151-9514-42c40e8bc83f\" (UID: \"f3a13dff-3c0c-4151-9514-42c40e8bc83f\") " Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.291116 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f-catalog-content\") pod \"ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f\" (UID: \"ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f\") " Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.291134 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc2101f3-91d7-43e8-b118-7966b9633c1e-catalog-content\") pod \"bc2101f3-91d7-43e8-b118-7966b9633c1e\" (UID: \"bc2101f3-91d7-43e8-b118-7966b9633c1e\") " Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.291163 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-cs9l7\" (UniqueName: \"kubernetes.io/projected/ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f-kube-api-access-cs9l7\") pod \"ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f\" (UID: \"ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f\") " Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.291195 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc2101f3-91d7-43e8-b118-7966b9633c1e-utilities\") pod \"bc2101f3-91d7-43e8-b118-7966b9633c1e\" (UID: \"bc2101f3-91d7-43e8-b118-7966b9633c1e\") " Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.291357 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gpb9\" (UniqueName: \"kubernetes.io/projected/3032faa6-654a-4e8f-b494-061c7de9688a-kube-api-access-2gpb9\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.291370 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3032faa6-654a-4e8f-b494-061c7de9688a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.291379 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3032faa6-654a-4e8f-b494-061c7de9688a-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.291951 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc2101f3-91d7-43e8-b118-7966b9633c1e-utilities" (OuterVolumeSpecName: "utilities") pod "bc2101f3-91d7-43e8-b118-7966b9633c1e" (UID: "bc2101f3-91d7-43e8-b118-7966b9633c1e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.292849 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3a13dff-3c0c-4151-9514-42c40e8bc83f-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "f3a13dff-3c0c-4151-9514-42c40e8bc83f" (UID: "f3a13dff-3c0c-4151-9514-42c40e8bc83f"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.293317 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f-utilities" (OuterVolumeSpecName: "utilities") pod "ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f" (UID: "ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.293951 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b23210f-b31d-486b-9fe0-25c8b2ed2645-utilities" (OuterVolumeSpecName: "utilities") pod "3b23210f-b31d-486b-9fe0-25c8b2ed2645" (UID: "3b23210f-b31d-486b-9fe0-25c8b2ed2645"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.295819 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b23210f-b31d-486b-9fe0-25c8b2ed2645-kube-api-access-c5vnq" (OuterVolumeSpecName: "kube-api-access-c5vnq") pod "3b23210f-b31d-486b-9fe0-25c8b2ed2645" (UID: "3b23210f-b31d-486b-9fe0-25c8b2ed2645"). InnerVolumeSpecName "kube-api-access-c5vnq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.296030 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a13dff-3c0c-4151-9514-42c40e8bc83f-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "f3a13dff-3c0c-4151-9514-42c40e8bc83f" (UID: "f3a13dff-3c0c-4151-9514-42c40e8bc83f"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.296456 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2101f3-91d7-43e8-b118-7966b9633c1e-kube-api-access-665kk" (OuterVolumeSpecName: "kube-api-access-665kk") pod "bc2101f3-91d7-43e8-b118-7966b9633c1e" (UID: "bc2101f3-91d7-43e8-b118-7966b9633c1e"). InnerVolumeSpecName "kube-api-access-665kk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.297352 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f-kube-api-access-cs9l7" (OuterVolumeSpecName: "kube-api-access-cs9l7") pod "ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f" (UID: "ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f"). InnerVolumeSpecName "kube-api-access-cs9l7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.298241 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3a13dff-3c0c-4151-9514-42c40e8bc83f-kube-api-access-srvzv" (OuterVolumeSpecName: "kube-api-access-srvzv") pod "f3a13dff-3c0c-4151-9514-42c40e8bc83f" (UID: "f3a13dff-3c0c-4151-9514-42c40e8bc83f"). InnerVolumeSpecName "kube-api-access-srvzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.318349 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f" (UID: "ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.350691 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b23210f-b31d-486b-9fe0-25c8b2ed2645-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b23210f-b31d-486b-9fe0-25c8b2ed2645" (UID: "3b23210f-b31d-486b-9fe0-25c8b2ed2645"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.385144 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc2101f3-91d7-43e8-b118-7966b9633c1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc2101f3-91d7-43e8-b118-7966b9633c1e" (UID: "bc2101f3-91d7-43e8-b118-7966b9633c1e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.392986 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srvzv\" (UniqueName: \"kubernetes.io/projected/f3a13dff-3c0c-4151-9514-42c40e8bc83f-kube-api-access-srvzv\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.393019 5043 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3a13dff-3c0c-4151-9514-42c40e8bc83f-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.393033 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.393043 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc2101f3-91d7-43e8-b118-7966b9633c1e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.393054 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs9l7\" (UniqueName: \"kubernetes.io/projected/ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f-kube-api-access-cs9l7\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.393066 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc2101f3-91d7-43e8-b118-7966b9633c1e-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.393078 5043 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f3a13dff-3c0c-4151-9514-42c40e8bc83f-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 
25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.393091 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.393109 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b23210f-b31d-486b-9fe0-25c8b2ed2645-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.393118 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5vnq\" (UniqueName: \"kubernetes.io/projected/3b23210f-b31d-486b-9fe0-25c8b2ed2645-kube-api-access-c5vnq\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.393126 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-665kk\" (UniqueName: \"kubernetes.io/projected/bc2101f3-91d7-43e8-b118-7966b9633c1e-kube-api-access-665kk\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.393134 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b23210f-b31d-486b-9fe0-25c8b2ed2645-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.610925 5043 generic.go:334] "Generic (PLEG): container finished" podID="bc2101f3-91d7-43e8-b118-7966b9633c1e" containerID="efa46bedbe8860d98b41bdb9736ce761065fbdef3c75b96611a66e01ba4a0eea" exitCode=0 Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.610985 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9cgm" event={"ID":"bc2101f3-91d7-43e8-b118-7966b9633c1e","Type":"ContainerDied","Data":"efa46bedbe8860d98b41bdb9736ce761065fbdef3c75b96611a66e01ba4a0eea"} Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.611009 5043 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9cgm" event={"ID":"bc2101f3-91d7-43e8-b118-7966b9633c1e","Type":"ContainerDied","Data":"489ab6daedeed1c774541255b417debe0ca1c0d6e68be3244a1dc9aeb76abebc"} Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.611025 5043 scope.go:117] "RemoveContainer" containerID="efa46bedbe8860d98b41bdb9736ce761065fbdef3c75b96611a66e01ba4a0eea" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.611124 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r9cgm" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.621879 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-86pjb" event={"ID":"b3387cfc-ac92-4b17-b153-e30513638741","Type":"ContainerStarted","Data":"5b4aaaa7c9ae25fc3f2c207d6000d49abee46b2f941621db95392eae6331e824"} Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.621927 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-86pjb" event={"ID":"b3387cfc-ac92-4b17-b153-e30513638741","Type":"ContainerStarted","Data":"bff28d7309fad0ed400814fd0f89619ae2b8566923b8b5255a0a2bfc1cba2877"} Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.622038 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-86pjb" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.623053 5043 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-86pjb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" start-of-body= Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.623095 5043 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-79b997595-86pjb" podUID="b3387cfc-ac92-4b17-b153-e30513638741" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.625201 5043 generic.go:334] "Generic (PLEG): container finished" podID="3032faa6-654a-4e8f-b494-061c7de9688a" containerID="c6d399424468eadc1b28b72c10e06d574b626b04ce9cb0d6ed556bc0e4848568" exitCode=0 Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.625228 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9dxfx" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.625248 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dxfx" event={"ID":"3032faa6-654a-4e8f-b494-061c7de9688a","Type":"ContainerDied","Data":"c6d399424468eadc1b28b72c10e06d574b626b04ce9cb0d6ed556bc0e4848568"} Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.625594 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dxfx" event={"ID":"3032faa6-654a-4e8f-b494-061c7de9688a","Type":"ContainerDied","Data":"7534ff3cb3f76382d27c163483ea5127b44edaf9bb8a07f428c1b25462ff30c1"} Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.627295 5043 generic.go:334] "Generic (PLEG): container finished" podID="3b23210f-b31d-486b-9fe0-25c8b2ed2645" containerID="d81155e7e7f9c6490687001cfcd382ad897e2c211092eb94e34f9f86eda9b854" exitCode=0 Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.627335 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6pf7" event={"ID":"3b23210f-b31d-486b-9fe0-25c8b2ed2645","Type":"ContainerDied","Data":"d81155e7e7f9c6490687001cfcd382ad897e2c211092eb94e34f9f86eda9b854"} Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 
07:20:52.627351 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6pf7" event={"ID":"3b23210f-b31d-486b-9fe0-25c8b2ed2645","Type":"ContainerDied","Data":"b91d02d25de14442482eb86d6116d24cee0e08bb2b6caa4e6f2f5c3bcf1f669e"} Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.627394 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m6pf7" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.630730 5043 generic.go:334] "Generic (PLEG): container finished" podID="ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f" containerID="42e478b69328ce7dcbf6addd63e741f8d46d100e70f1e84f478f68aa1a574c9f" exitCode=0 Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.630845 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-56hks" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.631048 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56hks" event={"ID":"ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f","Type":"ContainerDied","Data":"42e478b69328ce7dcbf6addd63e741f8d46d100e70f1e84f478f68aa1a574c9f"} Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.631078 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56hks" event={"ID":"ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f","Type":"ContainerDied","Data":"5ea05f5b068e9f65d354819aa43ef0d54d8f15c8ccf3a46ec1536088f0f8a981"} Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.632185 5043 generic.go:334] "Generic (PLEG): container finished" podID="f3a13dff-3c0c-4151-9514-42c40e8bc83f" containerID="caaf42d6054a0a832bfd7a7eb6487a1897df6360659e66a2c4cf16f8cbab3bb6" exitCode=0 Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.632226 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-hqtnq" event={"ID":"f3a13dff-3c0c-4151-9514-42c40e8bc83f","Type":"ContainerDied","Data":"caaf42d6054a0a832bfd7a7eb6487a1897df6360659e66a2c4cf16f8cbab3bb6"} Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.632252 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hqtnq" event={"ID":"f3a13dff-3c0c-4151-9514-42c40e8bc83f","Type":"ContainerDied","Data":"375c5082eca5dca8a989aec2ce8e6c23d9e68a959a6f6706b4bfa7900499c96e"} Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.632309 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hqtnq" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.634586 5043 scope.go:117] "RemoveContainer" containerID="2bf45a3bb3fdbe066c2b165d8aab721c4ff936c4bcbc340e8e46c04a2d29272e" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.668385 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-86pjb" podStartSLOduration=1.6683662460000002 podStartE2EDuration="1.668366246s" podCreationTimestamp="2025-11-25 07:20:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:20:52.647679209 +0000 UTC m=+316.815874930" watchObservedRunningTime="2025-11-25 07:20:52.668366246 +0000 UTC m=+316.836561977" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.671278 5043 scope.go:117] "RemoveContainer" containerID="c6f2b105b8bff9db0b9df22b029967d977683bc2a72bbdba17911089d9dc7169" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.672271 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r9cgm"] Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.677637 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-operators-r9cgm"] Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.687306 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-56hks"] Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.692893 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-56hks"] Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.699763 5043 scope.go:117] "RemoveContainer" containerID="efa46bedbe8860d98b41bdb9736ce761065fbdef3c75b96611a66e01ba4a0eea" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.699766 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9dxfx"] Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.704769 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9dxfx"] Nov 25 07:20:52 crc kubenswrapper[5043]: E1125 07:20:52.711763 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efa46bedbe8860d98b41bdb9736ce761065fbdef3c75b96611a66e01ba4a0eea\": container with ID starting with efa46bedbe8860d98b41bdb9736ce761065fbdef3c75b96611a66e01ba4a0eea not found: ID does not exist" containerID="efa46bedbe8860d98b41bdb9736ce761065fbdef3c75b96611a66e01ba4a0eea" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.711817 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efa46bedbe8860d98b41bdb9736ce761065fbdef3c75b96611a66e01ba4a0eea"} err="failed to get container status \"efa46bedbe8860d98b41bdb9736ce761065fbdef3c75b96611a66e01ba4a0eea\": rpc error: code = NotFound desc = could not find container \"efa46bedbe8860d98b41bdb9736ce761065fbdef3c75b96611a66e01ba4a0eea\": container with ID starting with efa46bedbe8860d98b41bdb9736ce761065fbdef3c75b96611a66e01ba4a0eea not found: ID does not exist" Nov 25 07:20:52 
crc kubenswrapper[5043]: I1125 07:20:52.711856 5043 scope.go:117] "RemoveContainer" containerID="2bf45a3bb3fdbe066c2b165d8aab721c4ff936c4bcbc340e8e46c04a2d29272e" Nov 25 07:20:52 crc kubenswrapper[5043]: E1125 07:20:52.712822 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bf45a3bb3fdbe066c2b165d8aab721c4ff936c4bcbc340e8e46c04a2d29272e\": container with ID starting with 2bf45a3bb3fdbe066c2b165d8aab721c4ff936c4bcbc340e8e46c04a2d29272e not found: ID does not exist" containerID="2bf45a3bb3fdbe066c2b165d8aab721c4ff936c4bcbc340e8e46c04a2d29272e" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.712859 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bf45a3bb3fdbe066c2b165d8aab721c4ff936c4bcbc340e8e46c04a2d29272e"} err="failed to get container status \"2bf45a3bb3fdbe066c2b165d8aab721c4ff936c4bcbc340e8e46c04a2d29272e\": rpc error: code = NotFound desc = could not find container \"2bf45a3bb3fdbe066c2b165d8aab721c4ff936c4bcbc340e8e46c04a2d29272e\": container with ID starting with 2bf45a3bb3fdbe066c2b165d8aab721c4ff936c4bcbc340e8e46c04a2d29272e not found: ID does not exist" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.712881 5043 scope.go:117] "RemoveContainer" containerID="c6f2b105b8bff9db0b9df22b029967d977683bc2a72bbdba17911089d9dc7169" Nov 25 07:20:52 crc kubenswrapper[5043]: E1125 07:20:52.714642 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6f2b105b8bff9db0b9df22b029967d977683bc2a72bbdba17911089d9dc7169\": container with ID starting with c6f2b105b8bff9db0b9df22b029967d977683bc2a72bbdba17911089d9dc7169 not found: ID does not exist" containerID="c6f2b105b8bff9db0b9df22b029967d977683bc2a72bbdba17911089d9dc7169" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.714693 5043 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c6f2b105b8bff9db0b9df22b029967d977683bc2a72bbdba17911089d9dc7169"} err="failed to get container status \"c6f2b105b8bff9db0b9df22b029967d977683bc2a72bbdba17911089d9dc7169\": rpc error: code = NotFound desc = could not find container \"c6f2b105b8bff9db0b9df22b029967d977683bc2a72bbdba17911089d9dc7169\": container with ID starting with c6f2b105b8bff9db0b9df22b029967d977683bc2a72bbdba17911089d9dc7169 not found: ID does not exist" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.714808 5043 scope.go:117] "RemoveContainer" containerID="c6d399424468eadc1b28b72c10e06d574b626b04ce9cb0d6ed556bc0e4848568" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.717119 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hqtnq"] Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.725726 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hqtnq"] Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.727205 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m6pf7"] Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.731520 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m6pf7"] Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.739016 5043 scope.go:117] "RemoveContainer" containerID="1ee6b489f22484a63182b78def2d8aacbb0e875d365495a9ac91336357f3dd16" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.754486 5043 scope.go:117] "RemoveContainer" containerID="63004c517244ba6424564ac790e1e5c6a5b5073187e863e639380213cbadddf0" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.767462 5043 scope.go:117] "RemoveContainer" containerID="c6d399424468eadc1b28b72c10e06d574b626b04ce9cb0d6ed556bc0e4848568" Nov 25 07:20:52 crc kubenswrapper[5043]: E1125 07:20:52.768138 5043 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6d399424468eadc1b28b72c10e06d574b626b04ce9cb0d6ed556bc0e4848568\": container with ID starting with c6d399424468eadc1b28b72c10e06d574b626b04ce9cb0d6ed556bc0e4848568 not found: ID does not exist" containerID="c6d399424468eadc1b28b72c10e06d574b626b04ce9cb0d6ed556bc0e4848568" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.768187 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6d399424468eadc1b28b72c10e06d574b626b04ce9cb0d6ed556bc0e4848568"} err="failed to get container status \"c6d399424468eadc1b28b72c10e06d574b626b04ce9cb0d6ed556bc0e4848568\": rpc error: code = NotFound desc = could not find container \"c6d399424468eadc1b28b72c10e06d574b626b04ce9cb0d6ed556bc0e4848568\": container with ID starting with c6d399424468eadc1b28b72c10e06d574b626b04ce9cb0d6ed556bc0e4848568 not found: ID does not exist" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.768215 5043 scope.go:117] "RemoveContainer" containerID="1ee6b489f22484a63182b78def2d8aacbb0e875d365495a9ac91336357f3dd16" Nov 25 07:20:52 crc kubenswrapper[5043]: E1125 07:20:52.768538 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ee6b489f22484a63182b78def2d8aacbb0e875d365495a9ac91336357f3dd16\": container with ID starting with 1ee6b489f22484a63182b78def2d8aacbb0e875d365495a9ac91336357f3dd16 not found: ID does not exist" containerID="1ee6b489f22484a63182b78def2d8aacbb0e875d365495a9ac91336357f3dd16" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.768843 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ee6b489f22484a63182b78def2d8aacbb0e875d365495a9ac91336357f3dd16"} err="failed to get container status \"1ee6b489f22484a63182b78def2d8aacbb0e875d365495a9ac91336357f3dd16\": rpc error: code = NotFound desc = could not find container 
\"1ee6b489f22484a63182b78def2d8aacbb0e875d365495a9ac91336357f3dd16\": container with ID starting with 1ee6b489f22484a63182b78def2d8aacbb0e875d365495a9ac91336357f3dd16 not found: ID does not exist" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.768876 5043 scope.go:117] "RemoveContainer" containerID="63004c517244ba6424564ac790e1e5c6a5b5073187e863e639380213cbadddf0" Nov 25 07:20:52 crc kubenswrapper[5043]: E1125 07:20:52.769138 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63004c517244ba6424564ac790e1e5c6a5b5073187e863e639380213cbadddf0\": container with ID starting with 63004c517244ba6424564ac790e1e5c6a5b5073187e863e639380213cbadddf0 not found: ID does not exist" containerID="63004c517244ba6424564ac790e1e5c6a5b5073187e863e639380213cbadddf0" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.769174 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63004c517244ba6424564ac790e1e5c6a5b5073187e863e639380213cbadddf0"} err="failed to get container status \"63004c517244ba6424564ac790e1e5c6a5b5073187e863e639380213cbadddf0\": rpc error: code = NotFound desc = could not find container \"63004c517244ba6424564ac790e1e5c6a5b5073187e863e639380213cbadddf0\": container with ID starting with 63004c517244ba6424564ac790e1e5c6a5b5073187e863e639380213cbadddf0 not found: ID does not exist" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.769192 5043 scope.go:117] "RemoveContainer" containerID="d81155e7e7f9c6490687001cfcd382ad897e2c211092eb94e34f9f86eda9b854" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.781577 5043 scope.go:117] "RemoveContainer" containerID="67584be40db7ee00ec683c59b9686bff28b8273329bc2868b7b66fba4f817423" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.795597 5043 scope.go:117] "RemoveContainer" containerID="53e8c8b663a0acc09cf5e854230794cdfe9e37acf99791d918e983a38ed33662" Nov 25 07:20:52 crc 
kubenswrapper[5043]: I1125 07:20:52.818670 5043 scope.go:117] "RemoveContainer" containerID="d81155e7e7f9c6490687001cfcd382ad897e2c211092eb94e34f9f86eda9b854" Nov 25 07:20:52 crc kubenswrapper[5043]: E1125 07:20:52.819088 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d81155e7e7f9c6490687001cfcd382ad897e2c211092eb94e34f9f86eda9b854\": container with ID starting with d81155e7e7f9c6490687001cfcd382ad897e2c211092eb94e34f9f86eda9b854 not found: ID does not exist" containerID="d81155e7e7f9c6490687001cfcd382ad897e2c211092eb94e34f9f86eda9b854" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.819121 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81155e7e7f9c6490687001cfcd382ad897e2c211092eb94e34f9f86eda9b854"} err="failed to get container status \"d81155e7e7f9c6490687001cfcd382ad897e2c211092eb94e34f9f86eda9b854\": rpc error: code = NotFound desc = could not find container \"d81155e7e7f9c6490687001cfcd382ad897e2c211092eb94e34f9f86eda9b854\": container with ID starting with d81155e7e7f9c6490687001cfcd382ad897e2c211092eb94e34f9f86eda9b854 not found: ID does not exist" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.819146 5043 scope.go:117] "RemoveContainer" containerID="67584be40db7ee00ec683c59b9686bff28b8273329bc2868b7b66fba4f817423" Nov 25 07:20:52 crc kubenswrapper[5043]: E1125 07:20:52.820984 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67584be40db7ee00ec683c59b9686bff28b8273329bc2868b7b66fba4f817423\": container with ID starting with 67584be40db7ee00ec683c59b9686bff28b8273329bc2868b7b66fba4f817423 not found: ID does not exist" containerID="67584be40db7ee00ec683c59b9686bff28b8273329bc2868b7b66fba4f817423" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.821025 5043 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"67584be40db7ee00ec683c59b9686bff28b8273329bc2868b7b66fba4f817423"} err="failed to get container status \"67584be40db7ee00ec683c59b9686bff28b8273329bc2868b7b66fba4f817423\": rpc error: code = NotFound desc = could not find container \"67584be40db7ee00ec683c59b9686bff28b8273329bc2868b7b66fba4f817423\": container with ID starting with 67584be40db7ee00ec683c59b9686bff28b8273329bc2868b7b66fba4f817423 not found: ID does not exist" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.821041 5043 scope.go:117] "RemoveContainer" containerID="53e8c8b663a0acc09cf5e854230794cdfe9e37acf99791d918e983a38ed33662" Nov 25 07:20:52 crc kubenswrapper[5043]: E1125 07:20:52.821696 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53e8c8b663a0acc09cf5e854230794cdfe9e37acf99791d918e983a38ed33662\": container with ID starting with 53e8c8b663a0acc09cf5e854230794cdfe9e37acf99791d918e983a38ed33662 not found: ID does not exist" containerID="53e8c8b663a0acc09cf5e854230794cdfe9e37acf99791d918e983a38ed33662" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.821721 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53e8c8b663a0acc09cf5e854230794cdfe9e37acf99791d918e983a38ed33662"} err="failed to get container status \"53e8c8b663a0acc09cf5e854230794cdfe9e37acf99791d918e983a38ed33662\": rpc error: code = NotFound desc = could not find container \"53e8c8b663a0acc09cf5e854230794cdfe9e37acf99791d918e983a38ed33662\": container with ID starting with 53e8c8b663a0acc09cf5e854230794cdfe9e37acf99791d918e983a38ed33662 not found: ID does not exist" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.821736 5043 scope.go:117] "RemoveContainer" containerID="42e478b69328ce7dcbf6addd63e741f8d46d100e70f1e84f478f68aa1a574c9f" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.851761 5043 scope.go:117] "RemoveContainer" 
containerID="e58b3b7bfbfb1d670ca7bfc5fe13f6b9c49bde9f2f5161f44c7a23dfeeb40656" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.863509 5043 scope.go:117] "RemoveContainer" containerID="783730da296c84abfe826e0292c226b327278a5191f6851bef956297e4970a0c" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.891748 5043 scope.go:117] "RemoveContainer" containerID="42e478b69328ce7dcbf6addd63e741f8d46d100e70f1e84f478f68aa1a574c9f" Nov 25 07:20:52 crc kubenswrapper[5043]: E1125 07:20:52.892685 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42e478b69328ce7dcbf6addd63e741f8d46d100e70f1e84f478f68aa1a574c9f\": container with ID starting with 42e478b69328ce7dcbf6addd63e741f8d46d100e70f1e84f478f68aa1a574c9f not found: ID does not exist" containerID="42e478b69328ce7dcbf6addd63e741f8d46d100e70f1e84f478f68aa1a574c9f" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.892730 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42e478b69328ce7dcbf6addd63e741f8d46d100e70f1e84f478f68aa1a574c9f"} err="failed to get container status \"42e478b69328ce7dcbf6addd63e741f8d46d100e70f1e84f478f68aa1a574c9f\": rpc error: code = NotFound desc = could not find container \"42e478b69328ce7dcbf6addd63e741f8d46d100e70f1e84f478f68aa1a574c9f\": container with ID starting with 42e478b69328ce7dcbf6addd63e741f8d46d100e70f1e84f478f68aa1a574c9f not found: ID does not exist" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.892761 5043 scope.go:117] "RemoveContainer" containerID="e58b3b7bfbfb1d670ca7bfc5fe13f6b9c49bde9f2f5161f44c7a23dfeeb40656" Nov 25 07:20:52 crc kubenswrapper[5043]: E1125 07:20:52.893114 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e58b3b7bfbfb1d670ca7bfc5fe13f6b9c49bde9f2f5161f44c7a23dfeeb40656\": container with ID starting with 
e58b3b7bfbfb1d670ca7bfc5fe13f6b9c49bde9f2f5161f44c7a23dfeeb40656 not found: ID does not exist" containerID="e58b3b7bfbfb1d670ca7bfc5fe13f6b9c49bde9f2f5161f44c7a23dfeeb40656" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.893153 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e58b3b7bfbfb1d670ca7bfc5fe13f6b9c49bde9f2f5161f44c7a23dfeeb40656"} err="failed to get container status \"e58b3b7bfbfb1d670ca7bfc5fe13f6b9c49bde9f2f5161f44c7a23dfeeb40656\": rpc error: code = NotFound desc = could not find container \"e58b3b7bfbfb1d670ca7bfc5fe13f6b9c49bde9f2f5161f44c7a23dfeeb40656\": container with ID starting with e58b3b7bfbfb1d670ca7bfc5fe13f6b9c49bde9f2f5161f44c7a23dfeeb40656 not found: ID does not exist" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.893202 5043 scope.go:117] "RemoveContainer" containerID="783730da296c84abfe826e0292c226b327278a5191f6851bef956297e4970a0c" Nov 25 07:20:52 crc kubenswrapper[5043]: E1125 07:20:52.893506 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"783730da296c84abfe826e0292c226b327278a5191f6851bef956297e4970a0c\": container with ID starting with 783730da296c84abfe826e0292c226b327278a5191f6851bef956297e4970a0c not found: ID does not exist" containerID="783730da296c84abfe826e0292c226b327278a5191f6851bef956297e4970a0c" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.893549 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"783730da296c84abfe826e0292c226b327278a5191f6851bef956297e4970a0c"} err="failed to get container status \"783730da296c84abfe826e0292c226b327278a5191f6851bef956297e4970a0c\": rpc error: code = NotFound desc = could not find container \"783730da296c84abfe826e0292c226b327278a5191f6851bef956297e4970a0c\": container with ID starting with 783730da296c84abfe826e0292c226b327278a5191f6851bef956297e4970a0c not found: ID does not 
exist" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.893587 5043 scope.go:117] "RemoveContainer" containerID="caaf42d6054a0a832bfd7a7eb6487a1897df6360659e66a2c4cf16f8cbab3bb6" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.906419 5043 scope.go:117] "RemoveContainer" containerID="caaf42d6054a0a832bfd7a7eb6487a1897df6360659e66a2c4cf16f8cbab3bb6" Nov 25 07:20:52 crc kubenswrapper[5043]: E1125 07:20:52.907430 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caaf42d6054a0a832bfd7a7eb6487a1897df6360659e66a2c4cf16f8cbab3bb6\": container with ID starting with caaf42d6054a0a832bfd7a7eb6487a1897df6360659e66a2c4cf16f8cbab3bb6 not found: ID does not exist" containerID="caaf42d6054a0a832bfd7a7eb6487a1897df6360659e66a2c4cf16f8cbab3bb6" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.907474 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caaf42d6054a0a832bfd7a7eb6487a1897df6360659e66a2c4cf16f8cbab3bb6"} err="failed to get container status \"caaf42d6054a0a832bfd7a7eb6487a1897df6360659e66a2c4cf16f8cbab3bb6\": rpc error: code = NotFound desc = could not find container \"caaf42d6054a0a832bfd7a7eb6487a1897df6360659e66a2c4cf16f8cbab3bb6\": container with ID starting with caaf42d6054a0a832bfd7a7eb6487a1897df6360659e66a2c4cf16f8cbab3bb6 not found: ID does not exist" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.970297 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3032faa6-654a-4e8f-b494-061c7de9688a" path="/var/lib/kubelet/pods/3032faa6-654a-4e8f-b494-061c7de9688a/volumes" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.970896 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b23210f-b31d-486b-9fe0-25c8b2ed2645" path="/var/lib/kubelet/pods/3b23210f-b31d-486b-9fe0-25c8b2ed2645/volumes" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.971451 5043 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f" path="/var/lib/kubelet/pods/ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f/volumes" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.972430 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc2101f3-91d7-43e8-b118-7966b9633c1e" path="/var/lib/kubelet/pods/bc2101f3-91d7-43e8-b118-7966b9633c1e/volumes" Nov 25 07:20:52 crc kubenswrapper[5043]: I1125 07:20:52.973071 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3a13dff-3c0c-4151-9514-42c40e8bc83f" path="/var/lib/kubelet/pods/f3a13dff-3c0c-4151-9514-42c40e8bc83f/volumes" Nov 25 07:20:53 crc kubenswrapper[5043]: I1125 07:20:53.649699 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-86pjb" Nov 25 07:20:53 crc kubenswrapper[5043]: I1125 07:20:53.897871 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v2t8s"] Nov 25 07:20:53 crc kubenswrapper[5043]: E1125 07:20:53.898044 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b23210f-b31d-486b-9fe0-25c8b2ed2645" containerName="extract-utilities" Nov 25 07:20:53 crc kubenswrapper[5043]: I1125 07:20:53.898055 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b23210f-b31d-486b-9fe0-25c8b2ed2645" containerName="extract-utilities" Nov 25 07:20:53 crc kubenswrapper[5043]: E1125 07:20:53.898068 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b23210f-b31d-486b-9fe0-25c8b2ed2645" containerName="extract-content" Nov 25 07:20:53 crc kubenswrapper[5043]: I1125 07:20:53.898074 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b23210f-b31d-486b-9fe0-25c8b2ed2645" containerName="extract-content" Nov 25 07:20:53 crc kubenswrapper[5043]: E1125 07:20:53.898081 5043 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3032faa6-654a-4e8f-b494-061c7de9688a" containerName="registry-server" Nov 25 07:20:53 crc kubenswrapper[5043]: I1125 07:20:53.898088 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="3032faa6-654a-4e8f-b494-061c7de9688a" containerName="registry-server" Nov 25 07:20:53 crc kubenswrapper[5043]: E1125 07:20:53.898095 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f" containerName="extract-content" Nov 25 07:20:53 crc kubenswrapper[5043]: I1125 07:20:53.898101 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f" containerName="extract-content" Nov 25 07:20:53 crc kubenswrapper[5043]: E1125 07:20:53.898109 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b23210f-b31d-486b-9fe0-25c8b2ed2645" containerName="registry-server" Nov 25 07:20:53 crc kubenswrapper[5043]: I1125 07:20:53.898116 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b23210f-b31d-486b-9fe0-25c8b2ed2645" containerName="registry-server" Nov 25 07:20:53 crc kubenswrapper[5043]: E1125 07:20:53.898123 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2101f3-91d7-43e8-b118-7966b9633c1e" containerName="extract-content" Nov 25 07:20:53 crc kubenswrapper[5043]: I1125 07:20:53.898128 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2101f3-91d7-43e8-b118-7966b9633c1e" containerName="extract-content" Nov 25 07:20:53 crc kubenswrapper[5043]: E1125 07:20:53.898138 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2101f3-91d7-43e8-b118-7966b9633c1e" containerName="registry-server" Nov 25 07:20:53 crc kubenswrapper[5043]: I1125 07:20:53.898143 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2101f3-91d7-43e8-b118-7966b9633c1e" containerName="registry-server" Nov 25 07:20:53 crc kubenswrapper[5043]: E1125 07:20:53.898151 5043 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f" containerName="registry-server" Nov 25 07:20:53 crc kubenswrapper[5043]: I1125 07:20:53.898157 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f" containerName="registry-server" Nov 25 07:20:53 crc kubenswrapper[5043]: E1125 07:20:53.898166 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3032faa6-654a-4e8f-b494-061c7de9688a" containerName="extract-utilities" Nov 25 07:20:53 crc kubenswrapper[5043]: I1125 07:20:53.898171 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="3032faa6-654a-4e8f-b494-061c7de9688a" containerName="extract-utilities" Nov 25 07:20:53 crc kubenswrapper[5043]: E1125 07:20:53.898179 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2101f3-91d7-43e8-b118-7966b9633c1e" containerName="extract-utilities" Nov 25 07:20:53 crc kubenswrapper[5043]: I1125 07:20:53.898184 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2101f3-91d7-43e8-b118-7966b9633c1e" containerName="extract-utilities" Nov 25 07:20:53 crc kubenswrapper[5043]: E1125 07:20:53.898192 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f" containerName="extract-utilities" Nov 25 07:20:53 crc kubenswrapper[5043]: I1125 07:20:53.898198 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f" containerName="extract-utilities" Nov 25 07:20:53 crc kubenswrapper[5043]: E1125 07:20:53.898206 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3032faa6-654a-4e8f-b494-061c7de9688a" containerName="extract-content" Nov 25 07:20:53 crc kubenswrapper[5043]: I1125 07:20:53.898211 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="3032faa6-654a-4e8f-b494-061c7de9688a" containerName="extract-content" Nov 25 07:20:53 crc kubenswrapper[5043]: E1125 07:20:53.898221 5043 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f3a13dff-3c0c-4151-9514-42c40e8bc83f" containerName="marketplace-operator" Nov 25 07:20:53 crc kubenswrapper[5043]: I1125 07:20:53.898226 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a13dff-3c0c-4151-9514-42c40e8bc83f" containerName="marketplace-operator" Nov 25 07:20:53 crc kubenswrapper[5043]: I1125 07:20:53.898340 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b23210f-b31d-486b-9fe0-25c8b2ed2645" containerName="registry-server" Nov 25 07:20:53 crc kubenswrapper[5043]: I1125 07:20:53.898350 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7e3173-e1d4-4d10-9bbc-b39cbcd2ec4f" containerName="registry-server" Nov 25 07:20:53 crc kubenswrapper[5043]: I1125 07:20:53.898361 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="3032faa6-654a-4e8f-b494-061c7de9688a" containerName="registry-server" Nov 25 07:20:53 crc kubenswrapper[5043]: I1125 07:20:53.898370 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc2101f3-91d7-43e8-b118-7966b9633c1e" containerName="registry-server" Nov 25 07:20:53 crc kubenswrapper[5043]: I1125 07:20:53.898377 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3a13dff-3c0c-4151-9514-42c40e8bc83f" containerName="marketplace-operator" Nov 25 07:20:53 crc kubenswrapper[5043]: I1125 07:20:53.898996 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2t8s" Nov 25 07:20:53 crc kubenswrapper[5043]: I1125 07:20:53.901010 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 07:20:53 crc kubenswrapper[5043]: I1125 07:20:53.943119 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2t8s"] Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.011224 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhqhv\" (UniqueName: \"kubernetes.io/projected/72da6310-6558-476b-8cb4-32e7b6983b67-kube-api-access-rhqhv\") pod \"redhat-marketplace-v2t8s\" (UID: \"72da6310-6558-476b-8cb4-32e7b6983b67\") " pod="openshift-marketplace/redhat-marketplace-v2t8s" Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.011273 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72da6310-6558-476b-8cb4-32e7b6983b67-catalog-content\") pod \"redhat-marketplace-v2t8s\" (UID: \"72da6310-6558-476b-8cb4-32e7b6983b67\") " pod="openshift-marketplace/redhat-marketplace-v2t8s" Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.011430 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72da6310-6558-476b-8cb4-32e7b6983b67-utilities\") pod \"redhat-marketplace-v2t8s\" (UID: \"72da6310-6558-476b-8cb4-32e7b6983b67\") " pod="openshift-marketplace/redhat-marketplace-v2t8s" Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.099973 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wtzwd"] Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.101429 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wtzwd" Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.103752 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.111898 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wtzwd"] Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.112583 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72da6310-6558-476b-8cb4-32e7b6983b67-utilities\") pod \"redhat-marketplace-v2t8s\" (UID: \"72da6310-6558-476b-8cb4-32e7b6983b67\") " pod="openshift-marketplace/redhat-marketplace-v2t8s" Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.112652 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhqhv\" (UniqueName: \"kubernetes.io/projected/72da6310-6558-476b-8cb4-32e7b6983b67-kube-api-access-rhqhv\") pod \"redhat-marketplace-v2t8s\" (UID: \"72da6310-6558-476b-8cb4-32e7b6983b67\") " pod="openshift-marketplace/redhat-marketplace-v2t8s" Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.112705 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72da6310-6558-476b-8cb4-32e7b6983b67-catalog-content\") pod \"redhat-marketplace-v2t8s\" (UID: \"72da6310-6558-476b-8cb4-32e7b6983b67\") " pod="openshift-marketplace/redhat-marketplace-v2t8s" Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.113100 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72da6310-6558-476b-8cb4-32e7b6983b67-utilities\") pod \"redhat-marketplace-v2t8s\" (UID: \"72da6310-6558-476b-8cb4-32e7b6983b67\") " 
pod="openshift-marketplace/redhat-marketplace-v2t8s" Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.113151 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72da6310-6558-476b-8cb4-32e7b6983b67-catalog-content\") pod \"redhat-marketplace-v2t8s\" (UID: \"72da6310-6558-476b-8cb4-32e7b6983b67\") " pod="openshift-marketplace/redhat-marketplace-v2t8s" Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.135957 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhqhv\" (UniqueName: \"kubernetes.io/projected/72da6310-6558-476b-8cb4-32e7b6983b67-kube-api-access-rhqhv\") pod \"redhat-marketplace-v2t8s\" (UID: \"72da6310-6558-476b-8cb4-32e7b6983b67\") " pod="openshift-marketplace/redhat-marketplace-v2t8s" Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.214126 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcsk9\" (UniqueName: \"kubernetes.io/projected/e8c0d1a1-f41e-485e-9e10-f897895ee5f4-kube-api-access-rcsk9\") pod \"certified-operators-wtzwd\" (UID: \"e8c0d1a1-f41e-485e-9e10-f897895ee5f4\") " pod="openshift-marketplace/certified-operators-wtzwd" Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.214264 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c0d1a1-f41e-485e-9e10-f897895ee5f4-catalog-content\") pod \"certified-operators-wtzwd\" (UID: \"e8c0d1a1-f41e-485e-9e10-f897895ee5f4\") " pod="openshift-marketplace/certified-operators-wtzwd" Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.214298 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c0d1a1-f41e-485e-9e10-f897895ee5f4-utilities\") pod \"certified-operators-wtzwd\" (UID: 
\"e8c0d1a1-f41e-485e-9e10-f897895ee5f4\") " pod="openshift-marketplace/certified-operators-wtzwd" Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.256460 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2t8s" Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.315263 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c0d1a1-f41e-485e-9e10-f897895ee5f4-catalog-content\") pod \"certified-operators-wtzwd\" (UID: \"e8c0d1a1-f41e-485e-9e10-f897895ee5f4\") " pod="openshift-marketplace/certified-operators-wtzwd" Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.315581 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c0d1a1-f41e-485e-9e10-f897895ee5f4-utilities\") pod \"certified-operators-wtzwd\" (UID: \"e8c0d1a1-f41e-485e-9e10-f897895ee5f4\") " pod="openshift-marketplace/certified-operators-wtzwd" Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.315636 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcsk9\" (UniqueName: \"kubernetes.io/projected/e8c0d1a1-f41e-485e-9e10-f897895ee5f4-kube-api-access-rcsk9\") pod \"certified-operators-wtzwd\" (UID: \"e8c0d1a1-f41e-485e-9e10-f897895ee5f4\") " pod="openshift-marketplace/certified-operators-wtzwd" Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.315988 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c0d1a1-f41e-485e-9e10-f897895ee5f4-catalog-content\") pod \"certified-operators-wtzwd\" (UID: \"e8c0d1a1-f41e-485e-9e10-f897895ee5f4\") " pod="openshift-marketplace/certified-operators-wtzwd" Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.316163 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c0d1a1-f41e-485e-9e10-f897895ee5f4-utilities\") pod \"certified-operators-wtzwd\" (UID: \"e8c0d1a1-f41e-485e-9e10-f897895ee5f4\") " pod="openshift-marketplace/certified-operators-wtzwd" Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.330771 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcsk9\" (UniqueName: \"kubernetes.io/projected/e8c0d1a1-f41e-485e-9e10-f897895ee5f4-kube-api-access-rcsk9\") pod \"certified-operators-wtzwd\" (UID: \"e8c0d1a1-f41e-485e-9e10-f897895ee5f4\") " pod="openshift-marketplace/certified-operators-wtzwd" Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.426291 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wtzwd" Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.589879 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wtzwd"] Nov 25 07:20:54 crc kubenswrapper[5043]: W1125 07:20:54.599104 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8c0d1a1_f41e_485e_9e10_f897895ee5f4.slice/crio-c20aabe0d1d0d80c5f004686061d2f88ca91998859a2a587ba9d61bff81b30a2 WatchSource:0}: Error finding container c20aabe0d1d0d80c5f004686061d2f88ca91998859a2a587ba9d61bff81b30a2: Status 404 returned error can't find the container with id c20aabe0d1d0d80c5f004686061d2f88ca91998859a2a587ba9d61bff81b30a2 Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.646400 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2t8s"] Nov 25 07:20:54 crc kubenswrapper[5043]: I1125 07:20:54.658198 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wtzwd" 
event={"ID":"e8c0d1a1-f41e-485e-9e10-f897895ee5f4","Type":"ContainerStarted","Data":"c20aabe0d1d0d80c5f004686061d2f88ca91998859a2a587ba9d61bff81b30a2"} Nov 25 07:20:55 crc kubenswrapper[5043]: I1125 07:20:55.664552 5043 generic.go:334] "Generic (PLEG): container finished" podID="72da6310-6558-476b-8cb4-32e7b6983b67" containerID="a7cc255f928c1c305a41c01af936ecec3ba3242a7ce0f88bccaed2016bee5bd2" exitCode=0 Nov 25 07:20:55 crc kubenswrapper[5043]: I1125 07:20:55.664881 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2t8s" event={"ID":"72da6310-6558-476b-8cb4-32e7b6983b67","Type":"ContainerDied","Data":"a7cc255f928c1c305a41c01af936ecec3ba3242a7ce0f88bccaed2016bee5bd2"} Nov 25 07:20:55 crc kubenswrapper[5043]: I1125 07:20:55.664939 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2t8s" event={"ID":"72da6310-6558-476b-8cb4-32e7b6983b67","Type":"ContainerStarted","Data":"d761f70fd9a0db5a32e1a1b60145bed63510fb76b06a37042cc3a2068e5831c1"} Nov 25 07:20:55 crc kubenswrapper[5043]: I1125 07:20:55.666624 5043 generic.go:334] "Generic (PLEG): container finished" podID="e8c0d1a1-f41e-485e-9e10-f897895ee5f4" containerID="209a46f188029b7886c33284f77dd52ea18eb1485b486071469004cd77fff744" exitCode=0 Nov 25 07:20:55 crc kubenswrapper[5043]: I1125 07:20:55.666650 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wtzwd" event={"ID":"e8c0d1a1-f41e-485e-9e10-f897895ee5f4","Type":"ContainerDied","Data":"209a46f188029b7886c33284f77dd52ea18eb1485b486071469004cd77fff744"} Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.315597 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tm5dw"] Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.319479 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tm5dw" Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.340435 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.342065 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tm5dw"] Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.342860 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ccsm\" (UniqueName: \"kubernetes.io/projected/ff7e436c-9335-4145-8c68-31dd3da7d4ed-kube-api-access-8ccsm\") pod \"redhat-operators-tm5dw\" (UID: \"ff7e436c-9335-4145-8c68-31dd3da7d4ed\") " pod="openshift-marketplace/redhat-operators-tm5dw" Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.342909 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff7e436c-9335-4145-8c68-31dd3da7d4ed-utilities\") pod \"redhat-operators-tm5dw\" (UID: \"ff7e436c-9335-4145-8c68-31dd3da7d4ed\") " pod="openshift-marketplace/redhat-operators-tm5dw" Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.342935 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff7e436c-9335-4145-8c68-31dd3da7d4ed-catalog-content\") pod \"redhat-operators-tm5dw\" (UID: \"ff7e436c-9335-4145-8c68-31dd3da7d4ed\") " pod="openshift-marketplace/redhat-operators-tm5dw" Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.444116 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff7e436c-9335-4145-8c68-31dd3da7d4ed-utilities\") pod \"redhat-operators-tm5dw\" (UID: \"ff7e436c-9335-4145-8c68-31dd3da7d4ed\") " 
pod="openshift-marketplace/redhat-operators-tm5dw" Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.444172 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff7e436c-9335-4145-8c68-31dd3da7d4ed-catalog-content\") pod \"redhat-operators-tm5dw\" (UID: \"ff7e436c-9335-4145-8c68-31dd3da7d4ed\") " pod="openshift-marketplace/redhat-operators-tm5dw" Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.444247 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ccsm\" (UniqueName: \"kubernetes.io/projected/ff7e436c-9335-4145-8c68-31dd3da7d4ed-kube-api-access-8ccsm\") pod \"redhat-operators-tm5dw\" (UID: \"ff7e436c-9335-4145-8c68-31dd3da7d4ed\") " pod="openshift-marketplace/redhat-operators-tm5dw" Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.445052 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff7e436c-9335-4145-8c68-31dd3da7d4ed-utilities\") pod \"redhat-operators-tm5dw\" (UID: \"ff7e436c-9335-4145-8c68-31dd3da7d4ed\") " pod="openshift-marketplace/redhat-operators-tm5dw" Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.445380 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff7e436c-9335-4145-8c68-31dd3da7d4ed-catalog-content\") pod \"redhat-operators-tm5dw\" (UID: \"ff7e436c-9335-4145-8c68-31dd3da7d4ed\") " pod="openshift-marketplace/redhat-operators-tm5dw" Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.461970 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ccsm\" (UniqueName: \"kubernetes.io/projected/ff7e436c-9335-4145-8c68-31dd3da7d4ed-kube-api-access-8ccsm\") pod \"redhat-operators-tm5dw\" (UID: \"ff7e436c-9335-4145-8c68-31dd3da7d4ed\") " pod="openshift-marketplace/redhat-operators-tm5dw" Nov 
25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.497909 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r4dn4"] Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.499404 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r4dn4" Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.501971 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.506218 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r4dn4"] Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.545187 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pglw\" (UniqueName: \"kubernetes.io/projected/b55d9c72-26da-44b8-9feb-8a130596e568-kube-api-access-8pglw\") pod \"community-operators-r4dn4\" (UID: \"b55d9c72-26da-44b8-9feb-8a130596e568\") " pod="openshift-marketplace/community-operators-r4dn4" Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.545260 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b55d9c72-26da-44b8-9feb-8a130596e568-catalog-content\") pod \"community-operators-r4dn4\" (UID: \"b55d9c72-26da-44b8-9feb-8a130596e568\") " pod="openshift-marketplace/community-operators-r4dn4" Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.545348 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b55d9c72-26da-44b8-9feb-8a130596e568-utilities\") pod \"community-operators-r4dn4\" (UID: \"b55d9c72-26da-44b8-9feb-8a130596e568\") " pod="openshift-marketplace/community-operators-r4dn4" Nov 25 07:20:56 crc 
kubenswrapper[5043]: I1125 07:20:56.646128 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pglw\" (UniqueName: \"kubernetes.io/projected/b55d9c72-26da-44b8-9feb-8a130596e568-kube-api-access-8pglw\") pod \"community-operators-r4dn4\" (UID: \"b55d9c72-26da-44b8-9feb-8a130596e568\") " pod="openshift-marketplace/community-operators-r4dn4" Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.646188 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b55d9c72-26da-44b8-9feb-8a130596e568-catalog-content\") pod \"community-operators-r4dn4\" (UID: \"b55d9c72-26da-44b8-9feb-8a130596e568\") " pod="openshift-marketplace/community-operators-r4dn4" Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.646212 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b55d9c72-26da-44b8-9feb-8a130596e568-utilities\") pod \"community-operators-r4dn4\" (UID: \"b55d9c72-26da-44b8-9feb-8a130596e568\") " pod="openshift-marketplace/community-operators-r4dn4" Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.646649 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b55d9c72-26da-44b8-9feb-8a130596e568-utilities\") pod \"community-operators-r4dn4\" (UID: \"b55d9c72-26da-44b8-9feb-8a130596e568\") " pod="openshift-marketplace/community-operators-r4dn4" Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.647182 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b55d9c72-26da-44b8-9feb-8a130596e568-catalog-content\") pod \"community-operators-r4dn4\" (UID: \"b55d9c72-26da-44b8-9feb-8a130596e568\") " pod="openshift-marketplace/community-operators-r4dn4" Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.653570 
5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tm5dw" Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.664149 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pglw\" (UniqueName: \"kubernetes.io/projected/b55d9c72-26da-44b8-9feb-8a130596e568-kube-api-access-8pglw\") pod \"community-operators-r4dn4\" (UID: \"b55d9c72-26da-44b8-9feb-8a130596e568\") " pod="openshift-marketplace/community-operators-r4dn4" Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.672482 5043 generic.go:334] "Generic (PLEG): container finished" podID="72da6310-6558-476b-8cb4-32e7b6983b67" containerID="a7306a7ddb6548c24d4cfdac3829f9e052d58c0497421ff057b0ac8a06af1d5f" exitCode=0 Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.672543 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2t8s" event={"ID":"72da6310-6558-476b-8cb4-32e7b6983b67","Type":"ContainerDied","Data":"a7306a7ddb6548c24d4cfdac3829f9e052d58c0497421ff057b0ac8a06af1d5f"} Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.673755 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wtzwd" event={"ID":"e8c0d1a1-f41e-485e-9e10-f897895ee5f4","Type":"ContainerStarted","Data":"d4efc7bef7a52e2d9fbb9d11da1939bd84b3a28bd0c56a3ebf8317801c94e159"} Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.813559 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r4dn4" Nov 25 07:20:56 crc kubenswrapper[5043]: I1125 07:20:56.836704 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tm5dw"] Nov 25 07:20:56 crc kubenswrapper[5043]: W1125 07:20:56.843231 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff7e436c_9335_4145_8c68_31dd3da7d4ed.slice/crio-37fb37e6b0e1436ce4f0d30cfc11a46b79e0ca90c38479bef3d755fccd18175f WatchSource:0}: Error finding container 37fb37e6b0e1436ce4f0d30cfc11a46b79e0ca90c38479bef3d755fccd18175f: Status 404 returned error can't find the container with id 37fb37e6b0e1436ce4f0d30cfc11a46b79e0ca90c38479bef3d755fccd18175f Nov 25 07:20:57 crc kubenswrapper[5043]: I1125 07:20:57.194496 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r4dn4"] Nov 25 07:20:57 crc kubenswrapper[5043]: W1125 07:20:57.201748 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb55d9c72_26da_44b8_9feb_8a130596e568.slice/crio-6261a54d37dbfcdd9a4a8b735af29df7af38383c763d9c61f47c3c767276f220 WatchSource:0}: Error finding container 6261a54d37dbfcdd9a4a8b735af29df7af38383c763d9c61f47c3c767276f220: Status 404 returned error can't find the container with id 6261a54d37dbfcdd9a4a8b735af29df7af38383c763d9c61f47c3c767276f220 Nov 25 07:20:57 crc kubenswrapper[5043]: I1125 07:20:57.682792 5043 generic.go:334] "Generic (PLEG): container finished" podID="ff7e436c-9335-4145-8c68-31dd3da7d4ed" containerID="0fa480040ab1dec5de9decf16ee5fa6cc487f5b9674d9b07da53d9ea9eaf1a4e" exitCode=0 Nov 25 07:20:57 crc kubenswrapper[5043]: I1125 07:20:57.682967 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tm5dw" 
event={"ID":"ff7e436c-9335-4145-8c68-31dd3da7d4ed","Type":"ContainerDied","Data":"0fa480040ab1dec5de9decf16ee5fa6cc487f5b9674d9b07da53d9ea9eaf1a4e"} Nov 25 07:20:57 crc kubenswrapper[5043]: I1125 07:20:57.683141 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tm5dw" event={"ID":"ff7e436c-9335-4145-8c68-31dd3da7d4ed","Type":"ContainerStarted","Data":"37fb37e6b0e1436ce4f0d30cfc11a46b79e0ca90c38479bef3d755fccd18175f"} Nov 25 07:20:57 crc kubenswrapper[5043]: I1125 07:20:57.686299 5043 generic.go:334] "Generic (PLEG): container finished" podID="e8c0d1a1-f41e-485e-9e10-f897895ee5f4" containerID="d4efc7bef7a52e2d9fbb9d11da1939bd84b3a28bd0c56a3ebf8317801c94e159" exitCode=0 Nov 25 07:20:57 crc kubenswrapper[5043]: I1125 07:20:57.686387 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wtzwd" event={"ID":"e8c0d1a1-f41e-485e-9e10-f897895ee5f4","Type":"ContainerDied","Data":"d4efc7bef7a52e2d9fbb9d11da1939bd84b3a28bd0c56a3ebf8317801c94e159"} Nov 25 07:20:57 crc kubenswrapper[5043]: I1125 07:20:57.690101 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2t8s" event={"ID":"72da6310-6558-476b-8cb4-32e7b6983b67","Type":"ContainerStarted","Data":"f873a2705260bf85b4ab9aa610fd907437c6affe9dd1ecfd3cc41274851c091b"} Nov 25 07:20:57 crc kubenswrapper[5043]: I1125 07:20:57.691594 5043 generic.go:334] "Generic (PLEG): container finished" podID="b55d9c72-26da-44b8-9feb-8a130596e568" containerID="68007c185cc00e04146122ec3d2471029b2c41e146671ae29663cdd1fbaa396f" exitCode=0 Nov 25 07:20:57 crc kubenswrapper[5043]: I1125 07:20:57.691634 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4dn4" event={"ID":"b55d9c72-26da-44b8-9feb-8a130596e568","Type":"ContainerDied","Data":"68007c185cc00e04146122ec3d2471029b2c41e146671ae29663cdd1fbaa396f"} Nov 25 07:20:57 crc kubenswrapper[5043]: I1125 
07:20:57.691648 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4dn4" event={"ID":"b55d9c72-26da-44b8-9feb-8a130596e568","Type":"ContainerStarted","Data":"6261a54d37dbfcdd9a4a8b735af29df7af38383c763d9c61f47c3c767276f220"} Nov 25 07:20:57 crc kubenswrapper[5043]: I1125 07:20:57.742833 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v2t8s" podStartSLOduration=3.326068212 podStartE2EDuration="4.742801443s" podCreationTimestamp="2025-11-25 07:20:53 +0000 UTC" firstStartedPulling="2025-11-25 07:20:55.666959839 +0000 UTC m=+319.835155560" lastFinishedPulling="2025-11-25 07:20:57.08369307 +0000 UTC m=+321.251888791" observedRunningTime="2025-11-25 07:20:57.738103746 +0000 UTC m=+321.906299467" watchObservedRunningTime="2025-11-25 07:20:57.742801443 +0000 UTC m=+321.910997204" Nov 25 07:20:58 crc kubenswrapper[5043]: I1125 07:20:58.697885 5043 generic.go:334] "Generic (PLEG): container finished" podID="b55d9c72-26da-44b8-9feb-8a130596e568" containerID="c3d407a6dd9def2a149ff57ec8b25a5b92e69a10ede2b62d4dd9c680fcd6768c" exitCode=0 Nov 25 07:20:58 crc kubenswrapper[5043]: I1125 07:20:58.698073 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4dn4" event={"ID":"b55d9c72-26da-44b8-9feb-8a130596e568","Type":"ContainerDied","Data":"c3d407a6dd9def2a149ff57ec8b25a5b92e69a10ede2b62d4dd9c680fcd6768c"} Nov 25 07:20:58 crc kubenswrapper[5043]: I1125 07:20:58.703962 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wtzwd" event={"ID":"e8c0d1a1-f41e-485e-9e10-f897895ee5f4","Type":"ContainerStarted","Data":"90bd7d88733ed5b97439021a66792258008dfbef5e71f3243e9986efabcdb2f2"} Nov 25 07:20:59 crc kubenswrapper[5043]: I1125 07:20:59.709757 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4dn4" 
event={"ID":"b55d9c72-26da-44b8-9feb-8a130596e568","Type":"ContainerStarted","Data":"21610d1b342225282b8db352fed7d8fe60dbee6006cb819a7b988c296c342209"} Nov 25 07:20:59 crc kubenswrapper[5043]: I1125 07:20:59.711145 5043 generic.go:334] "Generic (PLEG): container finished" podID="ff7e436c-9335-4145-8c68-31dd3da7d4ed" containerID="ae82ac2149d8b5b3fbdf55cfcdb67fe1e8779090827628ae69245a665e7f33ca" exitCode=0 Nov 25 07:20:59 crc kubenswrapper[5043]: I1125 07:20:59.711261 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tm5dw" event={"ID":"ff7e436c-9335-4145-8c68-31dd3da7d4ed","Type":"ContainerDied","Data":"ae82ac2149d8b5b3fbdf55cfcdb67fe1e8779090827628ae69245a665e7f33ca"} Nov 25 07:20:59 crc kubenswrapper[5043]: I1125 07:20:59.727920 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r4dn4" podStartSLOduration=2.342396432 podStartE2EDuration="3.727902402s" podCreationTimestamp="2025-11-25 07:20:56 +0000 UTC" firstStartedPulling="2025-11-25 07:20:57.698442798 +0000 UTC m=+321.866638549" lastFinishedPulling="2025-11-25 07:20:59.083948788 +0000 UTC m=+323.252144519" observedRunningTime="2025-11-25 07:20:59.727131311 +0000 UTC m=+323.895327042" watchObservedRunningTime="2025-11-25 07:20:59.727902402 +0000 UTC m=+323.896098123" Nov 25 07:20:59 crc kubenswrapper[5043]: I1125 07:20:59.728565 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wtzwd" podStartSLOduration=3.059410255 podStartE2EDuration="5.72856178s" podCreationTimestamp="2025-11-25 07:20:54 +0000 UTC" firstStartedPulling="2025-11-25 07:20:55.668533311 +0000 UTC m=+319.836729032" lastFinishedPulling="2025-11-25 07:20:58.337684836 +0000 UTC m=+322.505880557" observedRunningTime="2025-11-25 07:20:58.745901257 +0000 UTC m=+322.914096978" watchObservedRunningTime="2025-11-25 07:20:59.72856178 +0000 UTC m=+323.896757491" Nov 25 
07:21:01 crc kubenswrapper[5043]: I1125 07:21:01.727454 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tm5dw" event={"ID":"ff7e436c-9335-4145-8c68-31dd3da7d4ed","Type":"ContainerStarted","Data":"e7ea5db32aa3af510bf7de68e8cdb640027c642c39b5eaa62dc7ab7fa8916cd3"} Nov 25 07:21:04 crc kubenswrapper[5043]: I1125 07:21:04.257018 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v2t8s" Nov 25 07:21:04 crc kubenswrapper[5043]: I1125 07:21:04.258321 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v2t8s" Nov 25 07:21:04 crc kubenswrapper[5043]: I1125 07:21:04.310029 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v2t8s" Nov 25 07:21:04 crc kubenswrapper[5043]: I1125 07:21:04.325477 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tm5dw" podStartSLOduration=5.883255553 podStartE2EDuration="8.325450211s" podCreationTimestamp="2025-11-25 07:20:56 +0000 UTC" firstStartedPulling="2025-11-25 07:20:57.685005385 +0000 UTC m=+321.853201116" lastFinishedPulling="2025-11-25 07:21:00.127200053 +0000 UTC m=+324.295395774" observedRunningTime="2025-11-25 07:21:01.749308149 +0000 UTC m=+325.917503870" watchObservedRunningTime="2025-11-25 07:21:04.325450211 +0000 UTC m=+328.493645942" Nov 25 07:21:04 crc kubenswrapper[5043]: I1125 07:21:04.427498 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wtzwd" Nov 25 07:21:04 crc kubenswrapper[5043]: I1125 07:21:04.427566 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wtzwd" Nov 25 07:21:04 crc kubenswrapper[5043]: I1125 07:21:04.473349 5043 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/certified-operators-wtzwd" Nov 25 07:21:04 crc kubenswrapper[5043]: I1125 07:21:04.776577 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v2t8s" Nov 25 07:21:04 crc kubenswrapper[5043]: I1125 07:21:04.780385 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wtzwd" Nov 25 07:21:06 crc kubenswrapper[5043]: I1125 07:21:06.654662 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tm5dw" Nov 25 07:21:06 crc kubenswrapper[5043]: I1125 07:21:06.654968 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tm5dw" Nov 25 07:21:06 crc kubenswrapper[5043]: I1125 07:21:06.697947 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tm5dw" Nov 25 07:21:06 crc kubenswrapper[5043]: I1125 07:21:06.785728 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tm5dw" Nov 25 07:21:06 crc kubenswrapper[5043]: I1125 07:21:06.814665 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r4dn4" Nov 25 07:21:06 crc kubenswrapper[5043]: I1125 07:21:06.815380 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r4dn4" Nov 25 07:21:06 crc kubenswrapper[5043]: I1125 07:21:06.854735 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r4dn4" Nov 25 07:21:07 crc kubenswrapper[5043]: I1125 07:21:07.811393 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r4dn4" Nov 25 07:21:47 crc 
kubenswrapper[5043]: I1125 07:21:47.276942 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:21:47 crc kubenswrapper[5043]: I1125 07:21:47.277844 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:22:17 crc kubenswrapper[5043]: I1125 07:22:17.275679 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:22:17 crc kubenswrapper[5043]: I1125 07:22:17.276290 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:22:31 crc kubenswrapper[5043]: I1125 07:22:31.822019 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2srnk"] Nov 25 07:22:31 crc kubenswrapper[5043]: I1125 07:22:31.824311 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:31 crc kubenswrapper[5043]: I1125 07:22:31.837216 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2srnk"] Nov 25 07:22:31 crc kubenswrapper[5043]: I1125 07:22:31.934372 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/26ed7ca5-82af-4166-88d0-e0c8021a3a94-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2srnk\" (UID: \"26ed7ca5-82af-4166-88d0-e0c8021a3a94\") " pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:31 crc kubenswrapper[5043]: I1125 07:22:31.934675 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26ed7ca5-82af-4166-88d0-e0c8021a3a94-trusted-ca\") pod \"image-registry-66df7c8f76-2srnk\" (UID: \"26ed7ca5-82af-4166-88d0-e0c8021a3a94\") " pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:31 crc kubenswrapper[5043]: I1125 07:22:31.934778 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2srnk\" (UID: \"26ed7ca5-82af-4166-88d0-e0c8021a3a94\") " pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:31 crc kubenswrapper[5043]: I1125 07:22:31.934887 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/26ed7ca5-82af-4166-88d0-e0c8021a3a94-registry-tls\") pod \"image-registry-66df7c8f76-2srnk\" (UID: \"26ed7ca5-82af-4166-88d0-e0c8021a3a94\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:31 crc kubenswrapper[5043]: I1125 07:22:31.935007 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/26ed7ca5-82af-4166-88d0-e0c8021a3a94-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2srnk\" (UID: \"26ed7ca5-82af-4166-88d0-e0c8021a3a94\") " pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:31 crc kubenswrapper[5043]: I1125 07:22:31.935111 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/26ed7ca5-82af-4166-88d0-e0c8021a3a94-registry-certificates\") pod \"image-registry-66df7c8f76-2srnk\" (UID: \"26ed7ca5-82af-4166-88d0-e0c8021a3a94\") " pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:31 crc kubenswrapper[5043]: I1125 07:22:31.935323 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr2fd\" (UniqueName: \"kubernetes.io/projected/26ed7ca5-82af-4166-88d0-e0c8021a3a94-kube-api-access-xr2fd\") pod \"image-registry-66df7c8f76-2srnk\" (UID: \"26ed7ca5-82af-4166-88d0-e0c8021a3a94\") " pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:31 crc kubenswrapper[5043]: I1125 07:22:31.935374 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/26ed7ca5-82af-4166-88d0-e0c8021a3a94-bound-sa-token\") pod \"image-registry-66df7c8f76-2srnk\" (UID: \"26ed7ca5-82af-4166-88d0-e0c8021a3a94\") " pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:31 crc kubenswrapper[5043]: I1125 07:22:31.956591 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2srnk\" (UID: \"26ed7ca5-82af-4166-88d0-e0c8021a3a94\") " pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:32 crc kubenswrapper[5043]: I1125 07:22:32.037193 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/26ed7ca5-82af-4166-88d0-e0c8021a3a94-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2srnk\" (UID: \"26ed7ca5-82af-4166-88d0-e0c8021a3a94\") " pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:32 crc kubenswrapper[5043]: I1125 07:22:32.037344 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26ed7ca5-82af-4166-88d0-e0c8021a3a94-trusted-ca\") pod \"image-registry-66df7c8f76-2srnk\" (UID: \"26ed7ca5-82af-4166-88d0-e0c8021a3a94\") " pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:32 crc kubenswrapper[5043]: I1125 07:22:32.037409 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/26ed7ca5-82af-4166-88d0-e0c8021a3a94-registry-tls\") pod \"image-registry-66df7c8f76-2srnk\" (UID: \"26ed7ca5-82af-4166-88d0-e0c8021a3a94\") " pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:32 crc kubenswrapper[5043]: I1125 07:22:32.037444 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/26ed7ca5-82af-4166-88d0-e0c8021a3a94-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2srnk\" (UID: \"26ed7ca5-82af-4166-88d0-e0c8021a3a94\") " pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:32 crc kubenswrapper[5043]: I1125 07:22:32.037477 5043 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/26ed7ca5-82af-4166-88d0-e0c8021a3a94-registry-certificates\") pod \"image-registry-66df7c8f76-2srnk\" (UID: \"26ed7ca5-82af-4166-88d0-e0c8021a3a94\") " pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:32 crc kubenswrapper[5043]: I1125 07:22:32.037534 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr2fd\" (UniqueName: \"kubernetes.io/projected/26ed7ca5-82af-4166-88d0-e0c8021a3a94-kube-api-access-xr2fd\") pod \"image-registry-66df7c8f76-2srnk\" (UID: \"26ed7ca5-82af-4166-88d0-e0c8021a3a94\") " pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:32 crc kubenswrapper[5043]: I1125 07:22:32.037572 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/26ed7ca5-82af-4166-88d0-e0c8021a3a94-bound-sa-token\") pod \"image-registry-66df7c8f76-2srnk\" (UID: \"26ed7ca5-82af-4166-88d0-e0c8021a3a94\") " pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:32 crc kubenswrapper[5043]: I1125 07:22:32.038844 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/26ed7ca5-82af-4166-88d0-e0c8021a3a94-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2srnk\" (UID: \"26ed7ca5-82af-4166-88d0-e0c8021a3a94\") " pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:32 crc kubenswrapper[5043]: I1125 07:22:32.039146 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26ed7ca5-82af-4166-88d0-e0c8021a3a94-trusted-ca\") pod \"image-registry-66df7c8f76-2srnk\" (UID: \"26ed7ca5-82af-4166-88d0-e0c8021a3a94\") " pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 
25 07:22:32 crc kubenswrapper[5043]: I1125 07:22:32.040278 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/26ed7ca5-82af-4166-88d0-e0c8021a3a94-registry-certificates\") pod \"image-registry-66df7c8f76-2srnk\" (UID: \"26ed7ca5-82af-4166-88d0-e0c8021a3a94\") " pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:32 crc kubenswrapper[5043]: I1125 07:22:32.049309 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/26ed7ca5-82af-4166-88d0-e0c8021a3a94-registry-tls\") pod \"image-registry-66df7c8f76-2srnk\" (UID: \"26ed7ca5-82af-4166-88d0-e0c8021a3a94\") " pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:32 crc kubenswrapper[5043]: I1125 07:22:32.050146 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/26ed7ca5-82af-4166-88d0-e0c8021a3a94-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2srnk\" (UID: \"26ed7ca5-82af-4166-88d0-e0c8021a3a94\") " pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:32 crc kubenswrapper[5043]: I1125 07:22:32.055685 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr2fd\" (UniqueName: \"kubernetes.io/projected/26ed7ca5-82af-4166-88d0-e0c8021a3a94-kube-api-access-xr2fd\") pod \"image-registry-66df7c8f76-2srnk\" (UID: \"26ed7ca5-82af-4166-88d0-e0c8021a3a94\") " pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:32 crc kubenswrapper[5043]: I1125 07:22:32.058764 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/26ed7ca5-82af-4166-88d0-e0c8021a3a94-bound-sa-token\") pod \"image-registry-66df7c8f76-2srnk\" (UID: \"26ed7ca5-82af-4166-88d0-e0c8021a3a94\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:32 crc kubenswrapper[5043]: I1125 07:22:32.140893 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:32 crc kubenswrapper[5043]: I1125 07:22:32.550536 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2srnk"] Nov 25 07:22:33 crc kubenswrapper[5043]: I1125 07:22:33.294947 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" event={"ID":"26ed7ca5-82af-4166-88d0-e0c8021a3a94","Type":"ContainerStarted","Data":"2e22ae0311ebb3c75bfedd43d68388d7f916250579d6e9b18c8c254eefae1aa2"} Nov 25 07:22:33 crc kubenswrapper[5043]: I1125 07:22:33.295297 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" event={"ID":"26ed7ca5-82af-4166-88d0-e0c8021a3a94","Type":"ContainerStarted","Data":"51e21f6993f18801a196d7b416a1caaf4fe4246d5556253ce7caed40dabd6ff1"} Nov 25 07:22:33 crc kubenswrapper[5043]: I1125 07:22:33.295317 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:47 crc kubenswrapper[5043]: I1125 07:22:47.276415 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:22:47 crc kubenswrapper[5043]: I1125 07:22:47.277092 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:22:47 crc kubenswrapper[5043]: I1125 07:22:47.277159 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 07:22:47 crc kubenswrapper[5043]: I1125 07:22:47.278873 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d7a9fbecb16e1fce85f482605fc100165adf10eff85021444c5351acd6dfb457"} pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 07:22:47 crc kubenswrapper[5043]: I1125 07:22:47.279177 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://d7a9fbecb16e1fce85f482605fc100165adf10eff85021444c5351acd6dfb457" gracePeriod=600 Nov 25 07:22:48 crc kubenswrapper[5043]: I1125 07:22:48.373400 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="d7a9fbecb16e1fce85f482605fc100165adf10eff85021444c5351acd6dfb457" exitCode=0 Nov 25 07:22:48 crc kubenswrapper[5043]: I1125 07:22:48.373472 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"d7a9fbecb16e1fce85f482605fc100165adf10eff85021444c5351acd6dfb457"} Nov 25 07:22:48 crc kubenswrapper[5043]: I1125 07:22:48.373758 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" 
event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"b1bc33196e4e3b55ff5393f391b973ffa4f1cd291219b6b6ac8f14aff8f26dd4"} Nov 25 07:22:48 crc kubenswrapper[5043]: I1125 07:22:48.373777 5043 scope.go:117] "RemoveContainer" containerID="ce73e562772077e60ee4787a9dbdd3702e7ca7f0d43732e3dd92245a58bc4fdf" Nov 25 07:22:48 crc kubenswrapper[5043]: I1125 07:22:48.392803 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" podStartSLOduration=17.392766267 podStartE2EDuration="17.392766267s" podCreationTimestamp="2025-11-25 07:22:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:22:33.328924874 +0000 UTC m=+417.497120595" watchObservedRunningTime="2025-11-25 07:22:48.392766267 +0000 UTC m=+432.560961988" Nov 25 07:22:52 crc kubenswrapper[5043]: I1125 07:22:52.151957 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-2srnk" Nov 25 07:22:52 crc kubenswrapper[5043]: I1125 07:22:52.250861 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9jj8v"] Nov 25 07:23:17 crc kubenswrapper[5043]: I1125 07:23:17.309014 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" podUID="56d9ce8c-65f4-4482-860f-a7009c96e356" containerName="registry" containerID="cri-o://70ce26b185999d12a68450b62d1f21bb2dff3ffd001f976d7916d026d7899360" gracePeriod=30 Nov 25 07:23:17 crc kubenswrapper[5043]: I1125 07:23:17.569172 5043 generic.go:334] "Generic (PLEG): container finished" podID="56d9ce8c-65f4-4482-860f-a7009c96e356" containerID="70ce26b185999d12a68450b62d1f21bb2dff3ffd001f976d7916d026d7899360" exitCode=0 Nov 25 07:23:17 crc kubenswrapper[5043]: I1125 07:23:17.569298 5043 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" event={"ID":"56d9ce8c-65f4-4482-860f-a7009c96e356","Type":"ContainerDied","Data":"70ce26b185999d12a68450b62d1f21bb2dff3ffd001f976d7916d026d7899360"} Nov 25 07:23:17 crc kubenswrapper[5043]: I1125 07:23:17.750597 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:23:17 crc kubenswrapper[5043]: I1125 07:23:17.923033 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/56d9ce8c-65f4-4482-860f-a7009c96e356-installation-pull-secrets\") pod \"56d9ce8c-65f4-4482-860f-a7009c96e356\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " Nov 25 07:23:17 crc kubenswrapper[5043]: I1125 07:23:17.923184 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56d9ce8c-65f4-4482-860f-a7009c96e356-bound-sa-token\") pod \"56d9ce8c-65f4-4482-860f-a7009c96e356\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " Nov 25 07:23:17 crc kubenswrapper[5043]: I1125 07:23:17.923323 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/56d9ce8c-65f4-4482-860f-a7009c96e356-registry-tls\") pod \"56d9ce8c-65f4-4482-860f-a7009c96e356\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " Nov 25 07:23:17 crc kubenswrapper[5043]: I1125 07:23:17.927123 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56d9ce8c-65f4-4482-860f-a7009c96e356-trusted-ca\") pod \"56d9ce8c-65f4-4482-860f-a7009c96e356\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " Nov 25 07:23:17 crc kubenswrapper[5043]: I1125 07:23:17.927378 5043 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"56d9ce8c-65f4-4482-860f-a7009c96e356\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " Nov 25 07:23:17 crc kubenswrapper[5043]: I1125 07:23:17.927449 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdp74\" (UniqueName: \"kubernetes.io/projected/56d9ce8c-65f4-4482-860f-a7009c96e356-kube-api-access-bdp74\") pod \"56d9ce8c-65f4-4482-860f-a7009c96e356\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " Nov 25 07:23:17 crc kubenswrapper[5043]: I1125 07:23:17.927513 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/56d9ce8c-65f4-4482-860f-a7009c96e356-ca-trust-extracted\") pod \"56d9ce8c-65f4-4482-860f-a7009c96e356\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " Nov 25 07:23:17 crc kubenswrapper[5043]: I1125 07:23:17.927552 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/56d9ce8c-65f4-4482-860f-a7009c96e356-registry-certificates\") pod \"56d9ce8c-65f4-4482-860f-a7009c96e356\" (UID: \"56d9ce8c-65f4-4482-860f-a7009c96e356\") " Nov 25 07:23:17 crc kubenswrapper[5043]: I1125 07:23:17.928495 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d9ce8c-65f4-4482-860f-a7009c96e356-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "56d9ce8c-65f4-4482-860f-a7009c96e356" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:23:17 crc kubenswrapper[5043]: I1125 07:23:17.929646 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d9ce8c-65f4-4482-860f-a7009c96e356-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "56d9ce8c-65f4-4482-860f-a7009c96e356" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:23:17 crc kubenswrapper[5043]: I1125 07:23:17.933760 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d9ce8c-65f4-4482-860f-a7009c96e356-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "56d9ce8c-65f4-4482-860f-a7009c96e356" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:23:17 crc kubenswrapper[5043]: I1125 07:23:17.934750 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d9ce8c-65f4-4482-860f-a7009c96e356-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "56d9ce8c-65f4-4482-860f-a7009c96e356" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:23:17 crc kubenswrapper[5043]: I1125 07:23:17.937182 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d9ce8c-65f4-4482-860f-a7009c96e356-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "56d9ce8c-65f4-4482-860f-a7009c96e356" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:23:17 crc kubenswrapper[5043]: I1125 07:23:17.937583 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d9ce8c-65f4-4482-860f-a7009c96e356-kube-api-access-bdp74" (OuterVolumeSpecName: "kube-api-access-bdp74") pod "56d9ce8c-65f4-4482-860f-a7009c96e356" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356"). InnerVolumeSpecName "kube-api-access-bdp74". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:23:17 crc kubenswrapper[5043]: I1125 07:23:17.944486 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "56d9ce8c-65f4-4482-860f-a7009c96e356" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 07:23:17 crc kubenswrapper[5043]: I1125 07:23:17.951843 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56d9ce8c-65f4-4482-860f-a7009c96e356-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "56d9ce8c-65f4-4482-860f-a7009c96e356" (UID: "56d9ce8c-65f4-4482-860f-a7009c96e356"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:23:18 crc kubenswrapper[5043]: I1125 07:23:18.028999 5043 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/56d9ce8c-65f4-4482-860f-a7009c96e356-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 25 07:23:18 crc kubenswrapper[5043]: I1125 07:23:18.029039 5043 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/56d9ce8c-65f4-4482-860f-a7009c96e356-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 25 07:23:18 crc kubenswrapper[5043]: I1125 07:23:18.029777 5043 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/56d9ce8c-65f4-4482-860f-a7009c96e356-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 25 07:23:18 crc kubenswrapper[5043]: I1125 07:23:18.029816 5043 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56d9ce8c-65f4-4482-860f-a7009c96e356-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 07:23:18 crc kubenswrapper[5043]: I1125 07:23:18.029833 5043 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/56d9ce8c-65f4-4482-860f-a7009c96e356-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 25 07:23:18 crc kubenswrapper[5043]: I1125 07:23:18.029850 5043 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56d9ce8c-65f4-4482-860f-a7009c96e356-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 07:23:18 crc kubenswrapper[5043]: I1125 07:23:18.029867 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdp74\" (UniqueName: \"kubernetes.io/projected/56d9ce8c-65f4-4482-860f-a7009c96e356-kube-api-access-bdp74\") on node \"crc\" DevicePath \"\"" Nov 25 07:23:18 crc 
kubenswrapper[5043]: I1125 07:23:18.576114 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" event={"ID":"56d9ce8c-65f4-4482-860f-a7009c96e356","Type":"ContainerDied","Data":"653f9fefc7466b200299197a43f58438b80638f8df660e63ad01d3b69975be90"} Nov 25 07:23:18 crc kubenswrapper[5043]: I1125 07:23:18.576173 5043 scope.go:117] "RemoveContainer" containerID="70ce26b185999d12a68450b62d1f21bb2dff3ffd001f976d7916d026d7899360" Nov 25 07:23:18 crc kubenswrapper[5043]: I1125 07:23:18.576289 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9jj8v" Nov 25 07:23:18 crc kubenswrapper[5043]: I1125 07:23:18.623935 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9jj8v"] Nov 25 07:23:18 crc kubenswrapper[5043]: I1125 07:23:18.630282 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9jj8v"] Nov 25 07:23:18 crc kubenswrapper[5043]: I1125 07:23:18.976143 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d9ce8c-65f4-4482-860f-a7009c96e356" path="/var/lib/kubelet/pods/56d9ce8c-65f4-4482-860f-a7009c96e356/volumes" Nov 25 07:24:47 crc kubenswrapper[5043]: I1125 07:24:47.275808 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:24:47 crc kubenswrapper[5043]: I1125 07:24:47.276474 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:25:17 crc kubenswrapper[5043]: I1125 07:25:17.275788 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:25:17 crc kubenswrapper[5043]: I1125 07:25:17.276485 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:25:47 crc kubenswrapper[5043]: I1125 07:25:47.275791 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:25:47 crc kubenswrapper[5043]: I1125 07:25:47.276250 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:25:47 crc kubenswrapper[5043]: I1125 07:25:47.276310 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 07:25:47 crc kubenswrapper[5043]: I1125 07:25:47.277022 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"b1bc33196e4e3b55ff5393f391b973ffa4f1cd291219b6b6ac8f14aff8f26dd4"} pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 07:25:47 crc kubenswrapper[5043]: I1125 07:25:47.277108 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://b1bc33196e4e3b55ff5393f391b973ffa4f1cd291219b6b6ac8f14aff8f26dd4" gracePeriod=600 Nov 25 07:25:47 crc kubenswrapper[5043]: I1125 07:25:47.536647 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="b1bc33196e4e3b55ff5393f391b973ffa4f1cd291219b6b6ac8f14aff8f26dd4" exitCode=0 Nov 25 07:25:47 crc kubenswrapper[5043]: I1125 07:25:47.536729 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"b1bc33196e4e3b55ff5393f391b973ffa4f1cd291219b6b6ac8f14aff8f26dd4"} Nov 25 07:25:47 crc kubenswrapper[5043]: I1125 07:25:47.537300 5043 scope.go:117] "RemoveContainer" containerID="d7a9fbecb16e1fce85f482605fc100165adf10eff85021444c5351acd6dfb457" Nov 25 07:25:48 crc kubenswrapper[5043]: I1125 07:25:48.544214 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"775db4b9aa6c61b7085bc9862b445a04e41b9906b056014fba7881c8d0080c48"} Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.309485 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-9lxcm"] Nov 25 07:26:23 crc kubenswrapper[5043]: E1125 
07:26:23.310287 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d9ce8c-65f4-4482-860f-a7009c96e356" containerName="registry" Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.310306 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d9ce8c-65f4-4482-860f-a7009c96e356" containerName="registry" Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.310446 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d9ce8c-65f4-4482-860f-a7009c96e356" containerName="registry" Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.310930 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-9lxcm" Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.312644 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.312706 5043 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-pjjr7" Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.312887 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.315218 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-42sd8"] Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.315836 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-42sd8" Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.317203 5043 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-glst7" Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.324025 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjt5l\" (UniqueName: \"kubernetes.io/projected/94518a18-995b-490b-8099-917d5e510ad0-kube-api-access-fjt5l\") pod \"cert-manager-cainjector-7f985d654d-9lxcm\" (UID: \"94518a18-995b-490b-8099-917d5e510ad0\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-9lxcm" Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.325167 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-9lxcm"] Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.337689 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-42sd8"] Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.343265 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-qn5hq"] Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.343888 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-qn5hq" Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.348661 5043 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-7wqcg" Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.373276 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-qn5hq"] Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.424662 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsg5p\" (UniqueName: \"kubernetes.io/projected/00a5ef16-fb0d-4b68-b3aa-92411430aebd-kube-api-access-jsg5p\") pod \"cert-manager-5b446d88c5-42sd8\" (UID: \"00a5ef16-fb0d-4b68-b3aa-92411430aebd\") " pod="cert-manager/cert-manager-5b446d88c5-42sd8" Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.424719 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjt5l\" (UniqueName: \"kubernetes.io/projected/94518a18-995b-490b-8099-917d5e510ad0-kube-api-access-fjt5l\") pod \"cert-manager-cainjector-7f985d654d-9lxcm\" (UID: \"94518a18-995b-490b-8099-917d5e510ad0\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-9lxcm" Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.424781 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7jqp\" (UniqueName: \"kubernetes.io/projected/6c845b4b-10ec-41bc-8482-14da0da21a03-kube-api-access-g7jqp\") pod \"cert-manager-webhook-5655c58dd6-qn5hq\" (UID: \"6c845b4b-10ec-41bc-8482-14da0da21a03\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-qn5hq" Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.444700 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjt5l\" (UniqueName: 
\"kubernetes.io/projected/94518a18-995b-490b-8099-917d5e510ad0-kube-api-access-fjt5l\") pod \"cert-manager-cainjector-7f985d654d-9lxcm\" (UID: \"94518a18-995b-490b-8099-917d5e510ad0\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-9lxcm" Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.525994 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7jqp\" (UniqueName: \"kubernetes.io/projected/6c845b4b-10ec-41bc-8482-14da0da21a03-kube-api-access-g7jqp\") pod \"cert-manager-webhook-5655c58dd6-qn5hq\" (UID: \"6c845b4b-10ec-41bc-8482-14da0da21a03\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-qn5hq" Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.526056 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsg5p\" (UniqueName: \"kubernetes.io/projected/00a5ef16-fb0d-4b68-b3aa-92411430aebd-kube-api-access-jsg5p\") pod \"cert-manager-5b446d88c5-42sd8\" (UID: \"00a5ef16-fb0d-4b68-b3aa-92411430aebd\") " pod="cert-manager/cert-manager-5b446d88c5-42sd8" Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.550330 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsg5p\" (UniqueName: \"kubernetes.io/projected/00a5ef16-fb0d-4b68-b3aa-92411430aebd-kube-api-access-jsg5p\") pod \"cert-manager-5b446d88c5-42sd8\" (UID: \"00a5ef16-fb0d-4b68-b3aa-92411430aebd\") " pod="cert-manager/cert-manager-5b446d88c5-42sd8" Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.550664 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7jqp\" (UniqueName: \"kubernetes.io/projected/6c845b4b-10ec-41bc-8482-14da0da21a03-kube-api-access-g7jqp\") pod \"cert-manager-webhook-5655c58dd6-qn5hq\" (UID: \"6c845b4b-10ec-41bc-8482-14da0da21a03\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-qn5hq" Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.632501 5043 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-9lxcm" Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.642061 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-42sd8" Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.659594 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-qn5hq" Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.845907 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-42sd8"] Nov 25 07:26:23 crc kubenswrapper[5043]: I1125 07:26:23.854068 5043 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 07:26:24 crc kubenswrapper[5043]: I1125 07:26:24.082472 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-qn5hq"] Nov 25 07:26:24 crc kubenswrapper[5043]: W1125 07:26:24.090065 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c845b4b_10ec_41bc_8482_14da0da21a03.slice/crio-32c0615d42eb7549bb32e1d6e55587c33934e3d22b8d5a2b6fa5f2f38792a329 WatchSource:0}: Error finding container 32c0615d42eb7549bb32e1d6e55587c33934e3d22b8d5a2b6fa5f2f38792a329: Status 404 returned error can't find the container with id 32c0615d42eb7549bb32e1d6e55587c33934e3d22b8d5a2b6fa5f2f38792a329 Nov 25 07:26:24 crc kubenswrapper[5043]: I1125 07:26:24.105035 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-9lxcm"] Nov 25 07:26:24 crc kubenswrapper[5043]: W1125 07:26:24.112985 5043 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94518a18_995b_490b_8099_917d5e510ad0.slice/crio-19bafb404cf7545c2902acd7285b21b36d6b61cf41e42cd37c0295339140874f WatchSource:0}: Error finding container 19bafb404cf7545c2902acd7285b21b36d6b61cf41e42cd37c0295339140874f: Status 404 returned error can't find the container with id 19bafb404cf7545c2902acd7285b21b36d6b61cf41e42cd37c0295339140874f Nov 25 07:26:24 crc kubenswrapper[5043]: I1125 07:26:24.800271 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-qn5hq" event={"ID":"6c845b4b-10ec-41bc-8482-14da0da21a03","Type":"ContainerStarted","Data":"32c0615d42eb7549bb32e1d6e55587c33934e3d22b8d5a2b6fa5f2f38792a329"} Nov 25 07:26:24 crc kubenswrapper[5043]: I1125 07:26:24.801990 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-9lxcm" event={"ID":"94518a18-995b-490b-8099-917d5e510ad0","Type":"ContainerStarted","Data":"19bafb404cf7545c2902acd7285b21b36d6b61cf41e42cd37c0295339140874f"} Nov 25 07:26:24 crc kubenswrapper[5043]: I1125 07:26:24.803487 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-42sd8" event={"ID":"00a5ef16-fb0d-4b68-b3aa-92411430aebd","Type":"ContainerStarted","Data":"5b008d01aa2fc95c5fe47c760c001466757cc04af8c2632a3957adeea1546991"} Nov 25 07:26:27 crc kubenswrapper[5043]: I1125 07:26:27.823015 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-qn5hq" event={"ID":"6c845b4b-10ec-41bc-8482-14da0da21a03","Type":"ContainerStarted","Data":"3345c91689bfcc156f87d1c9d3a55ac1fcc67814eb39456f5021d94f5cce416b"} Nov 25 07:26:27 crc kubenswrapper[5043]: I1125 07:26:27.823476 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-qn5hq" Nov 25 07:26:27 crc kubenswrapper[5043]: I1125 07:26:27.825014 5043 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-9lxcm" event={"ID":"94518a18-995b-490b-8099-917d5e510ad0","Type":"ContainerStarted","Data":"e6deaf803ff496f689fd7d56d692aacd97607b753ff10de2c9d5e526424c2900"} Nov 25 07:26:27 crc kubenswrapper[5043]: I1125 07:26:27.826756 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-42sd8" event={"ID":"00a5ef16-fb0d-4b68-b3aa-92411430aebd","Type":"ContainerStarted","Data":"061f88a11c46bbd715cc18fe2da0dadda701d60947b2118bf404869159d16bf1"} Nov 25 07:26:27 crc kubenswrapper[5043]: I1125 07:26:27.838042 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-qn5hq" podStartSLOduration=1.669987156 podStartE2EDuration="4.83802141s" podCreationTimestamp="2025-11-25 07:26:23 +0000 UTC" firstStartedPulling="2025-11-25 07:26:24.093091005 +0000 UTC m=+648.261286726" lastFinishedPulling="2025-11-25 07:26:27.261125259 +0000 UTC m=+651.429320980" observedRunningTime="2025-11-25 07:26:27.835726248 +0000 UTC m=+652.003921989" watchObservedRunningTime="2025-11-25 07:26:27.83802141 +0000 UTC m=+652.006217131" Nov 25 07:26:27 crc kubenswrapper[5043]: I1125 07:26:27.848766 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-42sd8" podStartSLOduration=1.441999258 podStartE2EDuration="4.848748087s" podCreationTimestamp="2025-11-25 07:26:23 +0000 UTC" firstStartedPulling="2025-11-25 07:26:23.853824585 +0000 UTC m=+648.022020306" lastFinishedPulling="2025-11-25 07:26:27.260573404 +0000 UTC m=+651.428769135" observedRunningTime="2025-11-25 07:26:27.847283998 +0000 UTC m=+652.015479719" watchObservedRunningTime="2025-11-25 07:26:27.848748087 +0000 UTC m=+652.016943808" Nov 25 07:26:33 crc kubenswrapper[5043]: I1125 07:26:33.662731 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="cert-manager/cert-manager-webhook-5655c58dd6-qn5hq" Nov 25 07:26:33 crc kubenswrapper[5043]: I1125 07:26:33.680714 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-9lxcm" podStartSLOduration=7.445849389 podStartE2EDuration="10.680684663s" podCreationTimestamp="2025-11-25 07:26:23 +0000 UTC" firstStartedPulling="2025-11-25 07:26:24.115014183 +0000 UTC m=+648.283209904" lastFinishedPulling="2025-11-25 07:26:27.349849457 +0000 UTC m=+651.518045178" observedRunningTime="2025-11-25 07:26:27.857273976 +0000 UTC m=+652.025469707" watchObservedRunningTime="2025-11-25 07:26:33.680684663 +0000 UTC m=+657.848880424" Nov 25 07:26:33 crc kubenswrapper[5043]: I1125 07:26:33.943679 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m5zz6"] Nov 25 07:26:33 crc kubenswrapper[5043]: I1125 07:26:33.944107 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="ovn-controller" containerID="cri-o://eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1" gracePeriod=30 Nov 25 07:26:33 crc kubenswrapper[5043]: I1125 07:26:33.944160 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="nbdb" containerID="cri-o://4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57" gracePeriod=30 Nov 25 07:26:33 crc kubenswrapper[5043]: I1125 07:26:33.944280 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="northd" containerID="cri-o://73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f" gracePeriod=30 Nov 25 07:26:33 crc kubenswrapper[5043]: I1125 
07:26:33.944335 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="kube-rbac-proxy-node" containerID="cri-o://9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245" gracePeriod=30 Nov 25 07:26:33 crc kubenswrapper[5043]: I1125 07:26:33.944341 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="sbdb" containerID="cri-o://ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31" gracePeriod=30 Nov 25 07:26:33 crc kubenswrapper[5043]: I1125 07:26:33.944296 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1" gracePeriod=30 Nov 25 07:26:33 crc kubenswrapper[5043]: I1125 07:26:33.944297 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="ovn-acl-logging" containerID="cri-o://9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535" gracePeriod=30 Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.000195 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="ovnkube-controller" containerID="cri-o://dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178" gracePeriod=30 Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.087201 5043 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an 
exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.088125 5043 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.088429 5043 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.093984 5043 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.094042 5043 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="nbdb" Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.094948 5043 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.096826 5043 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.096899 5043 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="sbdb" Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.285931 5043 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178 is running failed: container process not found" containerID="dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.286875 5043 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178 is running failed: container process not found" containerID="dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.287375 5043 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178 is running failed: container process not found" containerID="dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.287416 
5043 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="ovnkube-controller" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.844232 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5zz6_a8785a4c-82ff-4a78-83a0-463e977df530/ovnkube-controller/3.log" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.851744 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5zz6_a8785a4c-82ff-4a78-83a0-463e977df530/ovn-acl-logging/0.log" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.852827 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5zz6_a8785a4c-82ff-4a78-83a0-463e977df530/ovn-controller/0.log" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.853446 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.871186 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5zz6_a8785a4c-82ff-4a78-83a0-463e977df530/ovnkube-controller/3.log" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.873840 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5zz6_a8785a4c-82ff-4a78-83a0-463e977df530/ovn-acl-logging/0.log" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.874507 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5zz6_a8785a4c-82ff-4a78-83a0-463e977df530/ovn-controller/0.log" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.874997 5043 generic.go:334] "Generic (PLEG): container finished" podID="a8785a4c-82ff-4a78-83a0-463e977df530" containerID="dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178" exitCode=0 Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875040 5043 generic.go:334] "Generic (PLEG): container finished" podID="a8785a4c-82ff-4a78-83a0-463e977df530" containerID="ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31" exitCode=0 Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875054 5043 generic.go:334] "Generic (PLEG): container finished" podID="a8785a4c-82ff-4a78-83a0-463e977df530" containerID="4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57" exitCode=0 Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875067 5043 generic.go:334] "Generic (PLEG): container finished" podID="a8785a4c-82ff-4a78-83a0-463e977df530" containerID="73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f" exitCode=0 Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875082 5043 generic.go:334] "Generic (PLEG): container finished" podID="a8785a4c-82ff-4a78-83a0-463e977df530" 
containerID="2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1" exitCode=0 Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875094 5043 generic.go:334] "Generic (PLEG): container finished" podID="a8785a4c-82ff-4a78-83a0-463e977df530" containerID="9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245" exitCode=0 Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875090 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerDied","Data":"dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875125 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875145 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerDied","Data":"ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875169 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerDied","Data":"4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875190 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerDied","Data":"73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875208 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" 
event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerDied","Data":"2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875106 5043 generic.go:334] "Generic (PLEG): container finished" podID="a8785a4c-82ff-4a78-83a0-463e977df530" containerID="9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535" exitCode=143 Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875229 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerDied","Data":"9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875235 5043 generic.go:334] "Generic (PLEG): container finished" podID="a8785a4c-82ff-4a78-83a0-463e977df530" containerID="eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1" exitCode=143 Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875248 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875264 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875276 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875328 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f"} Nov 25 07:26:34 crc 
kubenswrapper[5043]: I1125 07:26:34.875340 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875351 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875362 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875373 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875384 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875403 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerDied","Data":"9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875419 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875432 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875443 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875454 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875467 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875481 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875498 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875511 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875527 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875540 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875560 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerDied","Data":"eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875582 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875595 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875606 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875661 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875672 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875682 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875693 5043 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875703 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875713 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875724 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875739 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5zz6" event={"ID":"a8785a4c-82ff-4a78-83a0-463e977df530","Type":"ContainerDied","Data":"947a2e76ce2256473238a7415a5cbad64ee0d3874e34aed34079e323608d783d"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875759 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875775 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875786 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31"} Nov 25 
07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875796 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875806 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875817 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875827 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875838 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875377 5043 scope.go:117] "RemoveContainer" containerID="dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.875849 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.876005 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.877782 5043 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5gnzs_6aa0c167-9335-44ce-975c-715ce1f43383/kube-multus/2.log" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.878328 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5gnzs_6aa0c167-9335-44ce-975c-715ce1f43383/kube-multus/1.log" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.878384 5043 generic.go:334] "Generic (PLEG): container finished" podID="6aa0c167-9335-44ce-975c-715ce1f43383" containerID="b38ec2c1857f8d09dc1e1bf719e08fa2ba97a2d42a1d582846b46e950df84a94" exitCode=2 Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.878422 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5gnzs" event={"ID":"6aa0c167-9335-44ce-975c-715ce1f43383","Type":"ContainerDied","Data":"b38ec2c1857f8d09dc1e1bf719e08fa2ba97a2d42a1d582846b46e950df84a94"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.878449 5043 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4066fa7f0a925be9090ea5c1746c5f49e5e16dbfbaf8855136d7417ba73fb59c"} Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.878974 5043 scope.go:117] "RemoveContainer" containerID="b38ec2c1857f8d09dc1e1bf719e08fa2ba97a2d42a1d582846b46e950df84a94" Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.879264 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-5gnzs_openshift-multus(6aa0c167-9335-44ce-975c-715ce1f43383)\"" pod="openshift-multus/multus-5gnzs" podUID="6aa0c167-9335-44ce-975c-715ce1f43383" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.897232 5043 scope.go:117] "RemoveContainer" containerID="37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.923474 5043 
scope.go:117] "RemoveContainer" containerID="ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.924955 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k9xs6"] Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.925290 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="northd" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.925310 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="northd" Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.925326 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="sbdb" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.925337 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="sbdb" Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.925357 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="ovn-acl-logging" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.925368 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="ovn-acl-logging" Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.925380 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="ovn-controller" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.925391 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="ovn-controller" Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.925412 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" 
containerName="ovnkube-controller" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.925423 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="ovnkube-controller" Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.925439 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="kube-rbac-proxy-node" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.925449 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="kube-rbac-proxy-node" Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.925469 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="ovnkube-controller" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.925480 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="ovnkube-controller" Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.925500 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="ovnkube-controller" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.925511 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="ovnkube-controller" Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.925524 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="ovnkube-controller" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.925536 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="ovnkube-controller" Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.925552 5043 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="kubecfg-setup" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.925563 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="kubecfg-setup" Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.925579 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="nbdb" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.925589 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="nbdb" Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.925611 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.925649 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 07:26:34 crc kubenswrapper[5043]: E1125 07:26:34.925668 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="ovnkube-controller" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.925679 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="ovnkube-controller" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.925851 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="nbdb" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.925871 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="ovn-acl-logging" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.925885 5043 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="ovnkube-controller" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.925904 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="ovnkube-controller" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.925918 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="ovnkube-controller" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.925929 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.925943 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="northd" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.925959 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="ovn-controller" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.925973 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="sbdb" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.925986 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="kube-rbac-proxy-node" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.925999 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="ovnkube-controller" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.926305 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" containerName="ovnkube-controller" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.928785 5043 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.954741 5043 scope.go:117] "RemoveContainer" containerID="4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.990119 5043 scope.go:117] "RemoveContainer" containerID="73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.996635 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-var-lib-cni-networks-ovn-kubernetes\") pod \"a8785a4c-82ff-4a78-83a0-463e977df530\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.996683 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-run-ovn\") pod \"a8785a4c-82ff-4a78-83a0-463e977df530\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.996720 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-systemd-units\") pod \"a8785a4c-82ff-4a78-83a0-463e977df530\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.996750 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-run-ovn-kubernetes\") pod \"a8785a4c-82ff-4a78-83a0-463e977df530\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.996775 
5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-run-netns\") pod \"a8785a4c-82ff-4a78-83a0-463e977df530\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.996776 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "a8785a4c-82ff-4a78-83a0-463e977df530" (UID: "a8785a4c-82ff-4a78-83a0-463e977df530"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.996791 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-slash\") pod \"a8785a4c-82ff-4a78-83a0-463e977df530\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.996812 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "a8785a4c-82ff-4a78-83a0-463e977df530" (UID: "a8785a4c-82ff-4a78-83a0-463e977df530"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.996819 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a8785a4c-82ff-4a78-83a0-463e977df530-env-overrides\") pod \"a8785a4c-82ff-4a78-83a0-463e977df530\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.996832 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "a8785a4c-82ff-4a78-83a0-463e977df530" (UID: "a8785a4c-82ff-4a78-83a0-463e977df530"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.996835 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-log-socket\") pod \"a8785a4c-82ff-4a78-83a0-463e977df530\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.996860 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-cni-bin\") pod \"a8785a4c-82ff-4a78-83a0-463e977df530\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.996885 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a8785a4c-82ff-4a78-83a0-463e977df530-ovnkube-config\") pod \"a8785a4c-82ff-4a78-83a0-463e977df530\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.996815 5043 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "a8785a4c-82ff-4a78-83a0-463e977df530" (UID: "a8785a4c-82ff-4a78-83a0-463e977df530"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.996906 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a8785a4c-82ff-4a78-83a0-463e977df530-ovn-node-metrics-cert\") pod \"a8785a4c-82ff-4a78-83a0-463e977df530\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.996857 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-log-socket" (OuterVolumeSpecName: "log-socket") pod "a8785a4c-82ff-4a78-83a0-463e977df530" (UID: "a8785a4c-82ff-4a78-83a0-463e977df530"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.996929 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-run-systemd\") pod \"a8785a4c-82ff-4a78-83a0-463e977df530\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.996958 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-cni-netd\") pod \"a8785a4c-82ff-4a78-83a0-463e977df530\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.996974 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-kubelet\") pod \"a8785a4c-82ff-4a78-83a0-463e977df530\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.996988 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-node-log\") pod \"a8785a4c-82ff-4a78-83a0-463e977df530\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.997002 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-run-openvswitch\") pod \"a8785a4c-82ff-4a78-83a0-463e977df530\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.997018 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/a8785a4c-82ff-4a78-83a0-463e977df530-ovnkube-script-lib\") pod \"a8785a4c-82ff-4a78-83a0-463e977df530\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.997032 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-var-lib-openvswitch\") pod \"a8785a4c-82ff-4a78-83a0-463e977df530\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.997056 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrzpn\" (UniqueName: \"kubernetes.io/projected/a8785a4c-82ff-4a78-83a0-463e977df530-kube-api-access-hrzpn\") pod \"a8785a4c-82ff-4a78-83a0-463e977df530\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.997081 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-etc-openvswitch\") pod \"a8785a4c-82ff-4a78-83a0-463e977df530\" (UID: \"a8785a4c-82ff-4a78-83a0-463e977df530\") " Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.997312 5043 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-log-socket\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.997332 5043 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.997342 5043 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.997351 5043 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.997359 5043 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.996862 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-slash" (OuterVolumeSpecName: "host-slash") pod "a8785a4c-82ff-4a78-83a0-463e977df530" (UID: "a8785a4c-82ff-4a78-83a0-463e977df530"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.996886 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "a8785a4c-82ff-4a78-83a0-463e977df530" (UID: "a8785a4c-82ff-4a78-83a0-463e977df530"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.997218 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8785a4c-82ff-4a78-83a0-463e977df530-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "a8785a4c-82ff-4a78-83a0-463e977df530" (UID: "a8785a4c-82ff-4a78-83a0-463e977df530"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.997242 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "a8785a4c-82ff-4a78-83a0-463e977df530" (UID: "a8785a4c-82ff-4a78-83a0-463e977df530"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.997259 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-node-log" (OuterVolumeSpecName: "node-log") pod "a8785a4c-82ff-4a78-83a0-463e977df530" (UID: "a8785a4c-82ff-4a78-83a0-463e977df530"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.997345 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8785a4c-82ff-4a78-83a0-463e977df530-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "a8785a4c-82ff-4a78-83a0-463e977df530" (UID: "a8785a4c-82ff-4a78-83a0-463e977df530"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.997618 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "a8785a4c-82ff-4a78-83a0-463e977df530" (UID: "a8785a4c-82ff-4a78-83a0-463e977df530"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.997934 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8785a4c-82ff-4a78-83a0-463e977df530-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "a8785a4c-82ff-4a78-83a0-463e977df530" (UID: "a8785a4c-82ff-4a78-83a0-463e977df530"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.997960 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "a8785a4c-82ff-4a78-83a0-463e977df530" (UID: "a8785a4c-82ff-4a78-83a0-463e977df530"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.998435 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "a8785a4c-82ff-4a78-83a0-463e977df530" (UID: "a8785a4c-82ff-4a78-83a0-463e977df530"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.998828 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "a8785a4c-82ff-4a78-83a0-463e977df530" (UID: "a8785a4c-82ff-4a78-83a0-463e977df530"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:26:34 crc kubenswrapper[5043]: I1125 07:26:34.998864 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "a8785a4c-82ff-4a78-83a0-463e977df530" (UID: "a8785a4c-82ff-4a78-83a0-463e977df530"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.002849 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8785a4c-82ff-4a78-83a0-463e977df530-kube-api-access-hrzpn" (OuterVolumeSpecName: "kube-api-access-hrzpn") pod "a8785a4c-82ff-4a78-83a0-463e977df530" (UID: "a8785a4c-82ff-4a78-83a0-463e977df530"). InnerVolumeSpecName "kube-api-access-hrzpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.003024 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8785a4c-82ff-4a78-83a0-463e977df530-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "a8785a4c-82ff-4a78-83a0-463e977df530" (UID: "a8785a4c-82ff-4a78-83a0-463e977df530"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.003990 5043 scope.go:117] "RemoveContainer" containerID="2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.012519 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "a8785a4c-82ff-4a78-83a0-463e977df530" (UID: "a8785a4c-82ff-4a78-83a0-463e977df530"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.016840 5043 scope.go:117] "RemoveContainer" containerID="9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.029293 5043 scope.go:117] "RemoveContainer" containerID="9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.042280 5043 scope.go:117] "RemoveContainer" containerID="eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.062607 5043 scope.go:117] "RemoveContainer" containerID="2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.087997 5043 scope.go:117] "RemoveContainer" containerID="dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178" Nov 25 07:26:35 crc kubenswrapper[5043]: E1125 07:26:35.088593 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178\": container with ID starting with dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178 not found: ID does not exist" containerID="dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.088653 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178"} err="failed to get container status \"dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178\": rpc error: code = NotFound desc = could not find container \"dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178\": container with ID starting with dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178 not found: ID does 
not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.088688 5043 scope.go:117] "RemoveContainer" containerID="37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63" Nov 25 07:26:35 crc kubenswrapper[5043]: E1125 07:26:35.089219 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63\": container with ID starting with 37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63 not found: ID does not exist" containerID="37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.089261 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63"} err="failed to get container status \"37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63\": rpc error: code = NotFound desc = could not find container \"37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63\": container with ID starting with 37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.089291 5043 scope.go:117] "RemoveContainer" containerID="ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31" Nov 25 07:26:35 crc kubenswrapper[5043]: E1125 07:26:35.089726 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\": container with ID starting with ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31 not found: ID does not exist" containerID="ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.089761 5043 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31"} err="failed to get container status \"ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\": rpc error: code = NotFound desc = could not find container \"ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\": container with ID starting with ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.089782 5043 scope.go:117] "RemoveContainer" containerID="4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57" Nov 25 07:26:35 crc kubenswrapper[5043]: E1125 07:26:35.090025 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\": container with ID starting with 4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57 not found: ID does not exist" containerID="4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.090065 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57"} err="failed to get container status \"4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\": rpc error: code = NotFound desc = could not find container \"4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\": container with ID starting with 4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.090084 5043 scope.go:117] "RemoveContainer" containerID="73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f" Nov 25 07:26:35 crc kubenswrapper[5043]: E1125 07:26:35.090396 5043 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\": container with ID starting with 73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f not found: ID does not exist" containerID="73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.090420 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f"} err="failed to get container status \"73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\": rpc error: code = NotFound desc = could not find container \"73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\": container with ID starting with 73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.090436 5043 scope.go:117] "RemoveContainer" containerID="2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1" Nov 25 07:26:35 crc kubenswrapper[5043]: E1125 07:26:35.090686 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\": container with ID starting with 2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1 not found: ID does not exist" containerID="2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.090714 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1"} err="failed to get container status \"2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\": rpc error: code = NotFound desc = could 
not find container \"2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\": container with ID starting with 2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.090732 5043 scope.go:117] "RemoveContainer" containerID="9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245" Nov 25 07:26:35 crc kubenswrapper[5043]: E1125 07:26:35.091092 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\": container with ID starting with 9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245 not found: ID does not exist" containerID="9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.091121 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245"} err="failed to get container status \"9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\": rpc error: code = NotFound desc = could not find container \"9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\": container with ID starting with 9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.091140 5043 scope.go:117] "RemoveContainer" containerID="9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535" Nov 25 07:26:35 crc kubenswrapper[5043]: E1125 07:26:35.091642 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\": container with ID starting with 9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535 not found: 
ID does not exist" containerID="9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.091677 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535"} err="failed to get container status \"9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\": rpc error: code = NotFound desc = could not find container \"9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\": container with ID starting with 9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.091697 5043 scope.go:117] "RemoveContainer" containerID="eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1" Nov 25 07:26:35 crc kubenswrapper[5043]: E1125 07:26:35.091996 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\": container with ID starting with eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1 not found: ID does not exist" containerID="eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.092051 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1"} err="failed to get container status \"eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\": rpc error: code = NotFound desc = could not find container \"eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\": container with ID starting with eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.092067 5043 
scope.go:117] "RemoveContainer" containerID="2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16" Nov 25 07:26:35 crc kubenswrapper[5043]: E1125 07:26:35.092437 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\": container with ID starting with 2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16 not found: ID does not exist" containerID="2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.092457 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16"} err="failed to get container status \"2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\": rpc error: code = NotFound desc = could not find container \"2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\": container with ID starting with 2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.092470 5043 scope.go:117] "RemoveContainer" containerID="dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.092781 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178"} err="failed to get container status \"dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178\": rpc error: code = NotFound desc = could not find container \"dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178\": container with ID starting with dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 
07:26:35.092798 5043 scope.go:117] "RemoveContainer" containerID="37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.093137 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63"} err="failed to get container status \"37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63\": rpc error: code = NotFound desc = could not find container \"37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63\": container with ID starting with 37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.093170 5043 scope.go:117] "RemoveContainer" containerID="ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.093456 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31"} err="failed to get container status \"ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\": rpc error: code = NotFound desc = could not find container \"ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\": container with ID starting with ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.093475 5043 scope.go:117] "RemoveContainer" containerID="4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.093765 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57"} err="failed to get container status 
\"4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\": rpc error: code = NotFound desc = could not find container \"4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\": container with ID starting with 4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.093795 5043 scope.go:117] "RemoveContainer" containerID="73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.094058 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f"} err="failed to get container status \"73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\": rpc error: code = NotFound desc = could not find container \"73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\": container with ID starting with 73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.094077 5043 scope.go:117] "RemoveContainer" containerID="2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.094335 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1"} err="failed to get container status \"2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\": rpc error: code = NotFound desc = could not find container \"2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\": container with ID starting with 2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.094413 5043 scope.go:117] "RemoveContainer" 
containerID="9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.094751 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245"} err="failed to get container status \"9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\": rpc error: code = NotFound desc = could not find container \"9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\": container with ID starting with 9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.094835 5043 scope.go:117] "RemoveContainer" containerID="9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.095118 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535"} err="failed to get container status \"9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\": rpc error: code = NotFound desc = could not find container \"9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\": container with ID starting with 9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.095138 5043 scope.go:117] "RemoveContainer" containerID="eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.095462 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1"} err="failed to get container status \"eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\": rpc error: code = NotFound desc = could 
not find container \"eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\": container with ID starting with eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.095526 5043 scope.go:117] "RemoveContainer" containerID="2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.095803 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16"} err="failed to get container status \"2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\": rpc error: code = NotFound desc = could not find container \"2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\": container with ID starting with 2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.095884 5043 scope.go:117] "RemoveContainer" containerID="dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.096221 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178"} err="failed to get container status \"dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178\": rpc error: code = NotFound desc = could not find container \"dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178\": container with ID starting with dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.096247 5043 scope.go:117] "RemoveContainer" containerID="37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 
07:26:35.096488 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63"} err="failed to get container status \"37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63\": rpc error: code = NotFound desc = could not find container \"37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63\": container with ID starting with 37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.096522 5043 scope.go:117] "RemoveContainer" containerID="ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.096803 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31"} err="failed to get container status \"ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\": rpc error: code = NotFound desc = could not find container \"ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\": container with ID starting with ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.096829 5043 scope.go:117] "RemoveContainer" containerID="4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.097097 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57"} err="failed to get container status \"4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\": rpc error: code = NotFound desc = could not find container \"4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\": container with ID starting with 
4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.097117 5043 scope.go:117] "RemoveContainer" containerID="73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.097369 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f"} err="failed to get container status \"73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\": rpc error: code = NotFound desc = could not find container \"73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\": container with ID starting with 73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.097394 5043 scope.go:117] "RemoveContainer" containerID="2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.097625 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1"} err="failed to get container status \"2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\": rpc error: code = NotFound desc = could not find container \"2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\": container with ID starting with 2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.097715 5043 scope.go:117] "RemoveContainer" containerID="9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.098063 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-etc-openvswitch\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.098102 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-var-lib-openvswitch\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.098154 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-ovnkube-script-lib\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.098292 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245"} err="failed to get container status \"9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\": rpc error: code = NotFound desc = could not find container \"9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\": container with ID starting with 9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.098383 5043 scope.go:117] "RemoveContainer" containerID="9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.098676 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-node-log\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.098713 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-log-socket\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.098742 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-host-run-ovn-kubernetes\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.098800 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-host-cni-netd\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.098858 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-host-slash\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.098899 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-env-overrides\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.098922 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-ovn-node-metrics-cert\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.098949 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-host-kubelet\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099079 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-run-systemd\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099148 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099215 5043 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-run-ovn\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099246 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-ovnkube-config\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099280 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-run-openvswitch\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099322 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-systemd-units\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099352 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-host-cni-bin\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099385 5043 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxlg6\" (UniqueName: \"kubernetes.io/projected/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-kube-api-access-jxlg6\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099550 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-host-run-netns\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099635 5043 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a8785a4c-82ff-4a78-83a0-463e977df530-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099652 5043 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099599 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535"} err="failed to get container status \"9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\": rpc error: code = NotFound desc = could not find container \"9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\": container with ID starting with 9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099665 5043 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-hrzpn\" (UniqueName: \"kubernetes.io/projected/a8785a4c-82ff-4a78-83a0-463e977df530-kube-api-access-hrzpn\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099671 5043 scope.go:117] "RemoveContainer" containerID="eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099679 5043 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099695 5043 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099705 5043 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-slash\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099715 5043 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a8785a4c-82ff-4a78-83a0-463e977df530-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099723 5043 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099732 5043 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a8785a4c-82ff-4a78-83a0-463e977df530-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:35 crc 
kubenswrapper[5043]: I1125 07:26:35.099741 5043 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a8785a4c-82ff-4a78-83a0-463e977df530-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099751 5043 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099762 5043 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099771 5043 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099780 5043 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-node-log\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099789 5043 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a8785a4c-82ff-4a78-83a0-463e977df530-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.099998 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1"} err="failed to get container status \"eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\": rpc error: code = NotFound desc = could not find container 
\"eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\": container with ID starting with eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.100038 5043 scope.go:117] "RemoveContainer" containerID="2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.100491 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16"} err="failed to get container status \"2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\": rpc error: code = NotFound desc = could not find container \"2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\": container with ID starting with 2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.100529 5043 scope.go:117] "RemoveContainer" containerID="dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.100864 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178"} err="failed to get container status \"dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178\": rpc error: code = NotFound desc = could not find container \"dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178\": container with ID starting with dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.100899 5043 scope.go:117] "RemoveContainer" containerID="37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.101198 5043 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63"} err="failed to get container status \"37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63\": rpc error: code = NotFound desc = could not find container \"37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63\": container with ID starting with 37dda97b095c3bdb3074041d624fc3905d5a778c7c87a04c1543ae3c101f8d63 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.101219 5043 scope.go:117] "RemoveContainer" containerID="ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.101453 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31"} err="failed to get container status \"ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\": rpc error: code = NotFound desc = could not find container \"ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31\": container with ID starting with ce721be4df7b0fd578d89e17412c9dd06027b6ae2ce18d813f1c2cb080f07e31 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.101548 5043 scope.go:117] "RemoveContainer" containerID="4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.101986 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57"} err="failed to get container status \"4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\": rpc error: code = NotFound desc = could not find container \"4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57\": container with ID starting with 
4e7780fde350c7587173002fbf998cf7ad491edf79c4e4c73ce6b30dbd2b8f57 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.102008 5043 scope.go:117] "RemoveContainer" containerID="73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.102287 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f"} err="failed to get container status \"73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\": rpc error: code = NotFound desc = could not find container \"73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f\": container with ID starting with 73658c66dad88601d720c508c07f76f8b6d62d42f891d811bfe396a0675a971f not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.102380 5043 scope.go:117] "RemoveContainer" containerID="2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.102842 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1"} err="failed to get container status \"2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\": rpc error: code = NotFound desc = could not find container \"2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1\": container with ID starting with 2ab54969e9c3b654a40cfea06fc3e2eb89a67f85b9187df76e9f15780f89a8d1 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.102947 5043 scope.go:117] "RemoveContainer" containerID="9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.103255 5043 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245"} err="failed to get container status \"9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\": rpc error: code = NotFound desc = could not find container \"9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245\": container with ID starting with 9c5c87fac929e7857b894adbcf7a0882c1d60ec51d80ea285cc1fb06194cd245 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.103304 5043 scope.go:117] "RemoveContainer" containerID="9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.103612 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535"} err="failed to get container status \"9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\": rpc error: code = NotFound desc = could not find container \"9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535\": container with ID starting with 9c3e2e8f5b9ef12af292c1f34ce5ff59eaab9c7c20b168b2054bc9b24d4b5535 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.103741 5043 scope.go:117] "RemoveContainer" containerID="eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.104092 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1"} err="failed to get container status \"eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\": rpc error: code = NotFound desc = could not find container \"eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1\": container with ID starting with eb0ed4c0f2f4cd8a22656630b9d72bafe6c318713c852ad1b8f63bd4e29191d1 not found: ID does not 
exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.104118 5043 scope.go:117] "RemoveContainer" containerID="2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.104389 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16"} err="failed to get container status \"2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\": rpc error: code = NotFound desc = could not find container \"2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16\": container with ID starting with 2bda42de7517efe6a79f56ce676c1060a14d926c4a3ac0735afc7cecadf09a16 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.104413 5043 scope.go:117] "RemoveContainer" containerID="dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.104671 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178"} err="failed to get container status \"dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178\": rpc error: code = NotFound desc = could not find container \"dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178\": container with ID starting with dbadca73f9e57d23ddcdd4d4af49354af47be5941e66902217783c8986fd4178 not found: ID does not exist" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.201806 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-var-lib-openvswitch\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 
07:26:35.201869 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-ovnkube-script-lib\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.201914 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-node-log\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.201945 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-log-socket\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.201973 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-host-run-ovn-kubernetes\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.202003 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-host-cni-netd\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.202166 5043 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-node-log\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.202226 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-log-socket\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.202854 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-host-cni-netd\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.202907 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-host-slash\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.202917 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-host-run-ovn-kubernetes\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.202919 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-var-lib-openvswitch\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.202947 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-host-slash\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.203042 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-env-overrides\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.203091 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-ovn-node-metrics-cert\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.203135 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-host-kubelet\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.203182 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-run-systemd\") pod 
\"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.203217 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.203276 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-host-kubelet\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.203293 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-run-ovn\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.203306 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-run-systemd\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.203325 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-ovnkube-config\") pod \"ovnkube-node-k9xs6\" (UID: 
\"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.203336 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.203352 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-run-ovn\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.203360 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-run-openvswitch\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.203389 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-run-openvswitch\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.203419 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-systemd-units\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.203460 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-host-cni-bin\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.203487 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxlg6\" (UniqueName: \"kubernetes.io/projected/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-kube-api-access-jxlg6\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.203506 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-systemd-units\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.203517 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-host-cni-bin\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.203564 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-host-run-netns\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: 
I1125 07:26:35.203617 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-etc-openvswitch\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.203649 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-host-run-netns\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.203750 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-etc-openvswitch\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.204045 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-env-overrides\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.204281 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-ovnkube-config\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.204475 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-ovnkube-script-lib\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.209816 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-ovn-node-metrics-cert\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.225820 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m5zz6"] Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.225871 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m5zz6"] Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.226278 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxlg6\" (UniqueName: \"kubernetes.io/projected/6efbdd18-efe1-4ff3-a752-e6c8ccaf411f-kube-api-access-jxlg6\") pod \"ovnkube-node-k9xs6\" (UID: \"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.269563 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:35 crc kubenswrapper[5043]: W1125 07:26:35.287637 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6efbdd18_efe1_4ff3_a752_e6c8ccaf411f.slice/crio-42105522ce679ae1a4f570bb83e8344d24dd590528292f8e54e10c8f2df0773e WatchSource:0}: Error finding container 42105522ce679ae1a4f570bb83e8344d24dd590528292f8e54e10c8f2df0773e: Status 404 returned error can't find the container with id 42105522ce679ae1a4f570bb83e8344d24dd590528292f8e54e10c8f2df0773e Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.888512 5043 generic.go:334] "Generic (PLEG): container finished" podID="6efbdd18-efe1-4ff3-a752-e6c8ccaf411f" containerID="435bc5af422f2242c2e37404f4bc9eb72889d9fdd186adc513496bd95d85e4bb" exitCode=0 Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.888651 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" event={"ID":"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f","Type":"ContainerDied","Data":"435bc5af422f2242c2e37404f4bc9eb72889d9fdd186adc513496bd95d85e4bb"} Nov 25 07:26:35 crc kubenswrapper[5043]: I1125 07:26:35.888737 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" event={"ID":"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f","Type":"ContainerStarted","Data":"42105522ce679ae1a4f570bb83e8344d24dd590528292f8e54e10c8f2df0773e"} Nov 25 07:26:36 crc kubenswrapper[5043]: I1125 07:26:36.897400 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" event={"ID":"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f","Type":"ContainerStarted","Data":"f29941ec32e5a5f809a2a81fc6c80a7b5ba8e79d7ad663bc5946256ae1b1f586"} Nov 25 07:26:36 crc kubenswrapper[5043]: I1125 07:26:36.897748 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" 
event={"ID":"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f","Type":"ContainerStarted","Data":"c5eb6044e78a1bc1b2ce16c5a10b5e914740b5aac4b939e61b39a78cc915c198"} Nov 25 07:26:36 crc kubenswrapper[5043]: I1125 07:26:36.897758 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" event={"ID":"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f","Type":"ContainerStarted","Data":"3531dbdc406c62f04083d590da0db7008998b563f5dadaf31faa99bb8a436ce8"} Nov 25 07:26:36 crc kubenswrapper[5043]: I1125 07:26:36.897767 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" event={"ID":"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f","Type":"ContainerStarted","Data":"5b08404da177d7f66cab204eb9ec69429f752f1d3088f27c1c7f985630e801ab"} Nov 25 07:26:36 crc kubenswrapper[5043]: I1125 07:26:36.897776 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" event={"ID":"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f","Type":"ContainerStarted","Data":"9f48d812a67d35b42f73fe3d6171102db02c329a077f6ae4e0581b193ebfe9ab"} Nov 25 07:26:36 crc kubenswrapper[5043]: I1125 07:26:36.897784 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" event={"ID":"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f","Type":"ContainerStarted","Data":"5c49e7477659c38c5c2a8474b0853cfe3699ac82a0628f4f7e693cc5583193b8"} Nov 25 07:26:36 crc kubenswrapper[5043]: I1125 07:26:36.970101 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8785a4c-82ff-4a78-83a0-463e977df530" path="/var/lib/kubelet/pods/a8785a4c-82ff-4a78-83a0-463e977df530/volumes" Nov 25 07:26:37 crc kubenswrapper[5043]: I1125 07:26:37.173855 5043 scope.go:117] "RemoveContainer" containerID="4066fa7f0a925be9090ea5c1746c5f49e5e16dbfbaf8855136d7417ba73fb59c" Nov 25 07:26:37 crc kubenswrapper[5043]: I1125 07:26:37.907513 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-5gnzs_6aa0c167-9335-44ce-975c-715ce1f43383/kube-multus/2.log" Nov 25 07:26:39 crc kubenswrapper[5043]: I1125 07:26:39.929186 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" event={"ID":"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f","Type":"ContainerStarted","Data":"3e589f613e465ee2e835b01e4c72631eea751932ffac9dd7a22c281ecc41b117"} Nov 25 07:26:41 crc kubenswrapper[5043]: I1125 07:26:41.942687 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" event={"ID":"6efbdd18-efe1-4ff3-a752-e6c8ccaf411f","Type":"ContainerStarted","Data":"49c9dff14aee214c5b3182f79fe44e34e712ad598386249b6deb7a324b6ef2bc"} Nov 25 07:26:41 crc kubenswrapper[5043]: I1125 07:26:41.943077 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:41 crc kubenswrapper[5043]: I1125 07:26:41.943111 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:41 crc kubenswrapper[5043]: I1125 07:26:41.975215 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" podStartSLOduration=7.975196724 podStartE2EDuration="7.975196724s" podCreationTimestamp="2025-11-25 07:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:26:41.974115625 +0000 UTC m=+666.142311356" watchObservedRunningTime="2025-11-25 07:26:41.975196724 +0000 UTC m=+666.143392455" Nov 25 07:26:41 crc kubenswrapper[5043]: I1125 07:26:41.979964 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:42 crc kubenswrapper[5043]: I1125 07:26:42.949786 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:42 crc kubenswrapper[5043]: I1125 07:26:42.975438 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:26:48 crc kubenswrapper[5043]: I1125 07:26:48.962823 5043 scope.go:117] "RemoveContainer" containerID="b38ec2c1857f8d09dc1e1bf719e08fa2ba97a2d42a1d582846b46e950df84a94" Nov 25 07:26:48 crc kubenswrapper[5043]: E1125 07:26:48.963739 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-5gnzs_openshift-multus(6aa0c167-9335-44ce-975c-715ce1f43383)\"" pod="openshift-multus/multus-5gnzs" podUID="6aa0c167-9335-44ce-975c-715ce1f43383" Nov 25 07:26:59 crc kubenswrapper[5043]: I1125 07:26:59.962921 5043 scope.go:117] "RemoveContainer" containerID="b38ec2c1857f8d09dc1e1bf719e08fa2ba97a2d42a1d582846b46e950df84a94" Nov 25 07:27:01 crc kubenswrapper[5043]: I1125 07:27:01.080355 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5gnzs_6aa0c167-9335-44ce-975c-715ce1f43383/kube-multus/2.log" Nov 25 07:27:01 crc kubenswrapper[5043]: I1125 07:27:01.080642 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5gnzs" event={"ID":"6aa0c167-9335-44ce-975c-715ce1f43383","Type":"ContainerStarted","Data":"f54c4311801392f181699027aa6c8a5757adae6fff4ed09070a2fc6c4e7ef10c"} Nov 25 07:27:05 crc kubenswrapper[5043]: I1125 07:27:05.303409 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k9xs6" Nov 25 07:27:18 crc kubenswrapper[5043]: I1125 07:27:18.845709 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v"] Nov 25 07:27:18 crc kubenswrapper[5043]: I1125 07:27:18.847508 5043 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v" Nov 25 07:27:18 crc kubenswrapper[5043]: I1125 07:27:18.849679 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 07:27:18 crc kubenswrapper[5043]: I1125 07:27:18.862398 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v"] Nov 25 07:27:19 crc kubenswrapper[5043]: I1125 07:27:19.006555 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/51d2d2e9-ab00-458f-b284-965e99abbdb3-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v\" (UID: \"51d2d2e9-ab00-458f-b284-965e99abbdb3\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v" Nov 25 07:27:19 crc kubenswrapper[5043]: I1125 07:27:19.006655 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51d2d2e9-ab00-458f-b284-965e99abbdb3-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v\" (UID: \"51d2d2e9-ab00-458f-b284-965e99abbdb3\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v" Nov 25 07:27:19 crc kubenswrapper[5043]: I1125 07:27:19.006702 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx6bt\" (UniqueName: \"kubernetes.io/projected/51d2d2e9-ab00-458f-b284-965e99abbdb3-kube-api-access-sx6bt\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v\" (UID: \"51d2d2e9-ab00-458f-b284-965e99abbdb3\") " 
pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v" Nov 25 07:27:19 crc kubenswrapper[5043]: I1125 07:27:19.107554 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx6bt\" (UniqueName: \"kubernetes.io/projected/51d2d2e9-ab00-458f-b284-965e99abbdb3-kube-api-access-sx6bt\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v\" (UID: \"51d2d2e9-ab00-458f-b284-965e99abbdb3\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v" Nov 25 07:27:19 crc kubenswrapper[5043]: I1125 07:27:19.107781 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/51d2d2e9-ab00-458f-b284-965e99abbdb3-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v\" (UID: \"51d2d2e9-ab00-458f-b284-965e99abbdb3\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v" Nov 25 07:27:19 crc kubenswrapper[5043]: I1125 07:27:19.107924 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51d2d2e9-ab00-458f-b284-965e99abbdb3-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v\" (UID: \"51d2d2e9-ab00-458f-b284-965e99abbdb3\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v" Nov 25 07:27:19 crc kubenswrapper[5043]: I1125 07:27:19.108442 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/51d2d2e9-ab00-458f-b284-965e99abbdb3-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v\" (UID: \"51d2d2e9-ab00-458f-b284-965e99abbdb3\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v" Nov 25 07:27:19 crc kubenswrapper[5043]: I1125 07:27:19.108728 5043 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51d2d2e9-ab00-458f-b284-965e99abbdb3-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v\" (UID: \"51d2d2e9-ab00-458f-b284-965e99abbdb3\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v" Nov 25 07:27:19 crc kubenswrapper[5043]: I1125 07:27:19.141758 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx6bt\" (UniqueName: \"kubernetes.io/projected/51d2d2e9-ab00-458f-b284-965e99abbdb3-kube-api-access-sx6bt\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v\" (UID: \"51d2d2e9-ab00-458f-b284-965e99abbdb3\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v" Nov 25 07:27:19 crc kubenswrapper[5043]: I1125 07:27:19.164581 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v" Nov 25 07:27:19 crc kubenswrapper[5043]: I1125 07:27:19.415735 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v"] Nov 25 07:27:19 crc kubenswrapper[5043]: W1125 07:27:19.425121 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51d2d2e9_ab00_458f_b284_965e99abbdb3.slice/crio-091f95770046417c8afca08b797bd2a052d711ea911098f6647e2bc330c60db3 WatchSource:0}: Error finding container 091f95770046417c8afca08b797bd2a052d711ea911098f6647e2bc330c60db3: Status 404 returned error can't find the container with id 091f95770046417c8afca08b797bd2a052d711ea911098f6647e2bc330c60db3 Nov 25 07:27:20 crc kubenswrapper[5043]: I1125 07:27:20.204164 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v" event={"ID":"51d2d2e9-ab00-458f-b284-965e99abbdb3","Type":"ContainerStarted","Data":"a0416f5b5692607b6a61bff96980218ad7cd64bc44482180a19008b2f2c349f4"} Nov 25 07:27:20 crc kubenswrapper[5043]: I1125 07:27:20.205748 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v" event={"ID":"51d2d2e9-ab00-458f-b284-965e99abbdb3","Type":"ContainerStarted","Data":"091f95770046417c8afca08b797bd2a052d711ea911098f6647e2bc330c60db3"} Nov 25 07:27:21 crc kubenswrapper[5043]: I1125 07:27:21.219244 5043 generic.go:334] "Generic (PLEG): container finished" podID="51d2d2e9-ab00-458f-b284-965e99abbdb3" containerID="a0416f5b5692607b6a61bff96980218ad7cd64bc44482180a19008b2f2c349f4" exitCode=0 Nov 25 07:27:21 crc kubenswrapper[5043]: I1125 07:27:21.219347 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v" event={"ID":"51d2d2e9-ab00-458f-b284-965e99abbdb3","Type":"ContainerDied","Data":"a0416f5b5692607b6a61bff96980218ad7cd64bc44482180a19008b2f2c349f4"} Nov 25 07:27:24 crc kubenswrapper[5043]: I1125 07:27:24.238461 5043 generic.go:334] "Generic (PLEG): container finished" podID="51d2d2e9-ab00-458f-b284-965e99abbdb3" containerID="0685e335ce2773690a46f3d37cb98f55e35cef45189f4931e2f498cb8b5119a2" exitCode=0 Nov 25 07:27:24 crc kubenswrapper[5043]: I1125 07:27:24.238519 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v" event={"ID":"51d2d2e9-ab00-458f-b284-965e99abbdb3","Type":"ContainerDied","Data":"0685e335ce2773690a46f3d37cb98f55e35cef45189f4931e2f498cb8b5119a2"} Nov 25 07:27:25 crc kubenswrapper[5043]: I1125 07:27:25.252186 5043 generic.go:334] "Generic (PLEG): container finished" 
podID="51d2d2e9-ab00-458f-b284-965e99abbdb3" containerID="328f98fa7261ee3a8e91d11dff0cb01e88d9461926595bd66e672cce46c57571" exitCode=0 Nov 25 07:27:25 crc kubenswrapper[5043]: I1125 07:27:25.252222 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v" event={"ID":"51d2d2e9-ab00-458f-b284-965e99abbdb3","Type":"ContainerDied","Data":"328f98fa7261ee3a8e91d11dff0cb01e88d9461926595bd66e672cce46c57571"} Nov 25 07:27:26 crc kubenswrapper[5043]: I1125 07:27:26.515525 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v" Nov 25 07:27:26 crc kubenswrapper[5043]: I1125 07:27:26.608904 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51d2d2e9-ab00-458f-b284-965e99abbdb3-util\") pod \"51d2d2e9-ab00-458f-b284-965e99abbdb3\" (UID: \"51d2d2e9-ab00-458f-b284-965e99abbdb3\") " Nov 25 07:27:26 crc kubenswrapper[5043]: I1125 07:27:26.609027 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx6bt\" (UniqueName: \"kubernetes.io/projected/51d2d2e9-ab00-458f-b284-965e99abbdb3-kube-api-access-sx6bt\") pod \"51d2d2e9-ab00-458f-b284-965e99abbdb3\" (UID: \"51d2d2e9-ab00-458f-b284-965e99abbdb3\") " Nov 25 07:27:26 crc kubenswrapper[5043]: I1125 07:27:26.609109 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/51d2d2e9-ab00-458f-b284-965e99abbdb3-bundle\") pod \"51d2d2e9-ab00-458f-b284-965e99abbdb3\" (UID: \"51d2d2e9-ab00-458f-b284-965e99abbdb3\") " Nov 25 07:27:26 crc kubenswrapper[5043]: I1125 07:27:26.610072 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51d2d2e9-ab00-458f-b284-965e99abbdb3-bundle" 
(OuterVolumeSpecName: "bundle") pod "51d2d2e9-ab00-458f-b284-965e99abbdb3" (UID: "51d2d2e9-ab00-458f-b284-965e99abbdb3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:27:26 crc kubenswrapper[5043]: I1125 07:27:26.617444 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51d2d2e9-ab00-458f-b284-965e99abbdb3-kube-api-access-sx6bt" (OuterVolumeSpecName: "kube-api-access-sx6bt") pod "51d2d2e9-ab00-458f-b284-965e99abbdb3" (UID: "51d2d2e9-ab00-458f-b284-965e99abbdb3"). InnerVolumeSpecName "kube-api-access-sx6bt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:27:26 crc kubenswrapper[5043]: I1125 07:27:26.619046 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51d2d2e9-ab00-458f-b284-965e99abbdb3-util" (OuterVolumeSpecName: "util") pod "51d2d2e9-ab00-458f-b284-965e99abbdb3" (UID: "51d2d2e9-ab00-458f-b284-965e99abbdb3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:27:26 crc kubenswrapper[5043]: I1125 07:27:26.710888 5043 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/51d2d2e9-ab00-458f-b284-965e99abbdb3-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:27:26 crc kubenswrapper[5043]: I1125 07:27:26.711007 5043 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51d2d2e9-ab00-458f-b284-965e99abbdb3-util\") on node \"crc\" DevicePath \"\"" Nov 25 07:27:26 crc kubenswrapper[5043]: I1125 07:27:26.711042 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx6bt\" (UniqueName: \"kubernetes.io/projected/51d2d2e9-ab00-458f-b284-965e99abbdb3-kube-api-access-sx6bt\") on node \"crc\" DevicePath \"\"" Nov 25 07:27:27 crc kubenswrapper[5043]: I1125 07:27:27.268751 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v" event={"ID":"51d2d2e9-ab00-458f-b284-965e99abbdb3","Type":"ContainerDied","Data":"091f95770046417c8afca08b797bd2a052d711ea911098f6647e2bc330c60db3"} Nov 25 07:27:27 crc kubenswrapper[5043]: I1125 07:27:27.268837 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="091f95770046417c8afca08b797bd2a052d711ea911098f6647e2bc330c60db3" Nov 25 07:27:27 crc kubenswrapper[5043]: I1125 07:27:27.269000 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v" Nov 25 07:27:30 crc kubenswrapper[5043]: I1125 07:27:30.552078 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-q7kcp"] Nov 25 07:27:30 crc kubenswrapper[5043]: E1125 07:27:30.552458 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d2d2e9-ab00-458f-b284-965e99abbdb3" containerName="extract" Nov 25 07:27:30 crc kubenswrapper[5043]: I1125 07:27:30.552469 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d2d2e9-ab00-458f-b284-965e99abbdb3" containerName="extract" Nov 25 07:27:30 crc kubenswrapper[5043]: E1125 07:27:30.552478 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d2d2e9-ab00-458f-b284-965e99abbdb3" containerName="pull" Nov 25 07:27:30 crc kubenswrapper[5043]: I1125 07:27:30.552485 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d2d2e9-ab00-458f-b284-965e99abbdb3" containerName="pull" Nov 25 07:27:30 crc kubenswrapper[5043]: E1125 07:27:30.552498 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d2d2e9-ab00-458f-b284-965e99abbdb3" containerName="util" Nov 25 07:27:30 crc kubenswrapper[5043]: I1125 07:27:30.552504 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d2d2e9-ab00-458f-b284-965e99abbdb3" containerName="util" Nov 25 07:27:30 crc kubenswrapper[5043]: I1125 07:27:30.552591 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="51d2d2e9-ab00-458f-b284-965e99abbdb3" containerName="extract" Nov 25 07:27:30 crc kubenswrapper[5043]: I1125 07:27:30.552949 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-q7kcp" Nov 25 07:27:30 crc kubenswrapper[5043]: I1125 07:27:30.554980 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 25 07:27:30 crc kubenswrapper[5043]: I1125 07:27:30.555247 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-fxpzj" Nov 25 07:27:30 crc kubenswrapper[5043]: I1125 07:27:30.555372 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 25 07:27:30 crc kubenswrapper[5043]: I1125 07:27:30.559548 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpkmc\" (UniqueName: \"kubernetes.io/projected/c86d9095-02e1-450f-9d00-b448049035b1-kube-api-access-tpkmc\") pod \"nmstate-operator-557fdffb88-q7kcp\" (UID: \"c86d9095-02e1-450f-9d00-b448049035b1\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-q7kcp" Nov 25 07:27:30 crc kubenswrapper[5043]: I1125 07:27:30.569173 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-q7kcp"] Nov 25 07:27:30 crc kubenswrapper[5043]: I1125 07:27:30.661408 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpkmc\" (UniqueName: \"kubernetes.io/projected/c86d9095-02e1-450f-9d00-b448049035b1-kube-api-access-tpkmc\") pod \"nmstate-operator-557fdffb88-q7kcp\" (UID: \"c86d9095-02e1-450f-9d00-b448049035b1\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-q7kcp" Nov 25 07:27:30 crc kubenswrapper[5043]: I1125 07:27:30.678414 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpkmc\" (UniqueName: \"kubernetes.io/projected/c86d9095-02e1-450f-9d00-b448049035b1-kube-api-access-tpkmc\") pod \"nmstate-operator-557fdffb88-q7kcp\" (UID: 
\"c86d9095-02e1-450f-9d00-b448049035b1\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-q7kcp" Nov 25 07:27:30 crc kubenswrapper[5043]: I1125 07:27:30.873226 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-q7kcp" Nov 25 07:27:31 crc kubenswrapper[5043]: I1125 07:27:31.134129 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-q7kcp"] Nov 25 07:27:31 crc kubenswrapper[5043]: I1125 07:27:31.293205 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-q7kcp" event={"ID":"c86d9095-02e1-450f-9d00-b448049035b1","Type":"ContainerStarted","Data":"766d44fb1bc3ccaa2d5581176be22122d62fa012716d9fd42aed4bb47e6ec8e6"} Nov 25 07:27:34 crc kubenswrapper[5043]: I1125 07:27:34.318999 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-q7kcp" event={"ID":"c86d9095-02e1-450f-9d00-b448049035b1","Type":"ContainerStarted","Data":"3e9309d3c2a8103ae0a1b8844215b81e737008ff6af5d5c97c94f9cb0b9d8937"} Nov 25 07:27:34 crc kubenswrapper[5043]: I1125 07:27:34.349159 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-q7kcp" podStartSLOduration=2.226432291 podStartE2EDuration="4.349137539s" podCreationTimestamp="2025-11-25 07:27:30 +0000 UTC" firstStartedPulling="2025-11-25 07:27:31.157933915 +0000 UTC m=+715.326129636" lastFinishedPulling="2025-11-25 07:27:33.280639163 +0000 UTC m=+717.448834884" observedRunningTime="2025-11-25 07:27:34.346519199 +0000 UTC m=+718.514714960" watchObservedRunningTime="2025-11-25 07:27:34.349137539 +0000 UTC m=+718.517333270" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.381337 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-jcbrg"] Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 
07:27:40.383375 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-jcbrg" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.384989 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-rk77g"] Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.385649 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rk77g" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.386964 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-6bt77" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.387519 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.399533 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-jcbrg"] Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.405677 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-rk77g"] Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.411539 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-ddx45"] Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.412381 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-ddx45" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.488401 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkq2n\" (UniqueName: \"kubernetes.io/projected/9cece50c-ecd0-4349-8c5f-26d814c988c0-kube-api-access-xkq2n\") pod \"nmstate-metrics-5dcf9c57c5-jcbrg\" (UID: \"9cece50c-ecd0-4349-8c5f-26d814c988c0\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-jcbrg" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.488447 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp8zl\" (UniqueName: \"kubernetes.io/projected/cfab8bc8-7fd5-4a73-a58a-e92b3ad46845-kube-api-access-gp8zl\") pod \"nmstate-webhook-6b89b748d8-rk77g\" (UID: \"cfab8bc8-7fd5-4a73-a58a-e92b3ad46845\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rk77g" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.488780 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cfab8bc8-7fd5-4a73-a58a-e92b3ad46845-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-rk77g\" (UID: \"cfab8bc8-7fd5-4a73-a58a-e92b3ad46845\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rk77g" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.522540 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8rr2k"] Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.523370 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8rr2k" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.525194 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.525328 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-2qvs2" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.527265 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.530919 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8rr2k"] Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.590196 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkq2n\" (UniqueName: \"kubernetes.io/projected/9cece50c-ecd0-4349-8c5f-26d814c988c0-kube-api-access-xkq2n\") pod \"nmstate-metrics-5dcf9c57c5-jcbrg\" (UID: \"9cece50c-ecd0-4349-8c5f-26d814c988c0\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-jcbrg" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.590241 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp8zl\" (UniqueName: \"kubernetes.io/projected/cfab8bc8-7fd5-4a73-a58a-e92b3ad46845-kube-api-access-gp8zl\") pod \"nmstate-webhook-6b89b748d8-rk77g\" (UID: \"cfab8bc8-7fd5-4a73-a58a-e92b3ad46845\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rk77g" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.590273 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/eca9619f-360b-466d-9413-cce43ac0e5de-dbus-socket\") pod \"nmstate-handler-ddx45\" (UID: \"eca9619f-360b-466d-9413-cce43ac0e5de\") " 
pod="openshift-nmstate/nmstate-handler-ddx45" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.590323 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cfab8bc8-7fd5-4a73-a58a-e92b3ad46845-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-rk77g\" (UID: \"cfab8bc8-7fd5-4a73-a58a-e92b3ad46845\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rk77g" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.590340 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbvgg\" (UniqueName: \"kubernetes.io/projected/eca9619f-360b-466d-9413-cce43ac0e5de-kube-api-access-dbvgg\") pod \"nmstate-handler-ddx45\" (UID: \"eca9619f-360b-466d-9413-cce43ac0e5de\") " pod="openshift-nmstate/nmstate-handler-ddx45" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.590360 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/eca9619f-360b-466d-9413-cce43ac0e5de-ovs-socket\") pod \"nmstate-handler-ddx45\" (UID: \"eca9619f-360b-466d-9413-cce43ac0e5de\") " pod="openshift-nmstate/nmstate-handler-ddx45" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.590395 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/eca9619f-360b-466d-9413-cce43ac0e5de-nmstate-lock\") pod \"nmstate-handler-ddx45\" (UID: \"eca9619f-360b-466d-9413-cce43ac0e5de\") " pod="openshift-nmstate/nmstate-handler-ddx45" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.596472 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cfab8bc8-7fd5-4a73-a58a-e92b3ad46845-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-rk77g\" (UID: \"cfab8bc8-7fd5-4a73-a58a-e92b3ad46845\") " 
pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rk77g" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.604583 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp8zl\" (UniqueName: \"kubernetes.io/projected/cfab8bc8-7fd5-4a73-a58a-e92b3ad46845-kube-api-access-gp8zl\") pod \"nmstate-webhook-6b89b748d8-rk77g\" (UID: \"cfab8bc8-7fd5-4a73-a58a-e92b3ad46845\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rk77g" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.607264 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkq2n\" (UniqueName: \"kubernetes.io/projected/9cece50c-ecd0-4349-8c5f-26d814c988c0-kube-api-access-xkq2n\") pod \"nmstate-metrics-5dcf9c57c5-jcbrg\" (UID: \"9cece50c-ecd0-4349-8c5f-26d814c988c0\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-jcbrg" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.691866 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t4ks\" (UniqueName: \"kubernetes.io/projected/484bc3f6-cd90-415a-99d9-0496929f73f7-kube-api-access-5t4ks\") pod \"nmstate-console-plugin-5874bd7bc5-8rr2k\" (UID: \"484bc3f6-cd90-415a-99d9-0496929f73f7\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8rr2k" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.692256 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/eca9619f-360b-466d-9413-cce43ac0e5de-nmstate-lock\") pod \"nmstate-handler-ddx45\" (UID: \"eca9619f-360b-466d-9413-cce43ac0e5de\") " pod="openshift-nmstate/nmstate-handler-ddx45" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.692328 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/484bc3f6-cd90-415a-99d9-0496929f73f7-nginx-conf\") pod 
\"nmstate-console-plugin-5874bd7bc5-8rr2k\" (UID: \"484bc3f6-cd90-415a-99d9-0496929f73f7\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8rr2k" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.692396 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/eca9619f-360b-466d-9413-cce43ac0e5de-dbus-socket\") pod \"nmstate-handler-ddx45\" (UID: \"eca9619f-360b-466d-9413-cce43ac0e5de\") " pod="openshift-nmstate/nmstate-handler-ddx45" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.692422 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/484bc3f6-cd90-415a-99d9-0496929f73f7-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-8rr2k\" (UID: \"484bc3f6-cd90-415a-99d9-0496929f73f7\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8rr2k" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.692462 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbvgg\" (UniqueName: \"kubernetes.io/projected/eca9619f-360b-466d-9413-cce43ac0e5de-kube-api-access-dbvgg\") pod \"nmstate-handler-ddx45\" (UID: \"eca9619f-360b-466d-9413-cce43ac0e5de\") " pod="openshift-nmstate/nmstate-handler-ddx45" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.692490 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/eca9619f-360b-466d-9413-cce43ac0e5de-ovs-socket\") pod \"nmstate-handler-ddx45\" (UID: \"eca9619f-360b-466d-9413-cce43ac0e5de\") " pod="openshift-nmstate/nmstate-handler-ddx45" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.692609 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/eca9619f-360b-466d-9413-cce43ac0e5de-ovs-socket\") pod 
\"nmstate-handler-ddx45\" (UID: \"eca9619f-360b-466d-9413-cce43ac0e5de\") " pod="openshift-nmstate/nmstate-handler-ddx45" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.692658 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/eca9619f-360b-466d-9413-cce43ac0e5de-nmstate-lock\") pod \"nmstate-handler-ddx45\" (UID: \"eca9619f-360b-466d-9413-cce43ac0e5de\") " pod="openshift-nmstate/nmstate-handler-ddx45" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.692995 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/eca9619f-360b-466d-9413-cce43ac0e5de-dbus-socket\") pod \"nmstate-handler-ddx45\" (UID: \"eca9619f-360b-466d-9413-cce43ac0e5de\") " pod="openshift-nmstate/nmstate-handler-ddx45" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.704910 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-jcbrg" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.712900 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rk77g" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.721374 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbvgg\" (UniqueName: \"kubernetes.io/projected/eca9619f-360b-466d-9413-cce43ac0e5de-kube-api-access-dbvgg\") pod \"nmstate-handler-ddx45\" (UID: \"eca9619f-360b-466d-9413-cce43ac0e5de\") " pod="openshift-nmstate/nmstate-handler-ddx45" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.727906 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-75c876967f-b49ls"] Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.728694 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.736776 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-ddx45" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.791972 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-75c876967f-b49ls"] Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.793028 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00ffde50-2dbd-4103-9f7a-ff11dd2b84c4-service-ca\") pod \"console-75c876967f-b49ls\" (UID: \"00ffde50-2dbd-4103-9f7a-ff11dd2b84c4\") " pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.793084 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/484bc3f6-cd90-415a-99d9-0496929f73f7-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-8rr2k\" (UID: \"484bc3f6-cd90-415a-99d9-0496929f73f7\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8rr2k" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.793111 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/00ffde50-2dbd-4103-9f7a-ff11dd2b84c4-console-serving-cert\") pod \"console-75c876967f-b49ls\" (UID: \"00ffde50-2dbd-4103-9f7a-ff11dd2b84c4\") " pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.793137 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4x2c\" (UniqueName: \"kubernetes.io/projected/00ffde50-2dbd-4103-9f7a-ff11dd2b84c4-kube-api-access-l4x2c\") pod \"console-75c876967f-b49ls\" (UID: 
\"00ffde50-2dbd-4103-9f7a-ff11dd2b84c4\") " pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.793170 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/484bc3f6-cd90-415a-99d9-0496929f73f7-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-8rr2k\" (UID: \"484bc3f6-cd90-415a-99d9-0496929f73f7\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8rr2k" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.793191 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/00ffde50-2dbd-4103-9f7a-ff11dd2b84c4-console-config\") pod \"console-75c876967f-b49ls\" (UID: \"00ffde50-2dbd-4103-9f7a-ff11dd2b84c4\") " pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.793207 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00ffde50-2dbd-4103-9f7a-ff11dd2b84c4-trusted-ca-bundle\") pod \"console-75c876967f-b49ls\" (UID: \"00ffde50-2dbd-4103-9f7a-ff11dd2b84c4\") " pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.793250 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/00ffde50-2dbd-4103-9f7a-ff11dd2b84c4-oauth-serving-cert\") pod \"console-75c876967f-b49ls\" (UID: \"00ffde50-2dbd-4103-9f7a-ff11dd2b84c4\") " pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.793272 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t4ks\" (UniqueName: 
\"kubernetes.io/projected/484bc3f6-cd90-415a-99d9-0496929f73f7-kube-api-access-5t4ks\") pod \"nmstate-console-plugin-5874bd7bc5-8rr2k\" (UID: \"484bc3f6-cd90-415a-99d9-0496929f73f7\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8rr2k" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.793288 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/00ffde50-2dbd-4103-9f7a-ff11dd2b84c4-console-oauth-config\") pod \"console-75c876967f-b49ls\" (UID: \"00ffde50-2dbd-4103-9f7a-ff11dd2b84c4\") " pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.794375 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/484bc3f6-cd90-415a-99d9-0496929f73f7-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-8rr2k\" (UID: \"484bc3f6-cd90-415a-99d9-0496929f73f7\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8rr2k" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.798052 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/484bc3f6-cd90-415a-99d9-0496929f73f7-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-8rr2k\" (UID: \"484bc3f6-cd90-415a-99d9-0496929f73f7\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8rr2k" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.815246 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t4ks\" (UniqueName: \"kubernetes.io/projected/484bc3f6-cd90-415a-99d9-0496929f73f7-kube-api-access-5t4ks\") pod \"nmstate-console-plugin-5874bd7bc5-8rr2k\" (UID: \"484bc3f6-cd90-415a-99d9-0496929f73f7\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8rr2k" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.840800 5043 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8rr2k" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.894173 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/00ffde50-2dbd-4103-9f7a-ff11dd2b84c4-console-serving-cert\") pod \"console-75c876967f-b49ls\" (UID: \"00ffde50-2dbd-4103-9f7a-ff11dd2b84c4\") " pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.894485 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4x2c\" (UniqueName: \"kubernetes.io/projected/00ffde50-2dbd-4103-9f7a-ff11dd2b84c4-kube-api-access-l4x2c\") pod \"console-75c876967f-b49ls\" (UID: \"00ffde50-2dbd-4103-9f7a-ff11dd2b84c4\") " pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.894509 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/00ffde50-2dbd-4103-9f7a-ff11dd2b84c4-console-config\") pod \"console-75c876967f-b49ls\" (UID: \"00ffde50-2dbd-4103-9f7a-ff11dd2b84c4\") " pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.894528 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00ffde50-2dbd-4103-9f7a-ff11dd2b84c4-trusted-ca-bundle\") pod \"console-75c876967f-b49ls\" (UID: \"00ffde50-2dbd-4103-9f7a-ff11dd2b84c4\") " pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.894557 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/00ffde50-2dbd-4103-9f7a-ff11dd2b84c4-oauth-serving-cert\") pod 
\"console-75c876967f-b49ls\" (UID: \"00ffde50-2dbd-4103-9f7a-ff11dd2b84c4\") " pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.894576 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/00ffde50-2dbd-4103-9f7a-ff11dd2b84c4-console-oauth-config\") pod \"console-75c876967f-b49ls\" (UID: \"00ffde50-2dbd-4103-9f7a-ff11dd2b84c4\") " pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.894647 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00ffde50-2dbd-4103-9f7a-ff11dd2b84c4-service-ca\") pod \"console-75c876967f-b49ls\" (UID: \"00ffde50-2dbd-4103-9f7a-ff11dd2b84c4\") " pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.895662 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/00ffde50-2dbd-4103-9f7a-ff11dd2b84c4-oauth-serving-cert\") pod \"console-75c876967f-b49ls\" (UID: \"00ffde50-2dbd-4103-9f7a-ff11dd2b84c4\") " pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.895671 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00ffde50-2dbd-4103-9f7a-ff11dd2b84c4-service-ca\") pod \"console-75c876967f-b49ls\" (UID: \"00ffde50-2dbd-4103-9f7a-ff11dd2b84c4\") " pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.895714 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/00ffde50-2dbd-4103-9f7a-ff11dd2b84c4-console-config\") pod \"console-75c876967f-b49ls\" (UID: \"00ffde50-2dbd-4103-9f7a-ff11dd2b84c4\") 
" pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.897229 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00ffde50-2dbd-4103-9f7a-ff11dd2b84c4-trusted-ca-bundle\") pod \"console-75c876967f-b49ls\" (UID: \"00ffde50-2dbd-4103-9f7a-ff11dd2b84c4\") " pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.898266 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/00ffde50-2dbd-4103-9f7a-ff11dd2b84c4-console-serving-cert\") pod \"console-75c876967f-b49ls\" (UID: \"00ffde50-2dbd-4103-9f7a-ff11dd2b84c4\") " pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.906310 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/00ffde50-2dbd-4103-9f7a-ff11dd2b84c4-console-oauth-config\") pod \"console-75c876967f-b49ls\" (UID: \"00ffde50-2dbd-4103-9f7a-ff11dd2b84c4\") " pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.911015 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4x2c\" (UniqueName: \"kubernetes.io/projected/00ffde50-2dbd-4103-9f7a-ff11dd2b84c4-kube-api-access-l4x2c\") pod \"console-75c876967f-b49ls\" (UID: \"00ffde50-2dbd-4103-9f7a-ff11dd2b84c4\") " pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:40 crc kubenswrapper[5043]: I1125 07:27:40.970744 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-rk77g"] Nov 25 07:27:40 crc kubenswrapper[5043]: W1125 07:27:40.971118 5043 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfab8bc8_7fd5_4a73_a58a_e92b3ad46845.slice/crio-9cdce143a4ca752c7802e79e7c5c30928b66793e5de824e869a005befa786366 WatchSource:0}: Error finding container 9cdce143a4ca752c7802e79e7c5c30928b66793e5de824e869a005befa786366: Status 404 returned error can't find the container with id 9cdce143a4ca752c7802e79e7c5c30928b66793e5de824e869a005befa786366 Nov 25 07:27:41 crc kubenswrapper[5043]: I1125 07:27:41.046180 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8rr2k"] Nov 25 07:27:41 crc kubenswrapper[5043]: W1125 07:27:41.053337 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod484bc3f6_cd90_415a_99d9_0496929f73f7.slice/crio-890b0342c8921bb531a44aa8a16f4189c9e7825b341b1afc548e0aa02610857c WatchSource:0}: Error finding container 890b0342c8921bb531a44aa8a16f4189c9e7825b341b1afc548e0aa02610857c: Status 404 returned error can't find the container with id 890b0342c8921bb531a44aa8a16f4189c9e7825b341b1afc548e0aa02610857c Nov 25 07:27:41 crc kubenswrapper[5043]: I1125 07:27:41.102881 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:41 crc kubenswrapper[5043]: I1125 07:27:41.252495 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-jcbrg"] Nov 25 07:27:41 crc kubenswrapper[5043]: W1125 07:27:41.259192 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cece50c_ecd0_4349_8c5f_26d814c988c0.slice/crio-29b00d17782385447c0faffca8d5772a32c9c5b981df210187ef5274f7c54e8f WatchSource:0}: Error finding container 29b00d17782385447c0faffca8d5772a32c9c5b981df210187ef5274f7c54e8f: Status 404 returned error can't find the container with id 29b00d17782385447c0faffca8d5772a32c9c5b981df210187ef5274f7c54e8f Nov 25 07:27:41 crc kubenswrapper[5043]: I1125 07:27:41.321658 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-75c876967f-b49ls"] Nov 25 07:27:41 crc kubenswrapper[5043]: W1125 07:27:41.330103 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00ffde50_2dbd_4103_9f7a_ff11dd2b84c4.slice/crio-61f43592d2b27c319f6cd1322cb6ed3cb988de4df6f107f7738ab723d12a992b WatchSource:0}: Error finding container 61f43592d2b27c319f6cd1322cb6ed3cb988de4df6f107f7738ab723d12a992b: Status 404 returned error can't find the container with id 61f43592d2b27c319f6cd1322cb6ed3cb988de4df6f107f7738ab723d12a992b Nov 25 07:27:41 crc kubenswrapper[5043]: I1125 07:27:41.359715 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rk77g" event={"ID":"cfab8bc8-7fd5-4a73-a58a-e92b3ad46845","Type":"ContainerStarted","Data":"9cdce143a4ca752c7802e79e7c5c30928b66793e5de824e869a005befa786366"} Nov 25 07:27:41 crc kubenswrapper[5043]: I1125 07:27:41.360595 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8rr2k" event={"ID":"484bc3f6-cd90-415a-99d9-0496929f73f7","Type":"ContainerStarted","Data":"890b0342c8921bb531a44aa8a16f4189c9e7825b341b1afc548e0aa02610857c"} Nov 25 07:27:41 crc kubenswrapper[5043]: I1125 07:27:41.361567 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75c876967f-b49ls" event={"ID":"00ffde50-2dbd-4103-9f7a-ff11dd2b84c4","Type":"ContainerStarted","Data":"61f43592d2b27c319f6cd1322cb6ed3cb988de4df6f107f7738ab723d12a992b"} Nov 25 07:27:41 crc kubenswrapper[5043]: I1125 07:27:41.362716 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-jcbrg" event={"ID":"9cece50c-ecd0-4349-8c5f-26d814c988c0","Type":"ContainerStarted","Data":"29b00d17782385447c0faffca8d5772a32c9c5b981df210187ef5274f7c54e8f"} Nov 25 07:27:41 crc kubenswrapper[5043]: I1125 07:27:41.363725 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ddx45" event={"ID":"eca9619f-360b-466d-9413-cce43ac0e5de","Type":"ContainerStarted","Data":"2642818457b5dac71e31bff5838047f3b978b3c31e05e688c8964722d3d9e4e2"} Nov 25 07:27:42 crc kubenswrapper[5043]: I1125 07:27:42.372046 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75c876967f-b49ls" event={"ID":"00ffde50-2dbd-4103-9f7a-ff11dd2b84c4","Type":"ContainerStarted","Data":"5ebb3c1285ea6317266d25aafdc35fd3a0918f4ebffa09801497c360fdcfddc5"} Nov 25 07:27:42 crc kubenswrapper[5043]: I1125 07:27:42.395291 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-75c876967f-b49ls" podStartSLOduration=2.395268684 podStartE2EDuration="2.395268684s" podCreationTimestamp="2025-11-25 07:27:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:27:42.393716733 +0000 UTC m=+726.561912474" 
watchObservedRunningTime="2025-11-25 07:27:42.395268684 +0000 UTC m=+726.563464405" Nov 25 07:27:44 crc kubenswrapper[5043]: I1125 07:27:44.381929 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8rr2k" event={"ID":"484bc3f6-cd90-415a-99d9-0496929f73f7","Type":"ContainerStarted","Data":"876629ab477c6b9fa27aeff74c1b3cb0c352937bbbd166d861606e75451f3fd4"} Nov 25 07:27:44 crc kubenswrapper[5043]: I1125 07:27:44.383094 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-jcbrg" event={"ID":"9cece50c-ecd0-4349-8c5f-26d814c988c0","Type":"ContainerStarted","Data":"c3f7f7dd22af22ff3db28e83c138f7dd0e36edbf9d17438e9b803be9b545a640"} Nov 25 07:27:44 crc kubenswrapper[5043]: I1125 07:27:44.385103 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ddx45" event={"ID":"eca9619f-360b-466d-9413-cce43ac0e5de","Type":"ContainerStarted","Data":"f66006279a159faaaa8dcc6dec4576280823f56d9dfd205c5ed79ef2a12c98c3"} Nov 25 07:27:44 crc kubenswrapper[5043]: I1125 07:27:44.385180 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-ddx45" Nov 25 07:27:44 crc kubenswrapper[5043]: I1125 07:27:44.386775 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rk77g" event={"ID":"cfab8bc8-7fd5-4a73-a58a-e92b3ad46845","Type":"ContainerStarted","Data":"ac977c942f5705c3c3433b347127b24747fed7e16706f02544926e97842bcd5c"} Nov 25 07:27:44 crc kubenswrapper[5043]: I1125 07:27:44.386865 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rk77g" Nov 25 07:27:44 crc kubenswrapper[5043]: I1125 07:27:44.400289 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-8rr2k" podStartSLOduration=1.9442146949999999 
podStartE2EDuration="4.400263968s" podCreationTimestamp="2025-11-25 07:27:40 +0000 UTC" firstStartedPulling="2025-11-25 07:27:41.055243012 +0000 UTC m=+725.223438733" lastFinishedPulling="2025-11-25 07:27:43.511292285 +0000 UTC m=+727.679488006" observedRunningTime="2025-11-25 07:27:44.395886581 +0000 UTC m=+728.564082462" watchObservedRunningTime="2025-11-25 07:27:44.400263968 +0000 UTC m=+728.568459689" Nov 25 07:27:44 crc kubenswrapper[5043]: I1125 07:27:44.413258 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-ddx45" podStartSLOduration=1.667616185 podStartE2EDuration="4.413239507s" podCreationTimestamp="2025-11-25 07:27:40 +0000 UTC" firstStartedPulling="2025-11-25 07:27:40.794794302 +0000 UTC m=+724.962990023" lastFinishedPulling="2025-11-25 07:27:43.540417604 +0000 UTC m=+727.708613345" observedRunningTime="2025-11-25 07:27:44.413227336 +0000 UTC m=+728.581423057" watchObservedRunningTime="2025-11-25 07:27:44.413239507 +0000 UTC m=+728.581435228" Nov 25 07:27:44 crc kubenswrapper[5043]: I1125 07:27:44.433816 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rk77g" podStartSLOduration=1.846884508 podStartE2EDuration="4.433793927s" podCreationTimestamp="2025-11-25 07:27:40 +0000 UTC" firstStartedPulling="2025-11-25 07:27:40.972802533 +0000 UTC m=+725.140998254" lastFinishedPulling="2025-11-25 07:27:43.559711942 +0000 UTC m=+727.727907673" observedRunningTime="2025-11-25 07:27:44.427860728 +0000 UTC m=+728.596056439" watchObservedRunningTime="2025-11-25 07:27:44.433793927 +0000 UTC m=+728.601989658" Nov 25 07:27:46 crc kubenswrapper[5043]: I1125 07:27:46.401832 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-jcbrg" event={"ID":"9cece50c-ecd0-4349-8c5f-26d814c988c0","Type":"ContainerStarted","Data":"5c70ca953f87cca56f726ee2a95d5e55bf43152b65fbdd098df80dac24765ed3"} Nov 25 
07:27:50 crc kubenswrapper[5043]: I1125 07:27:50.767330 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-ddx45" Nov 25 07:27:50 crc kubenswrapper[5043]: I1125 07:27:50.788079 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-jcbrg" podStartSLOduration=6.373378539 podStartE2EDuration="10.788060972s" podCreationTimestamp="2025-11-25 07:27:40 +0000 UTC" firstStartedPulling="2025-11-25 07:27:41.262842366 +0000 UTC m=+725.431038087" lastFinishedPulling="2025-11-25 07:27:45.677524799 +0000 UTC m=+729.845720520" observedRunningTime="2025-11-25 07:27:46.423715087 +0000 UTC m=+730.591910818" watchObservedRunningTime="2025-11-25 07:27:50.788060972 +0000 UTC m=+734.956256693" Nov 25 07:27:51 crc kubenswrapper[5043]: I1125 07:27:51.103252 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:51 crc kubenswrapper[5043]: I1125 07:27:51.103798 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:51 crc kubenswrapper[5043]: I1125 07:27:51.109033 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:51 crc kubenswrapper[5043]: I1125 07:27:51.444278 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-75c876967f-b49ls" Nov 25 07:27:51 crc kubenswrapper[5043]: I1125 07:27:51.518421 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-lbz4p"] Nov 25 07:28:00 crc kubenswrapper[5043]: I1125 07:28:00.722878 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rk77g" Nov 25 07:28:13 crc kubenswrapper[5043]: I1125 07:28:13.116084 5043 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6"] Nov 25 07:28:13 crc kubenswrapper[5043]: I1125 07:28:13.119236 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6" Nov 25 07:28:13 crc kubenswrapper[5043]: I1125 07:28:13.121792 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 07:28:13 crc kubenswrapper[5043]: I1125 07:28:13.131664 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6"] Nov 25 07:28:13 crc kubenswrapper[5043]: I1125 07:28:13.155067 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75d56d2d-27c2-4a6d-9f9f-3975af3a6bed-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6\" (UID: \"75d56d2d-27c2-4a6d-9f9f-3975af3a6bed\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6" Nov 25 07:28:13 crc kubenswrapper[5043]: I1125 07:28:13.155159 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75d56d2d-27c2-4a6d-9f9f-3975af3a6bed-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6\" (UID: \"75d56d2d-27c2-4a6d-9f9f-3975af3a6bed\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6" Nov 25 07:28:13 crc kubenswrapper[5043]: I1125 07:28:13.155215 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx66m\" (UniqueName: \"kubernetes.io/projected/75d56d2d-27c2-4a6d-9f9f-3975af3a6bed-kube-api-access-sx66m\") pod 
\"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6\" (UID: \"75d56d2d-27c2-4a6d-9f9f-3975af3a6bed\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6" Nov 25 07:28:13 crc kubenswrapper[5043]: I1125 07:28:13.256830 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75d56d2d-27c2-4a6d-9f9f-3975af3a6bed-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6\" (UID: \"75d56d2d-27c2-4a6d-9f9f-3975af3a6bed\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6" Nov 25 07:28:13 crc kubenswrapper[5043]: I1125 07:28:13.257136 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx66m\" (UniqueName: \"kubernetes.io/projected/75d56d2d-27c2-4a6d-9f9f-3975af3a6bed-kube-api-access-sx66m\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6\" (UID: \"75d56d2d-27c2-4a6d-9f9f-3975af3a6bed\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6" Nov 25 07:28:13 crc kubenswrapper[5043]: I1125 07:28:13.257244 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75d56d2d-27c2-4a6d-9f9f-3975af3a6bed-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6\" (UID: \"75d56d2d-27c2-4a6d-9f9f-3975af3a6bed\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6" Nov 25 07:28:13 crc kubenswrapper[5043]: I1125 07:28:13.257487 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75d56d2d-27c2-4a6d-9f9f-3975af3a6bed-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6\" (UID: \"75d56d2d-27c2-4a6d-9f9f-3975af3a6bed\") " 
pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6" Nov 25 07:28:13 crc kubenswrapper[5043]: I1125 07:28:13.257578 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75d56d2d-27c2-4a6d-9f9f-3975af3a6bed-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6\" (UID: \"75d56d2d-27c2-4a6d-9f9f-3975af3a6bed\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6" Nov 25 07:28:13 crc kubenswrapper[5043]: I1125 07:28:13.279258 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx66m\" (UniqueName: \"kubernetes.io/projected/75d56d2d-27c2-4a6d-9f9f-3975af3a6bed-kube-api-access-sx66m\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6\" (UID: \"75d56d2d-27c2-4a6d-9f9f-3975af3a6bed\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6" Nov 25 07:28:13 crc kubenswrapper[5043]: I1125 07:28:13.445502 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6" Nov 25 07:28:13 crc kubenswrapper[5043]: I1125 07:28:13.917639 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6"] Nov 25 07:28:14 crc kubenswrapper[5043]: I1125 07:28:14.600355 5043 generic.go:334] "Generic (PLEG): container finished" podID="75d56d2d-27c2-4a6d-9f9f-3975af3a6bed" containerID="8994612ffc2bae035a41b9a533dcf73041e204bc428bb4a25f5c8e54321d66f9" exitCode=0 Nov 25 07:28:14 crc kubenswrapper[5043]: I1125 07:28:14.600510 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6" event={"ID":"75d56d2d-27c2-4a6d-9f9f-3975af3a6bed","Type":"ContainerDied","Data":"8994612ffc2bae035a41b9a533dcf73041e204bc428bb4a25f5c8e54321d66f9"} Nov 25 07:28:14 crc kubenswrapper[5043]: I1125 07:28:14.600655 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6" event={"ID":"75d56d2d-27c2-4a6d-9f9f-3975af3a6bed","Type":"ContainerStarted","Data":"63d2756d0918c6c1f349451eed134bda43002e697678b8124d3f6caff7d2ebb3"} Nov 25 07:28:14 crc kubenswrapper[5043]: I1125 07:28:14.633502 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dfsn8"] Nov 25 07:28:14 crc kubenswrapper[5043]: I1125 07:28:14.633864 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8" podUID="e0f07e95-4043-41c1-9f91-b79a6f7b9bbc" containerName="controller-manager" containerID="cri-o://21c18547d4cc2b8ca1874d3452249d660033deed4fb42e10d00ead1d19e43ab4" gracePeriod=30 Nov 25 07:28:14 crc kubenswrapper[5043]: I1125 07:28:14.759387 5043 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k"] Nov 25 07:28:14 crc kubenswrapper[5043]: I1125 07:28:14.759649 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k" podUID="f75a6197-9de8-4720-af31-ebc12fe35e48" containerName="route-controller-manager" containerID="cri-o://324421170ddd789aa79c6e2e6fa02d68ce42a2f4ea248cc93dec7e8ea7a49cab" gracePeriod=30 Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.027529 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.079119 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-client-ca\") pod \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\" (UID: \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\") " Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.079164 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-config\") pod \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\" (UID: \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\") " Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.079206 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-serving-cert\") pod \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\" (UID: \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\") " Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.079275 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-proxy-ca-bundles\") pod \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\" (UID: \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\") " Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.079303 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-729pp\" (UniqueName: \"kubernetes.io/projected/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-kube-api-access-729pp\") pod \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\" (UID: \"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc\") " Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.080146 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e0f07e95-4043-41c1-9f91-b79a6f7b9bbc" (UID: "e0f07e95-4043-41c1-9f91-b79a6f7b9bbc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.080182 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-client-ca" (OuterVolumeSpecName: "client-ca") pod "e0f07e95-4043-41c1-9f91-b79a6f7b9bbc" (UID: "e0f07e95-4043-41c1-9f91-b79a6f7b9bbc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.080219 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-config" (OuterVolumeSpecName: "config") pod "e0f07e95-4043-41c1-9f91-b79a6f7b9bbc" (UID: "e0f07e95-4043-41c1-9f91-b79a6f7b9bbc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.088244 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-kube-api-access-729pp" (OuterVolumeSpecName: "kube-api-access-729pp") pod "e0f07e95-4043-41c1-9f91-b79a6f7b9bbc" (UID: "e0f07e95-4043-41c1-9f91-b79a6f7b9bbc"). InnerVolumeSpecName "kube-api-access-729pp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.089706 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e0f07e95-4043-41c1-9f91-b79a6f7b9bbc" (UID: "e0f07e95-4043-41c1-9f91-b79a6f7b9bbc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.112559 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.180434 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f75a6197-9de8-4720-af31-ebc12fe35e48-serving-cert\") pod \"f75a6197-9de8-4720-af31-ebc12fe35e48\" (UID: \"f75a6197-9de8-4720-af31-ebc12fe35e48\") " Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.180546 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f75a6197-9de8-4720-af31-ebc12fe35e48-client-ca\") pod \"f75a6197-9de8-4720-af31-ebc12fe35e48\" (UID: \"f75a6197-9de8-4720-af31-ebc12fe35e48\") " Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.180594 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f75a6197-9de8-4720-af31-ebc12fe35e48-config\") pod \"f75a6197-9de8-4720-af31-ebc12fe35e48\" (UID: \"f75a6197-9de8-4720-af31-ebc12fe35e48\") " Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.180682 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj9z5\" (UniqueName: \"kubernetes.io/projected/f75a6197-9de8-4720-af31-ebc12fe35e48-kube-api-access-wj9z5\") pod \"f75a6197-9de8-4720-af31-ebc12fe35e48\" (UID: \"f75a6197-9de8-4720-af31-ebc12fe35e48\") " Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.180909 5043 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.180926 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-config\") on node \"crc\" 
DevicePath \"\"" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.180934 5043 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.180944 5043 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.180955 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-729pp\" (UniqueName: \"kubernetes.io/projected/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc-kube-api-access-729pp\") on node \"crc\" DevicePath \"\"" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.181522 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f75a6197-9de8-4720-af31-ebc12fe35e48-client-ca" (OuterVolumeSpecName: "client-ca") pod "f75a6197-9de8-4720-af31-ebc12fe35e48" (UID: "f75a6197-9de8-4720-af31-ebc12fe35e48"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.181554 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f75a6197-9de8-4720-af31-ebc12fe35e48-config" (OuterVolumeSpecName: "config") pod "f75a6197-9de8-4720-af31-ebc12fe35e48" (UID: "f75a6197-9de8-4720-af31-ebc12fe35e48"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.183745 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f75a6197-9de8-4720-af31-ebc12fe35e48-kube-api-access-wj9z5" (OuterVolumeSpecName: "kube-api-access-wj9z5") pod "f75a6197-9de8-4720-af31-ebc12fe35e48" (UID: "f75a6197-9de8-4720-af31-ebc12fe35e48"). InnerVolumeSpecName "kube-api-access-wj9z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.183975 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f75a6197-9de8-4720-af31-ebc12fe35e48-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f75a6197-9de8-4720-af31-ebc12fe35e48" (UID: "f75a6197-9de8-4720-af31-ebc12fe35e48"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.282598 5043 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f75a6197-9de8-4720-af31-ebc12fe35e48-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.282662 5043 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f75a6197-9de8-4720-af31-ebc12fe35e48-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.282674 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f75a6197-9de8-4720-af31-ebc12fe35e48-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.282686 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj9z5\" (UniqueName: \"kubernetes.io/projected/f75a6197-9de8-4720-af31-ebc12fe35e48-kube-api-access-wj9z5\") on node \"crc\" DevicePath 
\"\"" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.608189 5043 generic.go:334] "Generic (PLEG): container finished" podID="e0f07e95-4043-41c1-9f91-b79a6f7b9bbc" containerID="21c18547d4cc2b8ca1874d3452249d660033deed4fb42e10d00ead1d19e43ab4" exitCode=0 Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.608274 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8" event={"ID":"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc","Type":"ContainerDied","Data":"21c18547d4cc2b8ca1874d3452249d660033deed4fb42e10d00ead1d19e43ab4"} Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.608300 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8" event={"ID":"e0f07e95-4043-41c1-9f91-b79a6f7b9bbc","Type":"ContainerDied","Data":"81ee5b9c0e3b7721b9c49c79f1fe0089e945ea1fef0f68819458f8ad454c94e4"} Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.608316 5043 scope.go:117] "RemoveContainer" containerID="21c18547d4cc2b8ca1874d3452249d660033deed4fb42e10d00ead1d19e43ab4" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.608424 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dfsn8" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.613865 5043 generic.go:334] "Generic (PLEG): container finished" podID="f75a6197-9de8-4720-af31-ebc12fe35e48" containerID="324421170ddd789aa79c6e2e6fa02d68ce42a2f4ea248cc93dec7e8ea7a49cab" exitCode=0 Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.613947 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k" event={"ID":"f75a6197-9de8-4720-af31-ebc12fe35e48","Type":"ContainerDied","Data":"324421170ddd789aa79c6e2e6fa02d68ce42a2f4ea248cc93dec7e8ea7a49cab"} Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.614001 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k" event={"ID":"f75a6197-9de8-4720-af31-ebc12fe35e48","Type":"ContainerDied","Data":"d17f8caae9b53749642a3f8b973a57d05ee6e852b8251047d4dac746297f00df"} Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.613998 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.632895 5043 scope.go:117] "RemoveContainer" containerID="21c18547d4cc2b8ca1874d3452249d660033deed4fb42e10d00ead1d19e43ab4" Nov 25 07:28:15 crc kubenswrapper[5043]: E1125 07:28:15.634284 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21c18547d4cc2b8ca1874d3452249d660033deed4fb42e10d00ead1d19e43ab4\": container with ID starting with 21c18547d4cc2b8ca1874d3452249d660033deed4fb42e10d00ead1d19e43ab4 not found: ID does not exist" containerID="21c18547d4cc2b8ca1874d3452249d660033deed4fb42e10d00ead1d19e43ab4" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.634402 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21c18547d4cc2b8ca1874d3452249d660033deed4fb42e10d00ead1d19e43ab4"} err="failed to get container status \"21c18547d4cc2b8ca1874d3452249d660033deed4fb42e10d00ead1d19e43ab4\": rpc error: code = NotFound desc = could not find container \"21c18547d4cc2b8ca1874d3452249d660033deed4fb42e10d00ead1d19e43ab4\": container with ID starting with 21c18547d4cc2b8ca1874d3452249d660033deed4fb42e10d00ead1d19e43ab4 not found: ID does not exist" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.634519 5043 scope.go:117] "RemoveContainer" containerID="324421170ddd789aa79c6e2e6fa02d68ce42a2f4ea248cc93dec7e8ea7a49cab" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.645969 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dfsn8"] Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.652229 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dfsn8"] Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.659179 5043 scope.go:117] 
"RemoveContainer" containerID="324421170ddd789aa79c6e2e6fa02d68ce42a2f4ea248cc93dec7e8ea7a49cab" Nov 25 07:28:15 crc kubenswrapper[5043]: E1125 07:28:15.659622 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"324421170ddd789aa79c6e2e6fa02d68ce42a2f4ea248cc93dec7e8ea7a49cab\": container with ID starting with 324421170ddd789aa79c6e2e6fa02d68ce42a2f4ea248cc93dec7e8ea7a49cab not found: ID does not exist" containerID="324421170ddd789aa79c6e2e6fa02d68ce42a2f4ea248cc93dec7e8ea7a49cab" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.659651 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"324421170ddd789aa79c6e2e6fa02d68ce42a2f4ea248cc93dec7e8ea7a49cab"} err="failed to get container status \"324421170ddd789aa79c6e2e6fa02d68ce42a2f4ea248cc93dec7e8ea7a49cab\": rpc error: code = NotFound desc = could not find container \"324421170ddd789aa79c6e2e6fa02d68ce42a2f4ea248cc93dec7e8ea7a49cab\": container with ID starting with 324421170ddd789aa79c6e2e6fa02d68ce42a2f4ea248cc93dec7e8ea7a49cab not found: ID does not exist" Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.680921 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k"] Nov 25 07:28:15 crc kubenswrapper[5043]: I1125 07:28:15.684640 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hpq4k"] Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.068372 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf"] Nov 25 07:28:16 crc kubenswrapper[5043]: E1125 07:28:16.068915 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f75a6197-9de8-4720-af31-ebc12fe35e48" containerName="route-controller-manager" Nov 25 07:28:16 crc kubenswrapper[5043]: 
I1125 07:28:16.068952 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="f75a6197-9de8-4720-af31-ebc12fe35e48" containerName="route-controller-manager" Nov 25 07:28:16 crc kubenswrapper[5043]: E1125 07:28:16.068990 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f07e95-4043-41c1-9f91-b79a6f7b9bbc" containerName="controller-manager" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.069005 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f07e95-4043-41c1-9f91-b79a6f7b9bbc" containerName="controller-manager" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.069244 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0f07e95-4043-41c1-9f91-b79a6f7b9bbc" containerName="controller-manager" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.069285 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="f75a6197-9de8-4720-af31-ebc12fe35e48" containerName="route-controller-manager" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.070216 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.072725 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.073242 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5df574ddf5-t6rkq"] Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.074200 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.074416 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5df574ddf5-t6rkq" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.074439 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.074462 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.074624 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.074929 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.077529 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.077722 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.078414 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.078485 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.078715 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.078744 5043 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"client-ca" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.088247 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.091429 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5df574ddf5-t6rkq"] Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.092077 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d-config\") pod \"controller-manager-8f9db6b8c-x4cbf\" (UID: \"a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d\") " pod="openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.092164 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d-proxy-ca-bundles\") pod \"controller-manager-8f9db6b8c-x4cbf\" (UID: \"a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d\") " pod="openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.092267 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w9ht\" (UniqueName: \"kubernetes.io/projected/a7b68a25-5f88-43a9-9d79-0706c3ae4e27-kube-api-access-8w9ht\") pod \"route-controller-manager-5df574ddf5-t6rkq\" (UID: \"a7b68a25-5f88-43a9-9d79-0706c3ae4e27\") " pod="openshift-route-controller-manager/route-controller-manager-5df574ddf5-t6rkq" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.092298 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a7b68a25-5f88-43a9-9d79-0706c3ae4e27-serving-cert\") pod \"route-controller-manager-5df574ddf5-t6rkq\" (UID: \"a7b68a25-5f88-43a9-9d79-0706c3ae4e27\") " pod="openshift-route-controller-manager/route-controller-manager-5df574ddf5-t6rkq" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.092333 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7b68a25-5f88-43a9-9d79-0706c3ae4e27-config\") pod \"route-controller-manager-5df574ddf5-t6rkq\" (UID: \"a7b68a25-5f88-43a9-9d79-0706c3ae4e27\") " pod="openshift-route-controller-manager/route-controller-manager-5df574ddf5-t6rkq" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.092361 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d-client-ca\") pod \"controller-manager-8f9db6b8c-x4cbf\" (UID: \"a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d\") " pod="openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.092390 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d-serving-cert\") pod \"controller-manager-8f9db6b8c-x4cbf\" (UID: \"a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d\") " pod="openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.092434 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7b68a25-5f88-43a9-9d79-0706c3ae4e27-client-ca\") pod \"route-controller-manager-5df574ddf5-t6rkq\" (UID: \"a7b68a25-5f88-43a9-9d79-0706c3ae4e27\") " 
pod="openshift-route-controller-manager/route-controller-manager-5df574ddf5-t6rkq" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.092490 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvcwq\" (UniqueName: \"kubernetes.io/projected/a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d-kube-api-access-vvcwq\") pod \"controller-manager-8f9db6b8c-x4cbf\" (UID: \"a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d\") " pod="openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.095234 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf"] Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.193364 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d-serving-cert\") pod \"controller-manager-8f9db6b8c-x4cbf\" (UID: \"a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d\") " pod="openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.193416 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7b68a25-5f88-43a9-9d79-0706c3ae4e27-config\") pod \"route-controller-manager-5df574ddf5-t6rkq\" (UID: \"a7b68a25-5f88-43a9-9d79-0706c3ae4e27\") " pod="openshift-route-controller-manager/route-controller-manager-5df574ddf5-t6rkq" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.193441 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d-client-ca\") pod \"controller-manager-8f9db6b8c-x4cbf\" (UID: \"a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d\") " pod="openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf" Nov 25 07:28:16 crc 
kubenswrapper[5043]: I1125 07:28:16.193469 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7b68a25-5f88-43a9-9d79-0706c3ae4e27-client-ca\") pod \"route-controller-manager-5df574ddf5-t6rkq\" (UID: \"a7b68a25-5f88-43a9-9d79-0706c3ae4e27\") " pod="openshift-route-controller-manager/route-controller-manager-5df574ddf5-t6rkq" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.193490 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvcwq\" (UniqueName: \"kubernetes.io/projected/a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d-kube-api-access-vvcwq\") pod \"controller-manager-8f9db6b8c-x4cbf\" (UID: \"a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d\") " pod="openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.193535 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d-config\") pod \"controller-manager-8f9db6b8c-x4cbf\" (UID: \"a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d\") " pod="openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.193567 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d-proxy-ca-bundles\") pod \"controller-manager-8f9db6b8c-x4cbf\" (UID: \"a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d\") " pod="openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.193643 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w9ht\" (UniqueName: \"kubernetes.io/projected/a7b68a25-5f88-43a9-9d79-0706c3ae4e27-kube-api-access-8w9ht\") pod \"route-controller-manager-5df574ddf5-t6rkq\" (UID: 
\"a7b68a25-5f88-43a9-9d79-0706c3ae4e27\") " pod="openshift-route-controller-manager/route-controller-manager-5df574ddf5-t6rkq" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.193667 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7b68a25-5f88-43a9-9d79-0706c3ae4e27-serving-cert\") pod \"route-controller-manager-5df574ddf5-t6rkq\" (UID: \"a7b68a25-5f88-43a9-9d79-0706c3ae4e27\") " pod="openshift-route-controller-manager/route-controller-manager-5df574ddf5-t6rkq" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.195045 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7b68a25-5f88-43a9-9d79-0706c3ae4e27-client-ca\") pod \"route-controller-manager-5df574ddf5-t6rkq\" (UID: \"a7b68a25-5f88-43a9-9d79-0706c3ae4e27\") " pod="openshift-route-controller-manager/route-controller-manager-5df574ddf5-t6rkq" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.195195 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d-client-ca\") pod \"controller-manager-8f9db6b8c-x4cbf\" (UID: \"a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d\") " pod="openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.195760 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7b68a25-5f88-43a9-9d79-0706c3ae4e27-config\") pod \"route-controller-manager-5df574ddf5-t6rkq\" (UID: \"a7b68a25-5f88-43a9-9d79-0706c3ae4e27\") " pod="openshift-route-controller-manager/route-controller-manager-5df574ddf5-t6rkq" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.196132 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d-config\") pod \"controller-manager-8f9db6b8c-x4cbf\" (UID: \"a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d\") " pod="openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.196154 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d-proxy-ca-bundles\") pod \"controller-manager-8f9db6b8c-x4cbf\" (UID: \"a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d\") " pod="openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.199939 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7b68a25-5f88-43a9-9d79-0706c3ae4e27-serving-cert\") pod \"route-controller-manager-5df574ddf5-t6rkq\" (UID: \"a7b68a25-5f88-43a9-9d79-0706c3ae4e27\") " pod="openshift-route-controller-manager/route-controller-manager-5df574ddf5-t6rkq" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.200142 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d-serving-cert\") pod \"controller-manager-8f9db6b8c-x4cbf\" (UID: \"a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d\") " pod="openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.214436 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvcwq\" (UniqueName: \"kubernetes.io/projected/a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d-kube-api-access-vvcwq\") pod \"controller-manager-8f9db6b8c-x4cbf\" (UID: \"a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d\") " pod="openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.215514 5043 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w9ht\" (UniqueName: \"kubernetes.io/projected/a7b68a25-5f88-43a9-9d79-0706c3ae4e27-kube-api-access-8w9ht\") pod \"route-controller-manager-5df574ddf5-t6rkq\" (UID: \"a7b68a25-5f88-43a9-9d79-0706c3ae4e27\") " pod="openshift-route-controller-manager/route-controller-manager-5df574ddf5-t6rkq" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.395032 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.400467 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5df574ddf5-t6rkq" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.560678 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-lbz4p" podUID="b18ece39-f2f5-41f9-b2e1-79f9f880791b" containerName="console" containerID="cri-o://33164da69808a1ed3c9ab89d900bcf6b7cd2e1640f5a2602f4f4c9cc2eaae19a" gracePeriod=15 Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.680051 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5df574ddf5-t6rkq"] Nov 25 07:28:16 crc kubenswrapper[5043]: W1125 07:28:16.703466 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7b68a25_5f88_43a9_9d79_0706c3ae4e27.slice/crio-a70f10fd8394f4f0e180bb5263ad18a11b38ec862b7a9455cca4305a7a4d95cb WatchSource:0}: Error finding container a70f10fd8394f4f0e180bb5263ad18a11b38ec862b7a9455cca4305a7a4d95cb: Status 404 returned error can't find the container with id a70f10fd8394f4f0e180bb5263ad18a11b38ec862b7a9455cca4305a7a4d95cb Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.853325 5043 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf"] Nov 25 07:28:16 crc kubenswrapper[5043]: W1125 07:28:16.865963 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6e39cdc_941f_40e3_a6b8_d4ce7498ec1d.slice/crio-ede2d4c86af18a09139e0da68c186ae71282a3e157d83ace9eb6d66913df26bd WatchSource:0}: Error finding container ede2d4c86af18a09139e0da68c186ae71282a3e157d83ace9eb6d66913df26bd: Status 404 returned error can't find the container with id ede2d4c86af18a09139e0da68c186ae71282a3e157d83ace9eb6d66913df26bd Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.974334 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0f07e95-4043-41c1-9f91-b79a6f7b9bbc" path="/var/lib/kubelet/pods/e0f07e95-4043-41c1-9f91-b79a6f7b9bbc/volumes" Nov 25 07:28:16 crc kubenswrapper[5043]: I1125 07:28:16.975430 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f75a6197-9de8-4720-af31-ebc12fe35e48" path="/var/lib/kubelet/pods/f75a6197-9de8-4720-af31-ebc12fe35e48/volumes" Nov 25 07:28:17 crc kubenswrapper[5043]: I1125 07:28:17.276151 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:28:17 crc kubenswrapper[5043]: I1125 07:28:17.276214 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:28:17 crc kubenswrapper[5043]: I1125 07:28:17.638492 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f9d7485db-lbz4p_b18ece39-f2f5-41f9-b2e1-79f9f880791b/console/0.log" Nov 25 07:28:17 crc kubenswrapper[5043]: I1125 07:28:17.638573 5043 generic.go:334] "Generic (PLEG): container finished" podID="b18ece39-f2f5-41f9-b2e1-79f9f880791b" containerID="33164da69808a1ed3c9ab89d900bcf6b7cd2e1640f5a2602f4f4c9cc2eaae19a" exitCode=2 Nov 25 07:28:17 crc kubenswrapper[5043]: I1125 07:28:17.638733 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-lbz4p" event={"ID":"b18ece39-f2f5-41f9-b2e1-79f9f880791b","Type":"ContainerDied","Data":"33164da69808a1ed3c9ab89d900bcf6b7cd2e1640f5a2602f4f4c9cc2eaae19a"} Nov 25 07:28:17 crc kubenswrapper[5043]: I1125 07:28:17.640465 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5df574ddf5-t6rkq" event={"ID":"a7b68a25-5f88-43a9-9d79-0706c3ae4e27","Type":"ContainerStarted","Data":"a70f10fd8394f4f0e180bb5263ad18a11b38ec862b7a9455cca4305a7a4d95cb"} Nov 25 07:28:17 crc kubenswrapper[5043]: I1125 07:28:17.641555 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf" event={"ID":"a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d","Type":"ContainerStarted","Data":"ede2d4c86af18a09139e0da68c186ae71282a3e157d83ace9eb6d66913df26bd"} Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.045574 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-lbz4p_b18ece39-f2f5-41f9-b2e1-79f9f880791b/console/0.log" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.045883 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-lbz4p" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.118211 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b18ece39-f2f5-41f9-b2e1-79f9f880791b-trusted-ca-bundle\") pod \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.118257 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b18ece39-f2f5-41f9-b2e1-79f9f880791b-console-config\") pod \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.118307 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b18ece39-f2f5-41f9-b2e1-79f9f880791b-service-ca\") pod \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.118360 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97sch\" (UniqueName: \"kubernetes.io/projected/b18ece39-f2f5-41f9-b2e1-79f9f880791b-kube-api-access-97sch\") pod \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.119099 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b18ece39-f2f5-41f9-b2e1-79f9f880791b-service-ca" (OuterVolumeSpecName: "service-ca") pod "b18ece39-f2f5-41f9-b2e1-79f9f880791b" (UID: "b18ece39-f2f5-41f9-b2e1-79f9f880791b"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.119092 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b18ece39-f2f5-41f9-b2e1-79f9f880791b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b18ece39-f2f5-41f9-b2e1-79f9f880791b" (UID: "b18ece39-f2f5-41f9-b2e1-79f9f880791b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.119166 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b18ece39-f2f5-41f9-b2e1-79f9f880791b-console-config" (OuterVolumeSpecName: "console-config") pod "b18ece39-f2f5-41f9-b2e1-79f9f880791b" (UID: "b18ece39-f2f5-41f9-b2e1-79f9f880791b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.119190 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b18ece39-f2f5-41f9-b2e1-79f9f880791b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b18ece39-f2f5-41f9-b2e1-79f9f880791b" (UID: "b18ece39-f2f5-41f9-b2e1-79f9f880791b"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.118393 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b18ece39-f2f5-41f9-b2e1-79f9f880791b-oauth-serving-cert\") pod \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.119443 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b18ece39-f2f5-41f9-b2e1-79f9f880791b-console-serving-cert\") pod \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.119490 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b18ece39-f2f5-41f9-b2e1-79f9f880791b-console-oauth-config\") pod \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\" (UID: \"b18ece39-f2f5-41f9-b2e1-79f9f880791b\") " Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.119832 5043 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b18ece39-f2f5-41f9-b2e1-79f9f880791b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.119853 5043 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b18ece39-f2f5-41f9-b2e1-79f9f880791b-console-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.119864 5043 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b18ece39-f2f5-41f9-b2e1-79f9f880791b-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 
07:28:18.119875 5043 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b18ece39-f2f5-41f9-b2e1-79f9f880791b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.124088 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b18ece39-f2f5-41f9-b2e1-79f9f880791b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b18ece39-f2f5-41f9-b2e1-79f9f880791b" (UID: "b18ece39-f2f5-41f9-b2e1-79f9f880791b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.128085 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b18ece39-f2f5-41f9-b2e1-79f9f880791b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b18ece39-f2f5-41f9-b2e1-79f9f880791b" (UID: "b18ece39-f2f5-41f9-b2e1-79f9f880791b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.150781 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b18ece39-f2f5-41f9-b2e1-79f9f880791b-kube-api-access-97sch" (OuterVolumeSpecName: "kube-api-access-97sch") pod "b18ece39-f2f5-41f9-b2e1-79f9f880791b" (UID: "b18ece39-f2f5-41f9-b2e1-79f9f880791b"). InnerVolumeSpecName "kube-api-access-97sch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.221139 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97sch\" (UniqueName: \"kubernetes.io/projected/b18ece39-f2f5-41f9-b2e1-79f9f880791b-kube-api-access-97sch\") on node \"crc\" DevicePath \"\"" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.221691 5043 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b18ece39-f2f5-41f9-b2e1-79f9f880791b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.221771 5043 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b18ece39-f2f5-41f9-b2e1-79f9f880791b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.649083 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5df574ddf5-t6rkq" event={"ID":"a7b68a25-5f88-43a9-9d79-0706c3ae4e27","Type":"ContainerStarted","Data":"6c679bd30f4d63e28c9f5efb1191aee1cfcf44b044c04a9f86e4c4004ff8daa3"} Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.649349 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5df574ddf5-t6rkq" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.650402 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf" event={"ID":"a6e39cdc-941f-40e3-a6b8-d4ce7498ec1d","Type":"ContainerStarted","Data":"f030f80eb301fc65aadf9376acbe5068be7d1ddb4a6b922b9092df88d66449f5"} Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.650912 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.652497 5043 generic.go:334] "Generic (PLEG): container finished" podID="75d56d2d-27c2-4a6d-9f9f-3975af3a6bed" containerID="99689f430db2e07168d1ebbc5ad0c4ddfe5813980044c096e58269df42fb455b" exitCode=0 Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.652589 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6" event={"ID":"75d56d2d-27c2-4a6d-9f9f-3975af3a6bed","Type":"ContainerDied","Data":"99689f430db2e07168d1ebbc5ad0c4ddfe5813980044c096e58269df42fb455b"} Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.654424 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-lbz4p_b18ece39-f2f5-41f9-b2e1-79f9f880791b/console/0.log" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.654482 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-lbz4p" event={"ID":"b18ece39-f2f5-41f9-b2e1-79f9f880791b","Type":"ContainerDied","Data":"607355d516227667d8171f6cf8612dc83a3e65d39d798a730178de8c3f9e0e4a"} Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.654521 5043 scope.go:117] "RemoveContainer" containerID="33164da69808a1ed3c9ab89d900bcf6b7cd2e1640f5a2602f4f4c9cc2eaae19a" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.654540 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-lbz4p" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.657165 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5df574ddf5-t6rkq" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.658599 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.669541 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5df574ddf5-t6rkq" podStartSLOduration=4.669526913 podStartE2EDuration="4.669526913s" podCreationTimestamp="2025-11-25 07:28:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:28:18.667499979 +0000 UTC m=+762.835695700" watchObservedRunningTime="2025-11-25 07:28:18.669526913 +0000 UTC m=+762.837722634" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.689697 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8f9db6b8c-x4cbf" podStartSLOduration=4.689679022 podStartE2EDuration="4.689679022s" podCreationTimestamp="2025-11-25 07:28:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:28:18.687449983 +0000 UTC m=+762.855645704" watchObservedRunningTime="2025-11-25 07:28:18.689679022 +0000 UTC m=+762.857874743" Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.747481 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-lbz4p"] Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.752501 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-console/console-f9d7485db-lbz4p"] Nov 25 07:28:18 crc kubenswrapper[5043]: I1125 07:28:18.972878 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b18ece39-f2f5-41f9-b2e1-79f9f880791b" path="/var/lib/kubelet/pods/b18ece39-f2f5-41f9-b2e1-79f9f880791b/volumes" Nov 25 07:28:19 crc kubenswrapper[5043]: I1125 07:28:19.663633 5043 generic.go:334] "Generic (PLEG): container finished" podID="75d56d2d-27c2-4a6d-9f9f-3975af3a6bed" containerID="fbfb0a61685a9135abca225d87ac244bb12779dcd03a9fa0c70ec2ab6830fe82" exitCode=0 Nov 25 07:28:19 crc kubenswrapper[5043]: I1125 07:28:19.663686 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6" event={"ID":"75d56d2d-27c2-4a6d-9f9f-3975af3a6bed","Type":"ContainerDied","Data":"fbfb0a61685a9135abca225d87ac244bb12779dcd03a9fa0c70ec2ab6830fe82"} Nov 25 07:28:20 crc kubenswrapper[5043]: I1125 07:28:20.987554 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.060659 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h2mls"] Nov 25 07:28:21 crc kubenswrapper[5043]: E1125 07:28:21.060884 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d56d2d-27c2-4a6d-9f9f-3975af3a6bed" containerName="util" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.060918 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d56d2d-27c2-4a6d-9f9f-3975af3a6bed" containerName="util" Nov 25 07:28:21 crc kubenswrapper[5043]: E1125 07:28:21.060939 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d56d2d-27c2-4a6d-9f9f-3975af3a6bed" containerName="extract" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.060947 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d56d2d-27c2-4a6d-9f9f-3975af3a6bed" containerName="extract" Nov 25 07:28:21 crc kubenswrapper[5043]: E1125 07:28:21.060960 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d56d2d-27c2-4a6d-9f9f-3975af3a6bed" containerName="pull" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.060968 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d56d2d-27c2-4a6d-9f9f-3975af3a6bed" containerName="pull" Nov 25 07:28:21 crc kubenswrapper[5043]: E1125 07:28:21.060981 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b18ece39-f2f5-41f9-b2e1-79f9f880791b" containerName="console" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.060988 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="b18ece39-f2f5-41f9-b2e1-79f9f880791b" containerName="console" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.061086 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="b18ece39-f2f5-41f9-b2e1-79f9f880791b" containerName="console" Nov 25 
07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.061101 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="75d56d2d-27c2-4a6d-9f9f-3975af3a6bed" containerName="extract" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.064133 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h2mls" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.066120 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75d56d2d-27c2-4a6d-9f9f-3975af3a6bed-bundle\") pod \"75d56d2d-27c2-4a6d-9f9f-3975af3a6bed\" (UID: \"75d56d2d-27c2-4a6d-9f9f-3975af3a6bed\") " Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.066276 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx66m\" (UniqueName: \"kubernetes.io/projected/75d56d2d-27c2-4a6d-9f9f-3975af3a6bed-kube-api-access-sx66m\") pod \"75d56d2d-27c2-4a6d-9f9f-3975af3a6bed\" (UID: \"75d56d2d-27c2-4a6d-9f9f-3975af3a6bed\") " Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.066355 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75d56d2d-27c2-4a6d-9f9f-3975af3a6bed-util\") pod \"75d56d2d-27c2-4a6d-9f9f-3975af3a6bed\" (UID: \"75d56d2d-27c2-4a6d-9f9f-3975af3a6bed\") " Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.068152 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75d56d2d-27c2-4a6d-9f9f-3975af3a6bed-bundle" (OuterVolumeSpecName: "bundle") pod "75d56d2d-27c2-4a6d-9f9f-3975af3a6bed" (UID: "75d56d2d-27c2-4a6d-9f9f-3975af3a6bed"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.071791 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75d56d2d-27c2-4a6d-9f9f-3975af3a6bed-kube-api-access-sx66m" (OuterVolumeSpecName: "kube-api-access-sx66m") pod "75d56d2d-27c2-4a6d-9f9f-3975af3a6bed" (UID: "75d56d2d-27c2-4a6d-9f9f-3975af3a6bed"). InnerVolumeSpecName "kube-api-access-sx66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.072728 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h2mls"] Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.077033 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75d56d2d-27c2-4a6d-9f9f-3975af3a6bed-util" (OuterVolumeSpecName: "util") pod "75d56d2d-27c2-4a6d-9f9f-3975af3a6bed" (UID: "75d56d2d-27c2-4a6d-9f9f-3975af3a6bed"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.168161 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4204a4e5-fe33-4c02-aeba-e0ddd993e745-utilities\") pod \"redhat-operators-h2mls\" (UID: \"4204a4e5-fe33-4c02-aeba-e0ddd993e745\") " pod="openshift-marketplace/redhat-operators-h2mls" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.168463 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2jn9\" (UniqueName: \"kubernetes.io/projected/4204a4e5-fe33-4c02-aeba-e0ddd993e745-kube-api-access-g2jn9\") pod \"redhat-operators-h2mls\" (UID: \"4204a4e5-fe33-4c02-aeba-e0ddd993e745\") " pod="openshift-marketplace/redhat-operators-h2mls" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.168623 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4204a4e5-fe33-4c02-aeba-e0ddd993e745-catalog-content\") pod \"redhat-operators-h2mls\" (UID: \"4204a4e5-fe33-4c02-aeba-e0ddd993e745\") " pod="openshift-marketplace/redhat-operators-h2mls" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.168743 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx66m\" (UniqueName: \"kubernetes.io/projected/75d56d2d-27c2-4a6d-9f9f-3975af3a6bed-kube-api-access-sx66m\") on node \"crc\" DevicePath \"\"" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.168816 5043 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75d56d2d-27c2-4a6d-9f9f-3975af3a6bed-util\") on node \"crc\" DevicePath \"\"" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.168883 5043 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/75d56d2d-27c2-4a6d-9f9f-3975af3a6bed-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.270483 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2jn9\" (UniqueName: \"kubernetes.io/projected/4204a4e5-fe33-4c02-aeba-e0ddd993e745-kube-api-access-g2jn9\") pod \"redhat-operators-h2mls\" (UID: \"4204a4e5-fe33-4c02-aeba-e0ddd993e745\") " pod="openshift-marketplace/redhat-operators-h2mls" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.270777 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4204a4e5-fe33-4c02-aeba-e0ddd993e745-catalog-content\") pod \"redhat-operators-h2mls\" (UID: \"4204a4e5-fe33-4c02-aeba-e0ddd993e745\") " pod="openshift-marketplace/redhat-operators-h2mls" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.270870 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4204a4e5-fe33-4c02-aeba-e0ddd993e745-utilities\") pod \"redhat-operators-h2mls\" (UID: \"4204a4e5-fe33-4c02-aeba-e0ddd993e745\") " pod="openshift-marketplace/redhat-operators-h2mls" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.271346 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4204a4e5-fe33-4c02-aeba-e0ddd993e745-utilities\") pod \"redhat-operators-h2mls\" (UID: \"4204a4e5-fe33-4c02-aeba-e0ddd993e745\") " pod="openshift-marketplace/redhat-operators-h2mls" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.271450 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4204a4e5-fe33-4c02-aeba-e0ddd993e745-catalog-content\") pod \"redhat-operators-h2mls\" (UID: \"4204a4e5-fe33-4c02-aeba-e0ddd993e745\") " 
pod="openshift-marketplace/redhat-operators-h2mls" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.297969 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2jn9\" (UniqueName: \"kubernetes.io/projected/4204a4e5-fe33-4c02-aeba-e0ddd993e745-kube-api-access-g2jn9\") pod \"redhat-operators-h2mls\" (UID: \"4204a4e5-fe33-4c02-aeba-e0ddd993e745\") " pod="openshift-marketplace/redhat-operators-h2mls" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.398563 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h2mls" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.680316 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6" event={"ID":"75d56d2d-27c2-4a6d-9f9f-3975af3a6bed","Type":"ContainerDied","Data":"63d2756d0918c6c1f349451eed134bda43002e697678b8124d3f6caff7d2ebb3"} Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.680353 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63d2756d0918c6c1f349451eed134bda43002e697678b8124d3f6caff7d2ebb3" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.680408 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6" Nov 25 07:28:21 crc kubenswrapper[5043]: I1125 07:28:21.829116 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h2mls"] Nov 25 07:28:21 crc kubenswrapper[5043]: W1125 07:28:21.835354 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4204a4e5_fe33_4c02_aeba_e0ddd993e745.slice/crio-9d6e067a04dbafd3411e8cd103324dc2e1a3957425f0f17e1200e2a337c9a0e5 WatchSource:0}: Error finding container 9d6e067a04dbafd3411e8cd103324dc2e1a3957425f0f17e1200e2a337c9a0e5: Status 404 returned error can't find the container with id 9d6e067a04dbafd3411e8cd103324dc2e1a3957425f0f17e1200e2a337c9a0e5 Nov 25 07:28:22 crc kubenswrapper[5043]: I1125 07:28:22.688219 5043 generic.go:334] "Generic (PLEG): container finished" podID="4204a4e5-fe33-4c02-aeba-e0ddd993e745" containerID="1330e97a0e970638bfccc849c5af4d3e692d987c96fba7aebfa458f9e387b1ed" exitCode=0 Nov 25 07:28:22 crc kubenswrapper[5043]: I1125 07:28:22.688348 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h2mls" event={"ID":"4204a4e5-fe33-4c02-aeba-e0ddd993e745","Type":"ContainerDied","Data":"1330e97a0e970638bfccc849c5af4d3e692d987c96fba7aebfa458f9e387b1ed"} Nov 25 07:28:22 crc kubenswrapper[5043]: I1125 07:28:22.688636 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h2mls" event={"ID":"4204a4e5-fe33-4c02-aeba-e0ddd993e745","Type":"ContainerStarted","Data":"9d6e067a04dbafd3411e8cd103324dc2e1a3957425f0f17e1200e2a337c9a0e5"} Nov 25 07:28:23 crc kubenswrapper[5043]: I1125 07:28:23.304411 5043 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 07:28:24 crc kubenswrapper[5043]: I1125 07:28:24.700739 5043 generic.go:334] 
"Generic (PLEG): container finished" podID="4204a4e5-fe33-4c02-aeba-e0ddd993e745" containerID="573840f29bf3b6a3a3887f664691179120f1acbae72f77f4e9d84d7edb734b96" exitCode=0 Nov 25 07:28:24 crc kubenswrapper[5043]: I1125 07:28:24.700780 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h2mls" event={"ID":"4204a4e5-fe33-4c02-aeba-e0ddd993e745","Type":"ContainerDied","Data":"573840f29bf3b6a3a3887f664691179120f1acbae72f77f4e9d84d7edb734b96"} Nov 25 07:28:25 crc kubenswrapper[5043]: I1125 07:28:25.708723 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h2mls" event={"ID":"4204a4e5-fe33-4c02-aeba-e0ddd993e745","Type":"ContainerStarted","Data":"9c6995639fab2302f6978bb1b734573846421575bef4d0f7532de301d37a661c"} Nov 25 07:28:25 crc kubenswrapper[5043]: I1125 07:28:25.732836 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h2mls" podStartSLOduration=2.329920952 podStartE2EDuration="4.732810059s" podCreationTimestamp="2025-11-25 07:28:21 +0000 UTC" firstStartedPulling="2025-11-25 07:28:22.69031863 +0000 UTC m=+766.858514351" lastFinishedPulling="2025-11-25 07:28:25.093207737 +0000 UTC m=+769.261403458" observedRunningTime="2025-11-25 07:28:25.727959938 +0000 UTC m=+769.896155659" watchObservedRunningTime="2025-11-25 07:28:25.732810059 +0000 UTC m=+769.901005780" Nov 25 07:28:28 crc kubenswrapper[5043]: I1125 07:28:28.761014 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr"] Nov 25 07:28:28 crc kubenswrapper[5043]: I1125 07:28:28.762199 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" Nov 25 07:28:28 crc kubenswrapper[5043]: I1125 07:28:28.764571 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 25 07:28:28 crc kubenswrapper[5043]: I1125 07:28:28.764654 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 25 07:28:28 crc kubenswrapper[5043]: I1125 07:28:28.764764 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 25 07:28:28 crc kubenswrapper[5043]: I1125 07:28:28.765522 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 25 07:28:28 crc kubenswrapper[5043]: I1125 07:28:28.766524 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-sn6cm" Nov 25 07:28:28 crc kubenswrapper[5043]: I1125 07:28:28.790693 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr"] Nov 25 07:28:28 crc kubenswrapper[5043]: I1125 07:28:28.861922 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v6vl\" (UniqueName: \"kubernetes.io/projected/cdbab2e0-494c-4845-a500-88b26934f1c7-kube-api-access-2v6vl\") pod \"metallb-operator-controller-manager-85bdd6cc97-lrkkr\" (UID: \"cdbab2e0-494c-4845-a500-88b26934f1c7\") " pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" Nov 25 07:28:28 crc kubenswrapper[5043]: I1125 07:28:28.861992 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cdbab2e0-494c-4845-a500-88b26934f1c7-apiservice-cert\") pod 
\"metallb-operator-controller-manager-85bdd6cc97-lrkkr\" (UID: \"cdbab2e0-494c-4845-a500-88b26934f1c7\") " pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" Nov 25 07:28:28 crc kubenswrapper[5043]: I1125 07:28:28.862035 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cdbab2e0-494c-4845-a500-88b26934f1c7-webhook-cert\") pod \"metallb-operator-controller-manager-85bdd6cc97-lrkkr\" (UID: \"cdbab2e0-494c-4845-a500-88b26934f1c7\") " pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" Nov 25 07:28:28 crc kubenswrapper[5043]: I1125 07:28:28.962884 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v6vl\" (UniqueName: \"kubernetes.io/projected/cdbab2e0-494c-4845-a500-88b26934f1c7-kube-api-access-2v6vl\") pod \"metallb-operator-controller-manager-85bdd6cc97-lrkkr\" (UID: \"cdbab2e0-494c-4845-a500-88b26934f1c7\") " pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" Nov 25 07:28:28 crc kubenswrapper[5043]: I1125 07:28:28.962944 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cdbab2e0-494c-4845-a500-88b26934f1c7-apiservice-cert\") pod \"metallb-operator-controller-manager-85bdd6cc97-lrkkr\" (UID: \"cdbab2e0-494c-4845-a500-88b26934f1c7\") " pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" Nov 25 07:28:28 crc kubenswrapper[5043]: I1125 07:28:28.962975 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cdbab2e0-494c-4845-a500-88b26934f1c7-webhook-cert\") pod \"metallb-operator-controller-manager-85bdd6cc97-lrkkr\" (UID: \"cdbab2e0-494c-4845-a500-88b26934f1c7\") " pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" Nov 25 07:28:28 crc 
kubenswrapper[5043]: I1125 07:28:28.969134 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cdbab2e0-494c-4845-a500-88b26934f1c7-webhook-cert\") pod \"metallb-operator-controller-manager-85bdd6cc97-lrkkr\" (UID: \"cdbab2e0-494c-4845-a500-88b26934f1c7\") " pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" Nov 25 07:28:28 crc kubenswrapper[5043]: I1125 07:28:28.972632 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cdbab2e0-494c-4845-a500-88b26934f1c7-apiservice-cert\") pod \"metallb-operator-controller-manager-85bdd6cc97-lrkkr\" (UID: \"cdbab2e0-494c-4845-a500-88b26934f1c7\") " pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" Nov 25 07:28:28 crc kubenswrapper[5043]: I1125 07:28:28.995054 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v6vl\" (UniqueName: \"kubernetes.io/projected/cdbab2e0-494c-4845-a500-88b26934f1c7-kube-api-access-2v6vl\") pod \"metallb-operator-controller-manager-85bdd6cc97-lrkkr\" (UID: \"cdbab2e0-494c-4845-a500-88b26934f1c7\") " pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" Nov 25 07:28:29 crc kubenswrapper[5043]: I1125 07:28:29.076387 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" Nov 25 07:28:29 crc kubenswrapper[5043]: I1125 07:28:29.136618 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-687d746769-dbszt"] Nov 25 07:28:29 crc kubenswrapper[5043]: I1125 07:28:29.137481 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-687d746769-dbszt" Nov 25 07:28:29 crc kubenswrapper[5043]: I1125 07:28:29.140765 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 25 07:28:29 crc kubenswrapper[5043]: I1125 07:28:29.140975 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 25 07:28:29 crc kubenswrapper[5043]: I1125 07:28:29.141449 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-8bdmn" Nov 25 07:28:29 crc kubenswrapper[5043]: I1125 07:28:29.159904 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-687d746769-dbszt"] Nov 25 07:28:29 crc kubenswrapper[5043]: I1125 07:28:29.165649 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72fzd\" (UniqueName: \"kubernetes.io/projected/d592d149-d73b-4db0-a83f-81fdb776420a-kube-api-access-72fzd\") pod \"metallb-operator-webhook-server-687d746769-dbszt\" (UID: \"d592d149-d73b-4db0-a83f-81fdb776420a\") " pod="metallb-system/metallb-operator-webhook-server-687d746769-dbszt" Nov 25 07:28:29 crc kubenswrapper[5043]: I1125 07:28:29.165722 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d592d149-d73b-4db0-a83f-81fdb776420a-webhook-cert\") pod \"metallb-operator-webhook-server-687d746769-dbszt\" (UID: \"d592d149-d73b-4db0-a83f-81fdb776420a\") " pod="metallb-system/metallb-operator-webhook-server-687d746769-dbszt" Nov 25 07:28:29 crc kubenswrapper[5043]: I1125 07:28:29.165759 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/d592d149-d73b-4db0-a83f-81fdb776420a-apiservice-cert\") pod \"metallb-operator-webhook-server-687d746769-dbszt\" (UID: \"d592d149-d73b-4db0-a83f-81fdb776420a\") " pod="metallb-system/metallb-operator-webhook-server-687d746769-dbszt" Nov 25 07:28:29 crc kubenswrapper[5043]: I1125 07:28:29.267324 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72fzd\" (UniqueName: \"kubernetes.io/projected/d592d149-d73b-4db0-a83f-81fdb776420a-kube-api-access-72fzd\") pod \"metallb-operator-webhook-server-687d746769-dbszt\" (UID: \"d592d149-d73b-4db0-a83f-81fdb776420a\") " pod="metallb-system/metallb-operator-webhook-server-687d746769-dbszt" Nov 25 07:28:29 crc kubenswrapper[5043]: I1125 07:28:29.267620 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d592d149-d73b-4db0-a83f-81fdb776420a-webhook-cert\") pod \"metallb-operator-webhook-server-687d746769-dbszt\" (UID: \"d592d149-d73b-4db0-a83f-81fdb776420a\") " pod="metallb-system/metallb-operator-webhook-server-687d746769-dbszt" Nov 25 07:28:29 crc kubenswrapper[5043]: I1125 07:28:29.267649 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d592d149-d73b-4db0-a83f-81fdb776420a-apiservice-cert\") pod \"metallb-operator-webhook-server-687d746769-dbszt\" (UID: \"d592d149-d73b-4db0-a83f-81fdb776420a\") " pod="metallb-system/metallb-operator-webhook-server-687d746769-dbszt" Nov 25 07:28:29 crc kubenswrapper[5043]: I1125 07:28:29.271069 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d592d149-d73b-4db0-a83f-81fdb776420a-apiservice-cert\") pod \"metallb-operator-webhook-server-687d746769-dbszt\" (UID: \"d592d149-d73b-4db0-a83f-81fdb776420a\") " pod="metallb-system/metallb-operator-webhook-server-687d746769-dbszt" Nov 25 
07:28:29 crc kubenswrapper[5043]: I1125 07:28:29.271352 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d592d149-d73b-4db0-a83f-81fdb776420a-webhook-cert\") pod \"metallb-operator-webhook-server-687d746769-dbszt\" (UID: \"d592d149-d73b-4db0-a83f-81fdb776420a\") " pod="metallb-system/metallb-operator-webhook-server-687d746769-dbszt" Nov 25 07:28:29 crc kubenswrapper[5043]: I1125 07:28:29.282918 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72fzd\" (UniqueName: \"kubernetes.io/projected/d592d149-d73b-4db0-a83f-81fdb776420a-kube-api-access-72fzd\") pod \"metallb-operator-webhook-server-687d746769-dbszt\" (UID: \"d592d149-d73b-4db0-a83f-81fdb776420a\") " pod="metallb-system/metallb-operator-webhook-server-687d746769-dbszt" Nov 25 07:28:29 crc kubenswrapper[5043]: I1125 07:28:29.458279 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-687d746769-dbszt" Nov 25 07:28:29 crc kubenswrapper[5043]: I1125 07:28:29.546717 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr"] Nov 25 07:28:29 crc kubenswrapper[5043]: W1125 07:28:29.557673 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdbab2e0_494c_4845_a500_88b26934f1c7.slice/crio-0525455d9388c952388739f4d36ca8ac744a0d9f3f380e311fb316c81aaa8654 WatchSource:0}: Error finding container 0525455d9388c952388739f4d36ca8ac744a0d9f3f380e311fb316c81aaa8654: Status 404 returned error can't find the container with id 0525455d9388c952388739f4d36ca8ac744a0d9f3f380e311fb316c81aaa8654 Nov 25 07:28:29 crc kubenswrapper[5043]: I1125 07:28:29.730382 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" 
event={"ID":"cdbab2e0-494c-4845-a500-88b26934f1c7","Type":"ContainerStarted","Data":"0525455d9388c952388739f4d36ca8ac744a0d9f3f380e311fb316c81aaa8654"} Nov 25 07:28:29 crc kubenswrapper[5043]: I1125 07:28:29.895015 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-687d746769-dbszt"] Nov 25 07:28:29 crc kubenswrapper[5043]: W1125 07:28:29.904820 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd592d149_d73b_4db0_a83f_81fdb776420a.slice/crio-c299b5a9c531d24634dbc010eccacb225bcce692c1556cf150d9fb3b59f70087 WatchSource:0}: Error finding container c299b5a9c531d24634dbc010eccacb225bcce692c1556cf150d9fb3b59f70087: Status 404 returned error can't find the container with id c299b5a9c531d24634dbc010eccacb225bcce692c1556cf150d9fb3b59f70087 Nov 25 07:28:30 crc kubenswrapper[5043]: I1125 07:28:30.736050 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-687d746769-dbszt" event={"ID":"d592d149-d73b-4db0-a83f-81fdb776420a","Type":"ContainerStarted","Data":"c299b5a9c531d24634dbc010eccacb225bcce692c1556cf150d9fb3b59f70087"} Nov 25 07:28:31 crc kubenswrapper[5043]: I1125 07:28:31.399310 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h2mls" Nov 25 07:28:31 crc kubenswrapper[5043]: I1125 07:28:31.399387 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h2mls" Nov 25 07:28:31 crc kubenswrapper[5043]: I1125 07:28:31.480428 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h2mls" Nov 25 07:28:31 crc kubenswrapper[5043]: I1125 07:28:31.791353 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h2mls" Nov 25 07:28:32 crc 
kubenswrapper[5043]: I1125 07:28:32.251187 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h2mls"] Nov 25 07:28:33 crc kubenswrapper[5043]: I1125 07:28:33.756118 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h2mls" podUID="4204a4e5-fe33-4c02-aeba-e0ddd993e745" containerName="registry-server" containerID="cri-o://9c6995639fab2302f6978bb1b734573846421575bef4d0f7532de301d37a661c" gracePeriod=2 Nov 25 07:28:34 crc kubenswrapper[5043]: I1125 07:28:34.776297 5043 generic.go:334] "Generic (PLEG): container finished" podID="4204a4e5-fe33-4c02-aeba-e0ddd993e745" containerID="9c6995639fab2302f6978bb1b734573846421575bef4d0f7532de301d37a661c" exitCode=0 Nov 25 07:28:34 crc kubenswrapper[5043]: I1125 07:28:34.776470 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h2mls" event={"ID":"4204a4e5-fe33-4c02-aeba-e0ddd993e745","Type":"ContainerDied","Data":"9c6995639fab2302f6978bb1b734573846421575bef4d0f7532de301d37a661c"} Nov 25 07:28:34 crc kubenswrapper[5043]: I1125 07:28:34.989912 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h2mls" Nov 25 07:28:35 crc kubenswrapper[5043]: I1125 07:28:35.145458 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4204a4e5-fe33-4c02-aeba-e0ddd993e745-utilities\") pod \"4204a4e5-fe33-4c02-aeba-e0ddd993e745\" (UID: \"4204a4e5-fe33-4c02-aeba-e0ddd993e745\") " Nov 25 07:28:35 crc kubenswrapper[5043]: I1125 07:28:35.145506 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2jn9\" (UniqueName: \"kubernetes.io/projected/4204a4e5-fe33-4c02-aeba-e0ddd993e745-kube-api-access-g2jn9\") pod \"4204a4e5-fe33-4c02-aeba-e0ddd993e745\" (UID: \"4204a4e5-fe33-4c02-aeba-e0ddd993e745\") " Nov 25 07:28:35 crc kubenswrapper[5043]: I1125 07:28:35.145552 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4204a4e5-fe33-4c02-aeba-e0ddd993e745-catalog-content\") pod \"4204a4e5-fe33-4c02-aeba-e0ddd993e745\" (UID: \"4204a4e5-fe33-4c02-aeba-e0ddd993e745\") " Nov 25 07:28:35 crc kubenswrapper[5043]: I1125 07:28:35.146465 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4204a4e5-fe33-4c02-aeba-e0ddd993e745-utilities" (OuterVolumeSpecName: "utilities") pod "4204a4e5-fe33-4c02-aeba-e0ddd993e745" (UID: "4204a4e5-fe33-4c02-aeba-e0ddd993e745"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:28:35 crc kubenswrapper[5043]: I1125 07:28:35.146777 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4204a4e5-fe33-4c02-aeba-e0ddd993e745-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:28:35 crc kubenswrapper[5043]: I1125 07:28:35.155542 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4204a4e5-fe33-4c02-aeba-e0ddd993e745-kube-api-access-g2jn9" (OuterVolumeSpecName: "kube-api-access-g2jn9") pod "4204a4e5-fe33-4c02-aeba-e0ddd993e745" (UID: "4204a4e5-fe33-4c02-aeba-e0ddd993e745"). InnerVolumeSpecName "kube-api-access-g2jn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:28:35 crc kubenswrapper[5043]: I1125 07:28:35.245036 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4204a4e5-fe33-4c02-aeba-e0ddd993e745-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4204a4e5-fe33-4c02-aeba-e0ddd993e745" (UID: "4204a4e5-fe33-4c02-aeba-e0ddd993e745"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:28:35 crc kubenswrapper[5043]: I1125 07:28:35.248201 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2jn9\" (UniqueName: \"kubernetes.io/projected/4204a4e5-fe33-4c02-aeba-e0ddd993e745-kube-api-access-g2jn9\") on node \"crc\" DevicePath \"\"" Nov 25 07:28:35 crc kubenswrapper[5043]: I1125 07:28:35.248227 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4204a4e5-fe33-4c02-aeba-e0ddd993e745-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:28:35 crc kubenswrapper[5043]: I1125 07:28:35.784201 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h2mls" event={"ID":"4204a4e5-fe33-4c02-aeba-e0ddd993e745","Type":"ContainerDied","Data":"9d6e067a04dbafd3411e8cd103324dc2e1a3957425f0f17e1200e2a337c9a0e5"} Nov 25 07:28:35 crc kubenswrapper[5043]: I1125 07:28:35.784425 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h2mls" Nov 25 07:28:35 crc kubenswrapper[5043]: I1125 07:28:35.784462 5043 scope.go:117] "RemoveContainer" containerID="9c6995639fab2302f6978bb1b734573846421575bef4d0f7532de301d37a661c" Nov 25 07:28:35 crc kubenswrapper[5043]: I1125 07:28:35.812713 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h2mls"] Nov 25 07:28:35 crc kubenswrapper[5043]: I1125 07:28:35.815510 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h2mls"] Nov 25 07:28:36 crc kubenswrapper[5043]: I1125 07:28:36.971358 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4204a4e5-fe33-4c02-aeba-e0ddd993e745" path="/var/lib/kubelet/pods/4204a4e5-fe33-4c02-aeba-e0ddd993e745/volumes" Nov 25 07:28:38 crc kubenswrapper[5043]: I1125 07:28:38.880437 5043 scope.go:117] "RemoveContainer" containerID="573840f29bf3b6a3a3887f664691179120f1acbae72f77f4e9d84d7edb734b96" Nov 25 07:28:38 crc kubenswrapper[5043]: I1125 07:28:38.958376 5043 scope.go:117] "RemoveContainer" containerID="1330e97a0e970638bfccc849c5af4d3e692d987c96fba7aebfa458f9e387b1ed" Nov 25 07:28:39 crc kubenswrapper[5043]: I1125 07:28:39.804699 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-687d746769-dbszt" event={"ID":"d592d149-d73b-4db0-a83f-81fdb776420a","Type":"ContainerStarted","Data":"81d2d2102a458855af2f8f6e7751e9dcf307e52f45f55e17a639e5af7052af0c"} Nov 25 07:28:39 crc kubenswrapper[5043]: I1125 07:28:39.804762 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-687d746769-dbszt" Nov 25 07:28:39 crc kubenswrapper[5043]: I1125 07:28:39.809153 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" 
event={"ID":"cdbab2e0-494c-4845-a500-88b26934f1c7","Type":"ContainerStarted","Data":"f87486126cd112c0054b9155c69bde370467d4d74ee81db0a0ba7dd0605b790a"} Nov 25 07:28:39 crc kubenswrapper[5043]: I1125 07:28:39.809307 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" Nov 25 07:28:39 crc kubenswrapper[5043]: I1125 07:28:39.827717 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-687d746769-dbszt" podStartSLOduration=1.7400070909999998 podStartE2EDuration="10.827701941s" podCreationTimestamp="2025-11-25 07:28:29 +0000 UTC" firstStartedPulling="2025-11-25 07:28:29.907770327 +0000 UTC m=+774.075966048" lastFinishedPulling="2025-11-25 07:28:38.995465167 +0000 UTC m=+783.163660898" observedRunningTime="2025-11-25 07:28:39.824598358 +0000 UTC m=+783.992794079" watchObservedRunningTime="2025-11-25 07:28:39.827701941 +0000 UTC m=+783.995897662" Nov 25 07:28:39 crc kubenswrapper[5043]: I1125 07:28:39.846687 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" podStartSLOduration=2.45040984 podStartE2EDuration="11.84667324s" podCreationTimestamp="2025-11-25 07:28:28 +0000 UTC" firstStartedPulling="2025-11-25 07:28:29.562267408 +0000 UTC m=+773.730463129" lastFinishedPulling="2025-11-25 07:28:38.958530808 +0000 UTC m=+783.126726529" observedRunningTime="2025-11-25 07:28:39.845478558 +0000 UTC m=+784.013674279" watchObservedRunningTime="2025-11-25 07:28:39.84667324 +0000 UTC m=+784.014868961" Nov 25 07:28:47 crc kubenswrapper[5043]: I1125 07:28:47.275895 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Nov 25 07:28:47 crc kubenswrapper[5043]: I1125 07:28:47.276564 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:28:49 crc kubenswrapper[5043]: I1125 07:28:49.470299 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-687d746769-dbszt" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.080534 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.857718 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-4mwjl"] Nov 25 07:29:09 crc kubenswrapper[5043]: E1125 07:29:09.857983 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4204a4e5-fe33-4c02-aeba-e0ddd993e745" containerName="registry-server" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.858003 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4204a4e5-fe33-4c02-aeba-e0ddd993e745" containerName="registry-server" Nov 25 07:29:09 crc kubenswrapper[5043]: E1125 07:29:09.858016 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4204a4e5-fe33-4c02-aeba-e0ddd993e745" containerName="extract-utilities" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.858023 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4204a4e5-fe33-4c02-aeba-e0ddd993e745" containerName="extract-utilities" Nov 25 07:29:09 crc kubenswrapper[5043]: E1125 07:29:09.858045 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4204a4e5-fe33-4c02-aeba-e0ddd993e745" 
containerName="extract-content" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.858054 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4204a4e5-fe33-4c02-aeba-e0ddd993e745" containerName="extract-content" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.858167 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="4204a4e5-fe33-4c02-aeba-e0ddd993e745" containerName="registry-server" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.858506 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mwjl" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.871743 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-tdt6k"] Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.871791 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-qkf5c" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.871794 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.874492 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.877699 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.878667 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.888408 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-4mwjl"] Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.978620 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4d08af94-ced7-41f2-a5da-4a5ab09436bb-cert\") pod \"frr-k8s-webhook-server-6998585d5-4mwjl\" (UID: \"4d08af94-ced7-41f2-a5da-4a5ab09436bb\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mwjl" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.978696 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kjjm\" (UniqueName: \"kubernetes.io/projected/781aa9bd-6e71-452c-8932-758f4c26cb40-kube-api-access-7kjjm\") pod \"frr-k8s-tdt6k\" (UID: \"781aa9bd-6e71-452c-8932-758f4c26cb40\") " pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.978724 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/781aa9bd-6e71-452c-8932-758f4c26cb40-metrics-certs\") pod \"frr-k8s-tdt6k\" (UID: \"781aa9bd-6e71-452c-8932-758f4c26cb40\") " pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.978889 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/781aa9bd-6e71-452c-8932-758f4c26cb40-reloader\") pod \"frr-k8s-tdt6k\" (UID: \"781aa9bd-6e71-452c-8932-758f4c26cb40\") " pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.978986 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxzrm\" (UniqueName: \"kubernetes.io/projected/4d08af94-ced7-41f2-a5da-4a5ab09436bb-kube-api-access-dxzrm\") pod \"frr-k8s-webhook-server-6998585d5-4mwjl\" (UID: \"4d08af94-ced7-41f2-a5da-4a5ab09436bb\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mwjl" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.979037 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/781aa9bd-6e71-452c-8932-758f4c26cb40-frr-startup\") pod \"frr-k8s-tdt6k\" (UID: \"781aa9bd-6e71-452c-8932-758f4c26cb40\") " pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.979059 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/781aa9bd-6e71-452c-8932-758f4c26cb40-frr-sockets\") pod \"frr-k8s-tdt6k\" (UID: \"781aa9bd-6e71-452c-8932-758f4c26cb40\") " pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.979089 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/781aa9bd-6e71-452c-8932-758f4c26cb40-metrics\") pod \"frr-k8s-tdt6k\" (UID: \"781aa9bd-6e71-452c-8932-758f4c26cb40\") " pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.979135 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/781aa9bd-6e71-452c-8932-758f4c26cb40-frr-conf\") pod \"frr-k8s-tdt6k\" (UID: \"781aa9bd-6e71-452c-8932-758f4c26cb40\") " pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.986400 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-8sqcm"] Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.987328 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-8sqcm" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.991049 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-5p6f4" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.991249 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.991353 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.991461 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 25 07:29:09 crc kubenswrapper[5043]: I1125 07:29:09.999186 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-fkqdz"] Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.000034 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-fkqdz" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.003755 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.013962 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-fkqdz"] Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.080703 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4d08af94-ced7-41f2-a5da-4a5ab09436bb-cert\") pod \"frr-k8s-webhook-server-6998585d5-4mwjl\" (UID: \"4d08af94-ced7-41f2-a5da-4a5ab09436bb\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mwjl" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.080791 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e214977c-6456-4990-b061-b88f5a127836-cert\") pod \"controller-6c7b4b5f48-fkqdz\" (UID: \"e214977c-6456-4990-b061-b88f5a127836\") " pod="metallb-system/controller-6c7b4b5f48-fkqdz" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.080851 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kjjm\" (UniqueName: \"kubernetes.io/projected/781aa9bd-6e71-452c-8932-758f4c26cb40-kube-api-access-7kjjm\") pod \"frr-k8s-tdt6k\" (UID: \"781aa9bd-6e71-452c-8932-758f4c26cb40\") " pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.080888 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nfct\" (UniqueName: \"kubernetes.io/projected/f6fde8c1-7722-4081-ae09-6f0cf5af35c4-kube-api-access-5nfct\") pod \"speaker-8sqcm\" (UID: \"f6fde8c1-7722-4081-ae09-6f0cf5af35c4\") " pod="metallb-system/speaker-8sqcm" Nov 25 07:29:10 crc 
kubenswrapper[5043]: I1125 07:29:10.080928 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/781aa9bd-6e71-452c-8932-758f4c26cb40-metrics-certs\") pod \"frr-k8s-tdt6k\" (UID: \"781aa9bd-6e71-452c-8932-758f4c26cb40\") " pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.080994 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/781aa9bd-6e71-452c-8932-758f4c26cb40-reloader\") pod \"frr-k8s-tdt6k\" (UID: \"781aa9bd-6e71-452c-8932-758f4c26cb40\") " pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.081037 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h85rc\" (UniqueName: \"kubernetes.io/projected/e214977c-6456-4990-b061-b88f5a127836-kube-api-access-h85rc\") pod \"controller-6c7b4b5f48-fkqdz\" (UID: \"e214977c-6456-4990-b061-b88f5a127836\") " pod="metallb-system/controller-6c7b4b5f48-fkqdz" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.081085 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f6fde8c1-7722-4081-ae09-6f0cf5af35c4-metallb-excludel2\") pod \"speaker-8sqcm\" (UID: \"f6fde8c1-7722-4081-ae09-6f0cf5af35c4\") " pod="metallb-system/speaker-8sqcm" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.081132 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxzrm\" (UniqueName: \"kubernetes.io/projected/4d08af94-ced7-41f2-a5da-4a5ab09436bb-kube-api-access-dxzrm\") pod \"frr-k8s-webhook-server-6998585d5-4mwjl\" (UID: \"4d08af94-ced7-41f2-a5da-4a5ab09436bb\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mwjl" Nov 25 07:29:10 crc kubenswrapper[5043]: E1125 07:29:10.081137 
5043 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Nov 25 07:29:10 crc kubenswrapper[5043]: E1125 07:29:10.081325 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/781aa9bd-6e71-452c-8932-758f4c26cb40-metrics-certs podName:781aa9bd-6e71-452c-8932-758f4c26cb40 nodeName:}" failed. No retries permitted until 2025-11-25 07:29:10.581287925 +0000 UTC m=+814.749483646 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/781aa9bd-6e71-452c-8932-758f4c26cb40-metrics-certs") pod "frr-k8s-tdt6k" (UID: "781aa9bd-6e71-452c-8932-758f4c26cb40") : secret "frr-k8s-certs-secret" not found Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.081386 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/781aa9bd-6e71-452c-8932-758f4c26cb40-frr-startup\") pod \"frr-k8s-tdt6k\" (UID: \"781aa9bd-6e71-452c-8932-758f4c26cb40\") " pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.081435 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/781aa9bd-6e71-452c-8932-758f4c26cb40-frr-sockets\") pod \"frr-k8s-tdt6k\" (UID: \"781aa9bd-6e71-452c-8932-758f4c26cb40\") " pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.081460 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/781aa9bd-6e71-452c-8932-758f4c26cb40-reloader\") pod \"frr-k8s-tdt6k\" (UID: \"781aa9bd-6e71-452c-8932-758f4c26cb40\") " pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.081484 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: 
\"kubernetes.io/empty-dir/781aa9bd-6e71-452c-8932-758f4c26cb40-metrics\") pod \"frr-k8s-tdt6k\" (UID: \"781aa9bd-6e71-452c-8932-758f4c26cb40\") " pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.081528 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e214977c-6456-4990-b061-b88f5a127836-metrics-certs\") pod \"controller-6c7b4b5f48-fkqdz\" (UID: \"e214977c-6456-4990-b061-b88f5a127836\") " pod="metallb-system/controller-6c7b4b5f48-fkqdz" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.081568 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6fde8c1-7722-4081-ae09-6f0cf5af35c4-metrics-certs\") pod \"speaker-8sqcm\" (UID: \"f6fde8c1-7722-4081-ae09-6f0cf5af35c4\") " pod="metallb-system/speaker-8sqcm" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.081585 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f6fde8c1-7722-4081-ae09-6f0cf5af35c4-memberlist\") pod \"speaker-8sqcm\" (UID: \"f6fde8c1-7722-4081-ae09-6f0cf5af35c4\") " pod="metallb-system/speaker-8sqcm" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.081659 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/781aa9bd-6e71-452c-8932-758f4c26cb40-frr-conf\") pod \"frr-k8s-tdt6k\" (UID: \"781aa9bd-6e71-452c-8932-758f4c26cb40\") " pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.081809 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/781aa9bd-6e71-452c-8932-758f4c26cb40-frr-sockets\") pod \"frr-k8s-tdt6k\" (UID: 
\"781aa9bd-6e71-452c-8932-758f4c26cb40\") " pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.081931 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/781aa9bd-6e71-452c-8932-758f4c26cb40-metrics\") pod \"frr-k8s-tdt6k\" (UID: \"781aa9bd-6e71-452c-8932-758f4c26cb40\") " pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.082033 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/781aa9bd-6e71-452c-8932-758f4c26cb40-frr-conf\") pod \"frr-k8s-tdt6k\" (UID: \"781aa9bd-6e71-452c-8932-758f4c26cb40\") " pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.082184 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/781aa9bd-6e71-452c-8932-758f4c26cb40-frr-startup\") pod \"frr-k8s-tdt6k\" (UID: \"781aa9bd-6e71-452c-8932-758f4c26cb40\") " pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.088106 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4d08af94-ced7-41f2-a5da-4a5ab09436bb-cert\") pod \"frr-k8s-webhook-server-6998585d5-4mwjl\" (UID: \"4d08af94-ced7-41f2-a5da-4a5ab09436bb\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mwjl" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.104784 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kjjm\" (UniqueName: \"kubernetes.io/projected/781aa9bd-6e71-452c-8932-758f4c26cb40-kube-api-access-7kjjm\") pod \"frr-k8s-tdt6k\" (UID: \"781aa9bd-6e71-452c-8932-758f4c26cb40\") " pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.105345 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-dxzrm\" (UniqueName: \"kubernetes.io/projected/4d08af94-ced7-41f2-a5da-4a5ab09436bb-kube-api-access-dxzrm\") pod \"frr-k8s-webhook-server-6998585d5-4mwjl\" (UID: \"4d08af94-ced7-41f2-a5da-4a5ab09436bb\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mwjl" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.174405 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mwjl" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.183105 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h85rc\" (UniqueName: \"kubernetes.io/projected/e214977c-6456-4990-b061-b88f5a127836-kube-api-access-h85rc\") pod \"controller-6c7b4b5f48-fkqdz\" (UID: \"e214977c-6456-4990-b061-b88f5a127836\") " pod="metallb-system/controller-6c7b4b5f48-fkqdz" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.183483 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f6fde8c1-7722-4081-ae09-6f0cf5af35c4-metallb-excludel2\") pod \"speaker-8sqcm\" (UID: \"f6fde8c1-7722-4081-ae09-6f0cf5af35c4\") " pod="metallb-system/speaker-8sqcm" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.183554 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e214977c-6456-4990-b061-b88f5a127836-metrics-certs\") pod \"controller-6c7b4b5f48-fkqdz\" (UID: \"e214977c-6456-4990-b061-b88f5a127836\") " pod="metallb-system/controller-6c7b4b5f48-fkqdz" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.183589 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6fde8c1-7722-4081-ae09-6f0cf5af35c4-metrics-certs\") pod \"speaker-8sqcm\" (UID: \"f6fde8c1-7722-4081-ae09-6f0cf5af35c4\") " 
pod="metallb-system/speaker-8sqcm" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.183636 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f6fde8c1-7722-4081-ae09-6f0cf5af35c4-memberlist\") pod \"speaker-8sqcm\" (UID: \"f6fde8c1-7722-4081-ae09-6f0cf5af35c4\") " pod="metallb-system/speaker-8sqcm" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.183673 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e214977c-6456-4990-b061-b88f5a127836-cert\") pod \"controller-6c7b4b5f48-fkqdz\" (UID: \"e214977c-6456-4990-b061-b88f5a127836\") " pod="metallb-system/controller-6c7b4b5f48-fkqdz" Nov 25 07:29:10 crc kubenswrapper[5043]: E1125 07:29:10.183831 5043 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 25 07:29:10 crc kubenswrapper[5043]: E1125 07:29:10.183911 5043 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.184122 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nfct\" (UniqueName: \"kubernetes.io/projected/f6fde8c1-7722-4081-ae09-6f0cf5af35c4-kube-api-access-5nfct\") pod \"speaker-8sqcm\" (UID: \"f6fde8c1-7722-4081-ae09-6f0cf5af35c4\") " pod="metallb-system/speaker-8sqcm" Nov 25 07:29:10 crc kubenswrapper[5043]: E1125 07:29:10.184240 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6fde8c1-7722-4081-ae09-6f0cf5af35c4-memberlist podName:f6fde8c1-7722-4081-ae09-6f0cf5af35c4 nodeName:}" failed. No retries permitted until 2025-11-25 07:29:10.684216893 +0000 UTC m=+814.852412614 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f6fde8c1-7722-4081-ae09-6f0cf5af35c4-memberlist") pod "speaker-8sqcm" (UID: "f6fde8c1-7722-4081-ae09-6f0cf5af35c4") : secret "metallb-memberlist" not found Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.184250 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f6fde8c1-7722-4081-ae09-6f0cf5af35c4-metallb-excludel2\") pod \"speaker-8sqcm\" (UID: \"f6fde8c1-7722-4081-ae09-6f0cf5af35c4\") " pod="metallb-system/speaker-8sqcm" Nov 25 07:29:10 crc kubenswrapper[5043]: E1125 07:29:10.184269 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6fde8c1-7722-4081-ae09-6f0cf5af35c4-metrics-certs podName:f6fde8c1-7722-4081-ae09-6f0cf5af35c4 nodeName:}" failed. No retries permitted until 2025-11-25 07:29:10.684257914 +0000 UTC m=+814.852453635 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f6fde8c1-7722-4081-ae09-6f0cf5af35c4-metrics-certs") pod "speaker-8sqcm" (UID: "f6fde8c1-7722-4081-ae09-6f0cf5af35c4") : secret "speaker-certs-secret" not found Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.188358 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.188418 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e214977c-6456-4990-b061-b88f5a127836-metrics-certs\") pod \"controller-6c7b4b5f48-fkqdz\" (UID: \"e214977c-6456-4990-b061-b88f5a127836\") " pod="metallb-system/controller-6c7b4b5f48-fkqdz" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.198118 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/e214977c-6456-4990-b061-b88f5a127836-cert\") pod \"controller-6c7b4b5f48-fkqdz\" (UID: \"e214977c-6456-4990-b061-b88f5a127836\") " pod="metallb-system/controller-6c7b4b5f48-fkqdz" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.208694 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nfct\" (UniqueName: \"kubernetes.io/projected/f6fde8c1-7722-4081-ae09-6f0cf5af35c4-kube-api-access-5nfct\") pod \"speaker-8sqcm\" (UID: \"f6fde8c1-7722-4081-ae09-6f0cf5af35c4\") " pod="metallb-system/speaker-8sqcm" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.217034 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h85rc\" (UniqueName: \"kubernetes.io/projected/e214977c-6456-4990-b061-b88f5a127836-kube-api-access-h85rc\") pod \"controller-6c7b4b5f48-fkqdz\" (UID: \"e214977c-6456-4990-b061-b88f5a127836\") " pod="metallb-system/controller-6c7b4b5f48-fkqdz" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.315380 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-fkqdz" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.521729 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-fkqdz"] Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.589282 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/781aa9bd-6e71-452c-8932-758f4c26cb40-metrics-certs\") pod \"frr-k8s-tdt6k\" (UID: \"781aa9bd-6e71-452c-8932-758f4c26cb40\") " pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.595378 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/781aa9bd-6e71-452c-8932-758f4c26cb40-metrics-certs\") pod \"frr-k8s-tdt6k\" (UID: \"781aa9bd-6e71-452c-8932-758f4c26cb40\") " pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.600908 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-4mwjl"] Nov 25 07:29:10 crc kubenswrapper[5043]: W1125 07:29:10.623490 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d08af94_ced7_41f2_a5da_4a5ab09436bb.slice/crio-9a774e245c88e01d82a6132b37458ee78e52662cf418f5793f6851940639d23f WatchSource:0}: Error finding container 9a774e245c88e01d82a6132b37458ee78e52662cf418f5793f6851940639d23f: Status 404 returned error can't find the container with id 9a774e245c88e01d82a6132b37458ee78e52662cf418f5793f6851940639d23f Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.692105 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6fde8c1-7722-4081-ae09-6f0cf5af35c4-metrics-certs\") pod \"speaker-8sqcm\" (UID: \"f6fde8c1-7722-4081-ae09-6f0cf5af35c4\") " 
pod="metallb-system/speaker-8sqcm" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.692182 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f6fde8c1-7722-4081-ae09-6f0cf5af35c4-memberlist\") pod \"speaker-8sqcm\" (UID: \"f6fde8c1-7722-4081-ae09-6f0cf5af35c4\") " pod="metallb-system/speaker-8sqcm" Nov 25 07:29:10 crc kubenswrapper[5043]: E1125 07:29:10.692334 5043 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 25 07:29:10 crc kubenswrapper[5043]: E1125 07:29:10.692449 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6fde8c1-7722-4081-ae09-6f0cf5af35c4-memberlist podName:f6fde8c1-7722-4081-ae09-6f0cf5af35c4 nodeName:}" failed. No retries permitted until 2025-11-25 07:29:11.692417553 +0000 UTC m=+815.860613264 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f6fde8c1-7722-4081-ae09-6f0cf5af35c4-memberlist") pod "speaker-8sqcm" (UID: "f6fde8c1-7722-4081-ae09-6f0cf5af35c4") : secret "metallb-memberlist" not found Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.697197 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6fde8c1-7722-4081-ae09-6f0cf5af35c4-metrics-certs\") pod \"speaker-8sqcm\" (UID: \"f6fde8c1-7722-4081-ae09-6f0cf5af35c4\") " pod="metallb-system/speaker-8sqcm" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.786369 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:10 crc kubenswrapper[5043]: I1125 07:29:10.998443 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mwjl" event={"ID":"4d08af94-ced7-41f2-a5da-4a5ab09436bb","Type":"ContainerStarted","Data":"9a774e245c88e01d82a6132b37458ee78e52662cf418f5793f6851940639d23f"} Nov 25 07:29:11 crc kubenswrapper[5043]: I1125 07:29:11.000356 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-fkqdz" event={"ID":"e214977c-6456-4990-b061-b88f5a127836","Type":"ContainerStarted","Data":"f1f8c8c2daf0151458b923613976da618e86e221e04a235e059d9ebfad59e594"} Nov 25 07:29:11 crc kubenswrapper[5043]: I1125 07:29:11.000409 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-fkqdz" event={"ID":"e214977c-6456-4990-b061-b88f5a127836","Type":"ContainerStarted","Data":"17fd266ee53387ffeb258bbb4d0a5561ec1b0e929b4371914c2fe22c34b17b2e"} Nov 25 07:29:11 crc kubenswrapper[5043]: I1125 07:29:11.000420 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-fkqdz" event={"ID":"e214977c-6456-4990-b061-b88f5a127836","Type":"ContainerStarted","Data":"5ccf9513af2497c99ded1027d89534aa629ee2b602067c18533c371feeb9943e"} Nov 25 07:29:11 crc kubenswrapper[5043]: I1125 07:29:11.000468 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-fkqdz" Nov 25 07:29:11 crc kubenswrapper[5043]: I1125 07:29:11.001529 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tdt6k" event={"ID":"781aa9bd-6e71-452c-8932-758f4c26cb40","Type":"ContainerStarted","Data":"f7216207b99d07a6a8e614be0b3cea790bf7a6619546aee07c0640537d8fb665"} Nov 25 07:29:11 crc kubenswrapper[5043]: I1125 07:29:11.703086 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" 
(UniqueName: \"kubernetes.io/secret/f6fde8c1-7722-4081-ae09-6f0cf5af35c4-memberlist\") pod \"speaker-8sqcm\" (UID: \"f6fde8c1-7722-4081-ae09-6f0cf5af35c4\") " pod="metallb-system/speaker-8sqcm" Nov 25 07:29:11 crc kubenswrapper[5043]: I1125 07:29:11.707731 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f6fde8c1-7722-4081-ae09-6f0cf5af35c4-memberlist\") pod \"speaker-8sqcm\" (UID: \"f6fde8c1-7722-4081-ae09-6f0cf5af35c4\") " pod="metallb-system/speaker-8sqcm" Nov 25 07:29:11 crc kubenswrapper[5043]: I1125 07:29:11.803566 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-8sqcm" Nov 25 07:29:11 crc kubenswrapper[5043]: W1125 07:29:11.824493 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6fde8c1_7722_4081_ae09_6f0cf5af35c4.slice/crio-d2ebf2340f4b293caf566e0d05a73de77d2aa7be9fd328883b4fe20a22bdbc6e WatchSource:0}: Error finding container d2ebf2340f4b293caf566e0d05a73de77d2aa7be9fd328883b4fe20a22bdbc6e: Status 404 returned error can't find the container with id d2ebf2340f4b293caf566e0d05a73de77d2aa7be9fd328883b4fe20a22bdbc6e Nov 25 07:29:12 crc kubenswrapper[5043]: I1125 07:29:12.008484 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8sqcm" event={"ID":"f6fde8c1-7722-4081-ae09-6f0cf5af35c4","Type":"ContainerStarted","Data":"d2ebf2340f4b293caf566e0d05a73de77d2aa7be9fd328883b4fe20a22bdbc6e"} Nov 25 07:29:13 crc kubenswrapper[5043]: I1125 07:29:13.018585 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8sqcm" event={"ID":"f6fde8c1-7722-4081-ae09-6f0cf5af35c4","Type":"ContainerStarted","Data":"462898b49ccf0029845c29ad8eabb4bd14c36d3104a9f99bd04d416e7f51a6b0"} Nov 25 07:29:13 crc kubenswrapper[5043]: I1125 07:29:13.019039 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/speaker-8sqcm" Nov 25 07:29:13 crc kubenswrapper[5043]: I1125 07:29:13.019053 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8sqcm" event={"ID":"f6fde8c1-7722-4081-ae09-6f0cf5af35c4","Type":"ContainerStarted","Data":"7ad3a2d46191ac0ac9930dd5000d59e2d0450ff738ccba48281fc9aab212917b"} Nov 25 07:29:13 crc kubenswrapper[5043]: I1125 07:29:13.036862 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-8sqcm" podStartSLOduration=4.036829464 podStartE2EDuration="4.036829464s" podCreationTimestamp="2025-11-25 07:29:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:29:13.034552903 +0000 UTC m=+817.202748614" watchObservedRunningTime="2025-11-25 07:29:13.036829464 +0000 UTC m=+817.205025185" Nov 25 07:29:13 crc kubenswrapper[5043]: I1125 07:29:13.042162 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-fkqdz" podStartSLOduration=4.042124426 podStartE2EDuration="4.042124426s" podCreationTimestamp="2025-11-25 07:29:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:29:11.015655916 +0000 UTC m=+815.183851667" watchObservedRunningTime="2025-11-25 07:29:13.042124426 +0000 UTC m=+817.210320147" Nov 25 07:29:17 crc kubenswrapper[5043]: I1125 07:29:17.277541 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:29:17 crc kubenswrapper[5043]: I1125 07:29:17.278194 5043 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:29:17 crc kubenswrapper[5043]: I1125 07:29:17.278244 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 07:29:17 crc kubenswrapper[5043]: I1125 07:29:17.279118 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"775db4b9aa6c61b7085bc9862b445a04e41b9906b056014fba7881c8d0080c48"} pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 07:29:17 crc kubenswrapper[5043]: I1125 07:29:17.279225 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://775db4b9aa6c61b7085bc9862b445a04e41b9906b056014fba7881c8d0080c48" gracePeriod=600 Nov 25 07:29:18 crc kubenswrapper[5043]: I1125 07:29:18.063547 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="775db4b9aa6c61b7085bc9862b445a04e41b9906b056014fba7881c8d0080c48" exitCode=0 Nov 25 07:29:18 crc kubenswrapper[5043]: I1125 07:29:18.063666 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"775db4b9aa6c61b7085bc9862b445a04e41b9906b056014fba7881c8d0080c48"} Nov 25 07:29:18 crc kubenswrapper[5043]: I1125 07:29:18.064258 5043 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"6f48da9589e1ae5ed9bf24bc242ace441c8f2ff30315a460e91bdc63d89d037f"} Nov 25 07:29:18 crc kubenswrapper[5043]: I1125 07:29:18.064285 5043 scope.go:117] "RemoveContainer" containerID="b1bc33196e4e3b55ff5393f391b973ffa4f1cd291219b6b6ac8f14aff8f26dd4" Nov 25 07:29:18 crc kubenswrapper[5043]: I1125 07:29:18.073239 5043 generic.go:334] "Generic (PLEG): container finished" podID="781aa9bd-6e71-452c-8932-758f4c26cb40" containerID="0856cf4eae53acc455277886edbb76c6488f5945e858da2980c91352c7413ada" exitCode=0 Nov 25 07:29:18 crc kubenswrapper[5043]: I1125 07:29:18.073310 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tdt6k" event={"ID":"781aa9bd-6e71-452c-8932-758f4c26cb40","Type":"ContainerDied","Data":"0856cf4eae53acc455277886edbb76c6488f5945e858da2980c91352c7413ada"} Nov 25 07:29:18 crc kubenswrapper[5043]: I1125 07:29:18.079736 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mwjl" event={"ID":"4d08af94-ced7-41f2-a5da-4a5ab09436bb","Type":"ContainerStarted","Data":"7377cc18119fbe0f03ad85cb8184ee8c2b2ca93df8d6243e33532abe238decd3"} Nov 25 07:29:18 crc kubenswrapper[5043]: I1125 07:29:18.080826 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mwjl" Nov 25 07:29:18 crc kubenswrapper[5043]: I1125 07:29:18.133463 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mwjl" podStartSLOduration=1.885177799 podStartE2EDuration="9.133437723s" podCreationTimestamp="2025-11-25 07:29:09 +0000 UTC" firstStartedPulling="2025-11-25 07:29:10.627098433 +0000 UTC m=+814.795294154" lastFinishedPulling="2025-11-25 07:29:17.875358357 +0000 UTC m=+822.043554078" 
observedRunningTime="2025-11-25 07:29:18.128372757 +0000 UTC m=+822.296568498" watchObservedRunningTime="2025-11-25 07:29:18.133437723 +0000 UTC m=+822.301633454" Nov 25 07:29:19 crc kubenswrapper[5043]: I1125 07:29:19.097688 5043 generic.go:334] "Generic (PLEG): container finished" podID="781aa9bd-6e71-452c-8932-758f4c26cb40" containerID="01198f7d4bb6f9eb717072e70effe3d3dd9ed39b74b492474d4605527f12e989" exitCode=0 Nov 25 07:29:19 crc kubenswrapper[5043]: I1125 07:29:19.097789 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tdt6k" event={"ID":"781aa9bd-6e71-452c-8932-758f4c26cb40","Type":"ContainerDied","Data":"01198f7d4bb6f9eb717072e70effe3d3dd9ed39b74b492474d4605527f12e989"} Nov 25 07:29:20 crc kubenswrapper[5043]: I1125 07:29:20.110981 5043 generic.go:334] "Generic (PLEG): container finished" podID="781aa9bd-6e71-452c-8932-758f4c26cb40" containerID="2f45eeece0804bd8e3be9b55371ad1c2d45405437020d4bbdc0e2929632667c9" exitCode=0 Nov 25 07:29:20 crc kubenswrapper[5043]: I1125 07:29:20.111098 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tdt6k" event={"ID":"781aa9bd-6e71-452c-8932-758f4c26cb40","Type":"ContainerDied","Data":"2f45eeece0804bd8e3be9b55371ad1c2d45405437020d4bbdc0e2929632667c9"} Nov 25 07:29:20 crc kubenswrapper[5043]: I1125 07:29:20.319493 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-fkqdz" Nov 25 07:29:21 crc kubenswrapper[5043]: I1125 07:29:21.125652 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tdt6k" event={"ID":"781aa9bd-6e71-452c-8932-758f4c26cb40","Type":"ContainerStarted","Data":"66958381c87b28dba31fbb9f3dec8f73d7c0b1f532776dd26cfdbbe31044cddc"} Nov 25 07:29:21 crc kubenswrapper[5043]: I1125 07:29:21.126844 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tdt6k" 
event={"ID":"781aa9bd-6e71-452c-8932-758f4c26cb40","Type":"ContainerStarted","Data":"f3e91709161f7ce07304b02386dfd6e2c18636362b92d81ca9166123c757489a"} Nov 25 07:29:21 crc kubenswrapper[5043]: I1125 07:29:21.126861 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tdt6k" event={"ID":"781aa9bd-6e71-452c-8932-758f4c26cb40","Type":"ContainerStarted","Data":"06e61a6d5d81c642ad1b3f8d10f4cdb9433ce8b95fe9af455a6944949460c69e"} Nov 25 07:29:21 crc kubenswrapper[5043]: I1125 07:29:21.126869 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tdt6k" event={"ID":"781aa9bd-6e71-452c-8932-758f4c26cb40","Type":"ContainerStarted","Data":"957104f2e04d9cf9c17c16b9cb8a2b72aee37ad7ce3be7a1712f8415efd4ad05"} Nov 25 07:29:21 crc kubenswrapper[5043]: I1125 07:29:21.126887 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tdt6k" event={"ID":"781aa9bd-6e71-452c-8932-758f4c26cb40","Type":"ContainerStarted","Data":"4c5fff92161a0156783e3f4f6306ee160327124c9c46cb59bdfb7bfc490ff3cc"} Nov 25 07:29:22 crc kubenswrapper[5043]: I1125 07:29:22.136122 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tdt6k" event={"ID":"781aa9bd-6e71-452c-8932-758f4c26cb40","Type":"ContainerStarted","Data":"e83a3d668feb96192bcb183fcf5e764055641a66c708da745e13eef722efebd0"} Nov 25 07:29:22 crc kubenswrapper[5043]: I1125 07:29:22.136509 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:22 crc kubenswrapper[5043]: I1125 07:29:22.170320 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-tdt6k" podStartSLOduration=6.233438412 podStartE2EDuration="13.170295131s" podCreationTimestamp="2025-11-25 07:29:09 +0000 UTC" firstStartedPulling="2025-11-25 07:29:10.908174055 +0000 UTC m=+815.076369776" lastFinishedPulling="2025-11-25 07:29:17.845030774 +0000 UTC m=+822.013226495" 
observedRunningTime="2025-11-25 07:29:22.165668017 +0000 UTC m=+826.333863758" watchObservedRunningTime="2025-11-25 07:29:22.170295131 +0000 UTC m=+826.338490892" Nov 25 07:29:24 crc kubenswrapper[5043]: I1125 07:29:24.148787 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2kmq8"] Nov 25 07:29:24 crc kubenswrapper[5043]: I1125 07:29:24.152068 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2kmq8" Nov 25 07:29:24 crc kubenswrapper[5043]: I1125 07:29:24.161574 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2kmq8"] Nov 25 07:29:24 crc kubenswrapper[5043]: I1125 07:29:24.190750 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4039c865-bb15-4063-8d81-ffdf91dd986d-utilities\") pod \"certified-operators-2kmq8\" (UID: \"4039c865-bb15-4063-8d81-ffdf91dd986d\") " pod="openshift-marketplace/certified-operators-2kmq8" Nov 25 07:29:24 crc kubenswrapper[5043]: I1125 07:29:24.190866 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4039c865-bb15-4063-8d81-ffdf91dd986d-catalog-content\") pod \"certified-operators-2kmq8\" (UID: \"4039c865-bb15-4063-8d81-ffdf91dd986d\") " pod="openshift-marketplace/certified-operators-2kmq8" Nov 25 07:29:24 crc kubenswrapper[5043]: I1125 07:29:24.190904 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dskrt\" (UniqueName: \"kubernetes.io/projected/4039c865-bb15-4063-8d81-ffdf91dd986d-kube-api-access-dskrt\") pod \"certified-operators-2kmq8\" (UID: \"4039c865-bb15-4063-8d81-ffdf91dd986d\") " pod="openshift-marketplace/certified-operators-2kmq8" Nov 25 07:29:24 crc kubenswrapper[5043]: I1125 
07:29:24.292668 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4039c865-bb15-4063-8d81-ffdf91dd986d-catalog-content\") pod \"certified-operators-2kmq8\" (UID: \"4039c865-bb15-4063-8d81-ffdf91dd986d\") " pod="openshift-marketplace/certified-operators-2kmq8" Nov 25 07:29:24 crc kubenswrapper[5043]: I1125 07:29:24.292914 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dskrt\" (UniqueName: \"kubernetes.io/projected/4039c865-bb15-4063-8d81-ffdf91dd986d-kube-api-access-dskrt\") pod \"certified-operators-2kmq8\" (UID: \"4039c865-bb15-4063-8d81-ffdf91dd986d\") " pod="openshift-marketplace/certified-operators-2kmq8" Nov 25 07:29:24 crc kubenswrapper[5043]: I1125 07:29:24.293062 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4039c865-bb15-4063-8d81-ffdf91dd986d-utilities\") pod \"certified-operators-2kmq8\" (UID: \"4039c865-bb15-4063-8d81-ffdf91dd986d\") " pod="openshift-marketplace/certified-operators-2kmq8" Nov 25 07:29:24 crc kubenswrapper[5043]: I1125 07:29:24.293310 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4039c865-bb15-4063-8d81-ffdf91dd986d-catalog-content\") pod \"certified-operators-2kmq8\" (UID: \"4039c865-bb15-4063-8d81-ffdf91dd986d\") " pod="openshift-marketplace/certified-operators-2kmq8" Nov 25 07:29:24 crc kubenswrapper[5043]: I1125 07:29:24.293447 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4039c865-bb15-4063-8d81-ffdf91dd986d-utilities\") pod \"certified-operators-2kmq8\" (UID: \"4039c865-bb15-4063-8d81-ffdf91dd986d\") " pod="openshift-marketplace/certified-operators-2kmq8" Nov 25 07:29:24 crc kubenswrapper[5043]: I1125 07:29:24.318043 5043 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dskrt\" (UniqueName: \"kubernetes.io/projected/4039c865-bb15-4063-8d81-ffdf91dd986d-kube-api-access-dskrt\") pod \"certified-operators-2kmq8\" (UID: \"4039c865-bb15-4063-8d81-ffdf91dd986d\") " pod="openshift-marketplace/certified-operators-2kmq8" Nov 25 07:29:24 crc kubenswrapper[5043]: I1125 07:29:24.471247 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2kmq8" Nov 25 07:29:24 crc kubenswrapper[5043]: I1125 07:29:24.905796 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2kmq8"] Nov 25 07:29:24 crc kubenswrapper[5043]: W1125 07:29:24.916207 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4039c865_bb15_4063_8d81_ffdf91dd986d.slice/crio-dae08dd9bae4720457ed04a249fd3b8cde016d84ba9ea53a89a8163e0dba9462 WatchSource:0}: Error finding container dae08dd9bae4720457ed04a249fd3b8cde016d84ba9ea53a89a8163e0dba9462: Status 404 returned error can't find the container with id dae08dd9bae4720457ed04a249fd3b8cde016d84ba9ea53a89a8163e0dba9462 Nov 25 07:29:25 crc kubenswrapper[5043]: I1125 07:29:25.161397 5043 generic.go:334] "Generic (PLEG): container finished" podID="4039c865-bb15-4063-8d81-ffdf91dd986d" containerID="373973cea7a60ce0db605974e2865a9956ce3cdf41aa312d98e8dca299e87e0d" exitCode=0 Nov 25 07:29:25 crc kubenswrapper[5043]: I1125 07:29:25.161521 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2kmq8" event={"ID":"4039c865-bb15-4063-8d81-ffdf91dd986d","Type":"ContainerDied","Data":"373973cea7a60ce0db605974e2865a9956ce3cdf41aa312d98e8dca299e87e0d"} Nov 25 07:29:25 crc kubenswrapper[5043]: I1125 07:29:25.164391 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2kmq8" 
event={"ID":"4039c865-bb15-4063-8d81-ffdf91dd986d","Type":"ContainerStarted","Data":"dae08dd9bae4720457ed04a249fd3b8cde016d84ba9ea53a89a8163e0dba9462"} Nov 25 07:29:25 crc kubenswrapper[5043]: I1125 07:29:25.786901 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:25 crc kubenswrapper[5043]: I1125 07:29:25.831969 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:26 crc kubenswrapper[5043]: I1125 07:29:26.172311 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2kmq8" event={"ID":"4039c865-bb15-4063-8d81-ffdf91dd986d","Type":"ContainerStarted","Data":"31ab5ec0d9005c2478dc7b46380f44f1fefed842943364cfad55acbc5b621c6b"} Nov 25 07:29:27 crc kubenswrapper[5043]: I1125 07:29:27.184026 5043 generic.go:334] "Generic (PLEG): container finished" podID="4039c865-bb15-4063-8d81-ffdf91dd986d" containerID="31ab5ec0d9005c2478dc7b46380f44f1fefed842943364cfad55acbc5b621c6b" exitCode=0 Nov 25 07:29:27 crc kubenswrapper[5043]: I1125 07:29:27.184140 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2kmq8" event={"ID":"4039c865-bb15-4063-8d81-ffdf91dd986d","Type":"ContainerDied","Data":"31ab5ec0d9005c2478dc7b46380f44f1fefed842943364cfad55acbc5b621c6b"} Nov 25 07:29:28 crc kubenswrapper[5043]: I1125 07:29:28.193239 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2kmq8" event={"ID":"4039c865-bb15-4063-8d81-ffdf91dd986d","Type":"ContainerStarted","Data":"b638283b8b94cd81afbd445d085d894a22628fcb008e2f763e49492f30130dc0"} Nov 25 07:29:28 crc kubenswrapper[5043]: I1125 07:29:28.222277 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2kmq8" podStartSLOduration=1.735679283 podStartE2EDuration="4.222251403s" 
podCreationTimestamp="2025-11-25 07:29:24 +0000 UTC" firstStartedPulling="2025-11-25 07:29:25.164899086 +0000 UTC m=+829.333094827" lastFinishedPulling="2025-11-25 07:29:27.651471226 +0000 UTC m=+831.819666947" observedRunningTime="2025-11-25 07:29:28.221288658 +0000 UTC m=+832.389484409" watchObservedRunningTime="2025-11-25 07:29:28.222251403 +0000 UTC m=+832.390447144" Nov 25 07:29:30 crc kubenswrapper[5043]: I1125 07:29:30.180512 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-4mwjl" Nov 25 07:29:30 crc kubenswrapper[5043]: I1125 07:29:30.791511 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-tdt6k" Nov 25 07:29:31 crc kubenswrapper[5043]: I1125 07:29:31.808976 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-8sqcm" Nov 25 07:29:34 crc kubenswrapper[5043]: I1125 07:29:34.471943 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2kmq8" Nov 25 07:29:34 crc kubenswrapper[5043]: I1125 07:29:34.472774 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2kmq8" Nov 25 07:29:34 crc kubenswrapper[5043]: I1125 07:29:34.523998 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2kmq8" Nov 25 07:29:35 crc kubenswrapper[5043]: I1125 07:29:35.277743 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2kmq8" Nov 25 07:29:39 crc kubenswrapper[5043]: I1125 07:29:39.340995 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-5l5dq"] Nov 25 07:29:39 crc kubenswrapper[5043]: I1125 07:29:39.342377 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-5l5dq" Nov 25 07:29:39 crc kubenswrapper[5043]: I1125 07:29:39.343951 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 25 07:29:39 crc kubenswrapper[5043]: I1125 07:29:39.344667 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 25 07:29:39 crc kubenswrapper[5043]: I1125 07:29:39.344707 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-2ggpd" Nov 25 07:29:39 crc kubenswrapper[5043]: I1125 07:29:39.351478 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5l5dq"] Nov 25 07:29:39 crc kubenswrapper[5043]: I1125 07:29:39.411347 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sk9n\" (UniqueName: \"kubernetes.io/projected/19b11f01-6c92-48f2-a21c-1dc2c8658865-kube-api-access-4sk9n\") pod \"openstack-operator-index-5l5dq\" (UID: \"19b11f01-6c92-48f2-a21c-1dc2c8658865\") " pod="openstack-operators/openstack-operator-index-5l5dq" Nov 25 07:29:39 crc kubenswrapper[5043]: I1125 07:29:39.513234 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sk9n\" (UniqueName: \"kubernetes.io/projected/19b11f01-6c92-48f2-a21c-1dc2c8658865-kube-api-access-4sk9n\") pod \"openstack-operator-index-5l5dq\" (UID: \"19b11f01-6c92-48f2-a21c-1dc2c8658865\") " pod="openstack-operators/openstack-operator-index-5l5dq" Nov 25 07:29:39 crc kubenswrapper[5043]: I1125 07:29:39.530697 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sk9n\" (UniqueName: \"kubernetes.io/projected/19b11f01-6c92-48f2-a21c-1dc2c8658865-kube-api-access-4sk9n\") pod \"openstack-operator-index-5l5dq\" (UID: 
\"19b11f01-6c92-48f2-a21c-1dc2c8658865\") " pod="openstack-operators/openstack-operator-index-5l5dq" Nov 25 07:29:39 crc kubenswrapper[5043]: I1125 07:29:39.695344 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5l5dq" Nov 25 07:29:39 crc kubenswrapper[5043]: I1125 07:29:39.939445 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2kmq8"] Nov 25 07:29:39 crc kubenswrapper[5043]: I1125 07:29:39.940048 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2kmq8" podUID="4039c865-bb15-4063-8d81-ffdf91dd986d" containerName="registry-server" containerID="cri-o://b638283b8b94cd81afbd445d085d894a22628fcb008e2f763e49492f30130dc0" gracePeriod=2 Nov 25 07:29:40 crc kubenswrapper[5043]: I1125 07:29:40.175052 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5l5dq"] Nov 25 07:29:40 crc kubenswrapper[5043]: W1125 07:29:40.187582 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19b11f01_6c92_48f2_a21c_1dc2c8658865.slice/crio-1dd9089a7813377b834ea031cf7c29d52665de560d6170f442cdb6fcf25ede87 WatchSource:0}: Error finding container 1dd9089a7813377b834ea031cf7c29d52665de560d6170f442cdb6fcf25ede87: Status 404 returned error can't find the container with id 1dd9089a7813377b834ea031cf7c29d52665de560d6170f442cdb6fcf25ede87 Nov 25 07:29:40 crc kubenswrapper[5043]: I1125 07:29:40.280396 5043 generic.go:334] "Generic (PLEG): container finished" podID="4039c865-bb15-4063-8d81-ffdf91dd986d" containerID="b638283b8b94cd81afbd445d085d894a22628fcb008e2f763e49492f30130dc0" exitCode=0 Nov 25 07:29:40 crc kubenswrapper[5043]: I1125 07:29:40.280462 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2kmq8" 
event={"ID":"4039c865-bb15-4063-8d81-ffdf91dd986d","Type":"ContainerDied","Data":"b638283b8b94cd81afbd445d085d894a22628fcb008e2f763e49492f30130dc0"} Nov 25 07:29:40 crc kubenswrapper[5043]: I1125 07:29:40.281433 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5l5dq" event={"ID":"19b11f01-6c92-48f2-a21c-1dc2c8658865","Type":"ContainerStarted","Data":"1dd9089a7813377b834ea031cf7c29d52665de560d6170f442cdb6fcf25ede87"} Nov 25 07:29:40 crc kubenswrapper[5043]: I1125 07:29:40.336369 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2kmq8" Nov 25 07:29:40 crc kubenswrapper[5043]: I1125 07:29:40.431573 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4039c865-bb15-4063-8d81-ffdf91dd986d-catalog-content\") pod \"4039c865-bb15-4063-8d81-ffdf91dd986d\" (UID: \"4039c865-bb15-4063-8d81-ffdf91dd986d\") " Nov 25 07:29:40 crc kubenswrapper[5043]: I1125 07:29:40.431651 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4039c865-bb15-4063-8d81-ffdf91dd986d-utilities\") pod \"4039c865-bb15-4063-8d81-ffdf91dd986d\" (UID: \"4039c865-bb15-4063-8d81-ffdf91dd986d\") " Nov 25 07:29:40 crc kubenswrapper[5043]: I1125 07:29:40.431799 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dskrt\" (UniqueName: \"kubernetes.io/projected/4039c865-bb15-4063-8d81-ffdf91dd986d-kube-api-access-dskrt\") pod \"4039c865-bb15-4063-8d81-ffdf91dd986d\" (UID: \"4039c865-bb15-4063-8d81-ffdf91dd986d\") " Nov 25 07:29:40 crc kubenswrapper[5043]: I1125 07:29:40.434089 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4039c865-bb15-4063-8d81-ffdf91dd986d-utilities" (OuterVolumeSpecName: "utilities") pod 
"4039c865-bb15-4063-8d81-ffdf91dd986d" (UID: "4039c865-bb15-4063-8d81-ffdf91dd986d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:29:40 crc kubenswrapper[5043]: I1125 07:29:40.438896 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4039c865-bb15-4063-8d81-ffdf91dd986d-kube-api-access-dskrt" (OuterVolumeSpecName: "kube-api-access-dskrt") pod "4039c865-bb15-4063-8d81-ffdf91dd986d" (UID: "4039c865-bb15-4063-8d81-ffdf91dd986d"). InnerVolumeSpecName "kube-api-access-dskrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:29:40 crc kubenswrapper[5043]: I1125 07:29:40.478727 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4039c865-bb15-4063-8d81-ffdf91dd986d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4039c865-bb15-4063-8d81-ffdf91dd986d" (UID: "4039c865-bb15-4063-8d81-ffdf91dd986d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:29:40 crc kubenswrapper[5043]: I1125 07:29:40.533068 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4039c865-bb15-4063-8d81-ffdf91dd986d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:29:40 crc kubenswrapper[5043]: I1125 07:29:40.533099 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4039c865-bb15-4063-8d81-ffdf91dd986d-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:29:40 crc kubenswrapper[5043]: I1125 07:29:40.533109 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dskrt\" (UniqueName: \"kubernetes.io/projected/4039c865-bb15-4063-8d81-ffdf91dd986d-kube-api-access-dskrt\") on node \"crc\" DevicePath \"\"" Nov 25 07:29:41 crc kubenswrapper[5043]: I1125 07:29:41.289939 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2kmq8" event={"ID":"4039c865-bb15-4063-8d81-ffdf91dd986d","Type":"ContainerDied","Data":"dae08dd9bae4720457ed04a249fd3b8cde016d84ba9ea53a89a8163e0dba9462"} Nov 25 07:29:41 crc kubenswrapper[5043]: I1125 07:29:41.289955 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2kmq8" Nov 25 07:29:41 crc kubenswrapper[5043]: I1125 07:29:41.290381 5043 scope.go:117] "RemoveContainer" containerID="b638283b8b94cd81afbd445d085d894a22628fcb008e2f763e49492f30130dc0" Nov 25 07:29:41 crc kubenswrapper[5043]: I1125 07:29:41.292048 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5l5dq" event={"ID":"19b11f01-6c92-48f2-a21c-1dc2c8658865","Type":"ContainerStarted","Data":"c039f38ebd4a2e7b78a6ba5d3039cee3bc09dccb7e890b0092d3efbaf75bea5a"} Nov 25 07:29:41 crc kubenswrapper[5043]: I1125 07:29:41.306152 5043 scope.go:117] "RemoveContainer" containerID="31ab5ec0d9005c2478dc7b46380f44f1fefed842943364cfad55acbc5b621c6b" Nov 25 07:29:41 crc kubenswrapper[5043]: I1125 07:29:41.318084 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2kmq8"] Nov 25 07:29:41 crc kubenswrapper[5043]: I1125 07:29:41.333443 5043 scope.go:117] "RemoveContainer" containerID="373973cea7a60ce0db605974e2865a9956ce3cdf41aa312d98e8dca299e87e0d" Nov 25 07:29:41 crc kubenswrapper[5043]: I1125 07:29:41.335416 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2kmq8"] Nov 25 07:29:41 crc kubenswrapper[5043]: I1125 07:29:41.353704 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-5l5dq" podStartSLOduration=1.535055817 podStartE2EDuration="2.353685246s" podCreationTimestamp="2025-11-25 07:29:39 +0000 UTC" firstStartedPulling="2025-11-25 07:29:40.190922384 +0000 UTC m=+844.359118095" lastFinishedPulling="2025-11-25 07:29:41.009551793 +0000 UTC m=+845.177747524" observedRunningTime="2025-11-25 07:29:41.350316836 +0000 UTC m=+845.518512557" watchObservedRunningTime="2025-11-25 07:29:41.353685246 +0000 UTC m=+845.521880967" Nov 25 07:29:42 crc kubenswrapper[5043]: I1125 07:29:42.974711 5043 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4039c865-bb15-4063-8d81-ffdf91dd986d" path="/var/lib/kubelet/pods/4039c865-bb15-4063-8d81-ffdf91dd986d/volumes" Nov 25 07:29:45 crc kubenswrapper[5043]: I1125 07:29:45.935523 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-5l5dq"] Nov 25 07:29:45 crc kubenswrapper[5043]: I1125 07:29:45.937117 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-5l5dq" podUID="19b11f01-6c92-48f2-a21c-1dc2c8658865" containerName="registry-server" containerID="cri-o://c039f38ebd4a2e7b78a6ba5d3039cee3bc09dccb7e890b0092d3efbaf75bea5a" gracePeriod=2 Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.305150 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5l5dq" Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.333059 5043 generic.go:334] "Generic (PLEG): container finished" podID="19b11f01-6c92-48f2-a21c-1dc2c8658865" containerID="c039f38ebd4a2e7b78a6ba5d3039cee3bc09dccb7e890b0092d3efbaf75bea5a" exitCode=0 Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.333099 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5l5dq" event={"ID":"19b11f01-6c92-48f2-a21c-1dc2c8658865","Type":"ContainerDied","Data":"c039f38ebd4a2e7b78a6ba5d3039cee3bc09dccb7e890b0092d3efbaf75bea5a"} Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.333124 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5l5dq" event={"ID":"19b11f01-6c92-48f2-a21c-1dc2c8658865","Type":"ContainerDied","Data":"1dd9089a7813377b834ea031cf7c29d52665de560d6170f442cdb6fcf25ede87"} Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.333125 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-5l5dq" Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.333139 5043 scope.go:117] "RemoveContainer" containerID="c039f38ebd4a2e7b78a6ba5d3039cee3bc09dccb7e890b0092d3efbaf75bea5a" Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.349975 5043 scope.go:117] "RemoveContainer" containerID="c039f38ebd4a2e7b78a6ba5d3039cee3bc09dccb7e890b0092d3efbaf75bea5a" Nov 25 07:29:46 crc kubenswrapper[5043]: E1125 07:29:46.350427 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c039f38ebd4a2e7b78a6ba5d3039cee3bc09dccb7e890b0092d3efbaf75bea5a\": container with ID starting with c039f38ebd4a2e7b78a6ba5d3039cee3bc09dccb7e890b0092d3efbaf75bea5a not found: ID does not exist" containerID="c039f38ebd4a2e7b78a6ba5d3039cee3bc09dccb7e890b0092d3efbaf75bea5a" Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.350463 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c039f38ebd4a2e7b78a6ba5d3039cee3bc09dccb7e890b0092d3efbaf75bea5a"} err="failed to get container status \"c039f38ebd4a2e7b78a6ba5d3039cee3bc09dccb7e890b0092d3efbaf75bea5a\": rpc error: code = NotFound desc = could not find container \"c039f38ebd4a2e7b78a6ba5d3039cee3bc09dccb7e890b0092d3efbaf75bea5a\": container with ID starting with c039f38ebd4a2e7b78a6ba5d3039cee3bc09dccb7e890b0092d3efbaf75bea5a not found: ID does not exist" Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.417400 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sk9n\" (UniqueName: \"kubernetes.io/projected/19b11f01-6c92-48f2-a21c-1dc2c8658865-kube-api-access-4sk9n\") pod \"19b11f01-6c92-48f2-a21c-1dc2c8658865\" (UID: \"19b11f01-6c92-48f2-a21c-1dc2c8658865\") " Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.423996 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/19b11f01-6c92-48f2-a21c-1dc2c8658865-kube-api-access-4sk9n" (OuterVolumeSpecName: "kube-api-access-4sk9n") pod "19b11f01-6c92-48f2-a21c-1dc2c8658865" (UID: "19b11f01-6c92-48f2-a21c-1dc2c8658865"). InnerVolumeSpecName "kube-api-access-4sk9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.519589 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sk9n\" (UniqueName: \"kubernetes.io/projected/19b11f01-6c92-48f2-a21c-1dc2c8658865-kube-api-access-4sk9n\") on node \"crc\" DevicePath \"\"" Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.689395 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-5l5dq"] Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.692996 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-5l5dq"] Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.746587 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vl25g"] Nov 25 07:29:46 crc kubenswrapper[5043]: E1125 07:29:46.747105 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19b11f01-6c92-48f2-a21c-1dc2c8658865" containerName="registry-server" Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.747199 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="19b11f01-6c92-48f2-a21c-1dc2c8658865" containerName="registry-server" Nov 25 07:29:46 crc kubenswrapper[5043]: E1125 07:29:46.747266 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4039c865-bb15-4063-8d81-ffdf91dd986d" containerName="extract-utilities" Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.747326 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4039c865-bb15-4063-8d81-ffdf91dd986d" containerName="extract-utilities" Nov 25 07:29:46 crc kubenswrapper[5043]: E1125 07:29:46.747397 
5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4039c865-bb15-4063-8d81-ffdf91dd986d" containerName="registry-server" Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.747458 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4039c865-bb15-4063-8d81-ffdf91dd986d" containerName="registry-server" Nov 25 07:29:46 crc kubenswrapper[5043]: E1125 07:29:46.747533 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4039c865-bb15-4063-8d81-ffdf91dd986d" containerName="extract-content" Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.747681 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4039c865-bb15-4063-8d81-ffdf91dd986d" containerName="extract-content" Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.747907 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="19b11f01-6c92-48f2-a21c-1dc2c8658865" containerName="registry-server" Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.747985 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="4039c865-bb15-4063-8d81-ffdf91dd986d" containerName="registry-server" Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.748512 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vl25g" Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.752169 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.752488 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.752642 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-2ggpd" Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.758074 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vl25g"] Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.823398 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5mfg\" (UniqueName: \"kubernetes.io/projected/d14cb4f9-dc65-4999-833a-475d3f735715-kube-api-access-f5mfg\") pod \"openstack-operator-index-vl25g\" (UID: \"d14cb4f9-dc65-4999-833a-475d3f735715\") " pod="openstack-operators/openstack-operator-index-vl25g" Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.925575 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5mfg\" (UniqueName: \"kubernetes.io/projected/d14cb4f9-dc65-4999-833a-475d3f735715-kube-api-access-f5mfg\") pod \"openstack-operator-index-vl25g\" (UID: \"d14cb4f9-dc65-4999-833a-475d3f735715\") " pod="openstack-operators/openstack-operator-index-vl25g" Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.951711 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5mfg\" (UniqueName: \"kubernetes.io/projected/d14cb4f9-dc65-4999-833a-475d3f735715-kube-api-access-f5mfg\") pod \"openstack-operator-index-vl25g\" (UID: 
\"d14cb4f9-dc65-4999-833a-475d3f735715\") " pod="openstack-operators/openstack-operator-index-vl25g" Nov 25 07:29:46 crc kubenswrapper[5043]: I1125 07:29:46.970993 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19b11f01-6c92-48f2-a21c-1dc2c8658865" path="/var/lib/kubelet/pods/19b11f01-6c92-48f2-a21c-1dc2c8658865/volumes" Nov 25 07:29:47 crc kubenswrapper[5043]: I1125 07:29:47.063961 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vl25g" Nov 25 07:29:47 crc kubenswrapper[5043]: I1125 07:29:47.479861 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vl25g"] Nov 25 07:29:48 crc kubenswrapper[5043]: I1125 07:29:48.350070 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vl25g" event={"ID":"d14cb4f9-dc65-4999-833a-475d3f735715","Type":"ContainerStarted","Data":"f7baeefaf89f97044b68e9cbdfd6c94ab0b504aec9592b4300b94d55af8c603c"} Nov 25 07:29:48 crc kubenswrapper[5043]: I1125 07:29:48.350754 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vl25g" event={"ID":"d14cb4f9-dc65-4999-833a-475d3f735715","Type":"ContainerStarted","Data":"f3682b5f35b391a2dd2fbaba61ff858b659c7d6a3cf3801710ed4f1ab691a4fc"} Nov 25 07:29:48 crc kubenswrapper[5043]: I1125 07:29:48.369240 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vl25g" podStartSLOduration=1.763002234 podStartE2EDuration="2.369221641s" podCreationTimestamp="2025-11-25 07:29:46 +0000 UTC" firstStartedPulling="2025-11-25 07:29:47.494755794 +0000 UTC m=+851.662951515" lastFinishedPulling="2025-11-25 07:29:48.100975201 +0000 UTC m=+852.269170922" observedRunningTime="2025-11-25 07:29:48.365379207 +0000 UTC m=+852.533574928" watchObservedRunningTime="2025-11-25 07:29:48.369221641 +0000 UTC 
m=+852.537417362" Nov 25 07:29:55 crc kubenswrapper[5043]: I1125 07:29:55.546122 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w2pjq"] Nov 25 07:29:55 crc kubenswrapper[5043]: I1125 07:29:55.547657 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2pjq" Nov 25 07:29:55 crc kubenswrapper[5043]: I1125 07:29:55.558879 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w2pjq"] Nov 25 07:29:55 crc kubenswrapper[5043]: I1125 07:29:55.643272 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cac4e27-3572-4c74-86e1-4203f9404939-utilities\") pod \"community-operators-w2pjq\" (UID: \"8cac4e27-3572-4c74-86e1-4203f9404939\") " pod="openshift-marketplace/community-operators-w2pjq" Nov 25 07:29:55 crc kubenswrapper[5043]: I1125 07:29:55.643324 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgtwv\" (UniqueName: \"kubernetes.io/projected/8cac4e27-3572-4c74-86e1-4203f9404939-kube-api-access-vgtwv\") pod \"community-operators-w2pjq\" (UID: \"8cac4e27-3572-4c74-86e1-4203f9404939\") " pod="openshift-marketplace/community-operators-w2pjq" Nov 25 07:29:55 crc kubenswrapper[5043]: I1125 07:29:55.643355 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cac4e27-3572-4c74-86e1-4203f9404939-catalog-content\") pod \"community-operators-w2pjq\" (UID: \"8cac4e27-3572-4c74-86e1-4203f9404939\") " pod="openshift-marketplace/community-operators-w2pjq" Nov 25 07:29:55 crc kubenswrapper[5043]: I1125 07:29:55.744496 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8cac4e27-3572-4c74-86e1-4203f9404939-utilities\") pod \"community-operators-w2pjq\" (UID: \"8cac4e27-3572-4c74-86e1-4203f9404939\") " pod="openshift-marketplace/community-operators-w2pjq" Nov 25 07:29:55 crc kubenswrapper[5043]: I1125 07:29:55.744546 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgtwv\" (UniqueName: \"kubernetes.io/projected/8cac4e27-3572-4c74-86e1-4203f9404939-kube-api-access-vgtwv\") pod \"community-operators-w2pjq\" (UID: \"8cac4e27-3572-4c74-86e1-4203f9404939\") " pod="openshift-marketplace/community-operators-w2pjq" Nov 25 07:29:55 crc kubenswrapper[5043]: I1125 07:29:55.744573 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cac4e27-3572-4c74-86e1-4203f9404939-catalog-content\") pod \"community-operators-w2pjq\" (UID: \"8cac4e27-3572-4c74-86e1-4203f9404939\") " pod="openshift-marketplace/community-operators-w2pjq" Nov 25 07:29:55 crc kubenswrapper[5043]: I1125 07:29:55.745032 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cac4e27-3572-4c74-86e1-4203f9404939-catalog-content\") pod \"community-operators-w2pjq\" (UID: \"8cac4e27-3572-4c74-86e1-4203f9404939\") " pod="openshift-marketplace/community-operators-w2pjq" Nov 25 07:29:55 crc kubenswrapper[5043]: I1125 07:29:55.746025 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cac4e27-3572-4c74-86e1-4203f9404939-utilities\") pod \"community-operators-w2pjq\" (UID: \"8cac4e27-3572-4c74-86e1-4203f9404939\") " pod="openshift-marketplace/community-operators-w2pjq" Nov 25 07:29:55 crc kubenswrapper[5043]: I1125 07:29:55.785714 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgtwv\" (UniqueName: 
\"kubernetes.io/projected/8cac4e27-3572-4c74-86e1-4203f9404939-kube-api-access-vgtwv\") pod \"community-operators-w2pjq\" (UID: \"8cac4e27-3572-4c74-86e1-4203f9404939\") " pod="openshift-marketplace/community-operators-w2pjq" Nov 25 07:29:55 crc kubenswrapper[5043]: I1125 07:29:55.868652 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2pjq" Nov 25 07:29:56 crc kubenswrapper[5043]: I1125 07:29:56.450853 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w2pjq"] Nov 25 07:29:57 crc kubenswrapper[5043]: I1125 07:29:57.064685 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-vl25g" Nov 25 07:29:57 crc kubenswrapper[5043]: I1125 07:29:57.064912 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-vl25g" Nov 25 07:29:57 crc kubenswrapper[5043]: I1125 07:29:57.095422 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-vl25g" Nov 25 07:29:57 crc kubenswrapper[5043]: I1125 07:29:57.416454 5043 generic.go:334] "Generic (PLEG): container finished" podID="8cac4e27-3572-4c74-86e1-4203f9404939" containerID="4f3385fc9bd76c039c957473d3012d53324ad11fb188f489c898124477b9ff59" exitCode=0 Nov 25 07:29:57 crc kubenswrapper[5043]: I1125 07:29:57.416511 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2pjq" event={"ID":"8cac4e27-3572-4c74-86e1-4203f9404939","Type":"ContainerDied","Data":"4f3385fc9bd76c039c957473d3012d53324ad11fb188f489c898124477b9ff59"} Nov 25 07:29:57 crc kubenswrapper[5043]: I1125 07:29:57.416552 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2pjq" 
event={"ID":"8cac4e27-3572-4c74-86e1-4203f9404939","Type":"ContainerStarted","Data":"8a02059a8a5cb921c275910cd2923bea04066841d605aae14c90d7ecf8ab34d5"} Nov 25 07:29:57 crc kubenswrapper[5043]: I1125 07:29:57.455038 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-vl25g" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.023741 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb"] Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.025479 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.028321 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-7snwc" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.028942 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb"] Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.108410 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee5be134-f74b-42f1-b99e-7ec2690c99c4-util\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb\" (UID: \"ee5be134-f74b-42f1-b99e-7ec2690c99c4\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.108457 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee5be134-f74b-42f1-b99e-7ec2690c99c4-bundle\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb\" (UID: 
\"ee5be134-f74b-42f1-b99e-7ec2690c99c4\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.108526 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v85vd\" (UniqueName: \"kubernetes.io/projected/ee5be134-f74b-42f1-b99e-7ec2690c99c4-kube-api-access-v85vd\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb\" (UID: \"ee5be134-f74b-42f1-b99e-7ec2690c99c4\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.133288 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400930-9plc7"] Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.134507 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-9plc7" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.139211 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400930-9plc7"] Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.145873 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.145991 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.209402 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kstxt\" (UniqueName: \"kubernetes.io/projected/31bedd18-64d9-4295-a081-c458536855b0-kube-api-access-kstxt\") pod \"collect-profiles-29400930-9plc7\" (UID: 
\"31bedd18-64d9-4295-a081-c458536855b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-9plc7" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.209766 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31bedd18-64d9-4295-a081-c458536855b0-secret-volume\") pod \"collect-profiles-29400930-9plc7\" (UID: \"31bedd18-64d9-4295-a081-c458536855b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-9plc7" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.209810 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v85vd\" (UniqueName: \"kubernetes.io/projected/ee5be134-f74b-42f1-b99e-7ec2690c99c4-kube-api-access-v85vd\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb\" (UID: \"ee5be134-f74b-42f1-b99e-7ec2690c99c4\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.210058 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31bedd18-64d9-4295-a081-c458536855b0-config-volume\") pod \"collect-profiles-29400930-9plc7\" (UID: \"31bedd18-64d9-4295-a081-c458536855b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-9plc7" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.210106 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee5be134-f74b-42f1-b99e-7ec2690c99c4-util\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb\" (UID: \"ee5be134-f74b-42f1-b99e-7ec2690c99c4\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.210225 5043 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee5be134-f74b-42f1-b99e-7ec2690c99c4-bundle\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb\" (UID: \"ee5be134-f74b-42f1-b99e-7ec2690c99c4\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.210526 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee5be134-f74b-42f1-b99e-7ec2690c99c4-util\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb\" (UID: \"ee5be134-f74b-42f1-b99e-7ec2690c99c4\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.210676 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee5be134-f74b-42f1-b99e-7ec2690c99c4-bundle\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb\" (UID: \"ee5be134-f74b-42f1-b99e-7ec2690c99c4\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.228017 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v85vd\" (UniqueName: \"kubernetes.io/projected/ee5be134-f74b-42f1-b99e-7ec2690c99c4-kube-api-access-v85vd\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb\" (UID: \"ee5be134-f74b-42f1-b99e-7ec2690c99c4\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.311375 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/31bedd18-64d9-4295-a081-c458536855b0-config-volume\") pod \"collect-profiles-29400930-9plc7\" (UID: \"31bedd18-64d9-4295-a081-c458536855b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-9plc7" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.311454 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kstxt\" (UniqueName: \"kubernetes.io/projected/31bedd18-64d9-4295-a081-c458536855b0-kube-api-access-kstxt\") pod \"collect-profiles-29400930-9plc7\" (UID: \"31bedd18-64d9-4295-a081-c458536855b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-9plc7" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.311486 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31bedd18-64d9-4295-a081-c458536855b0-secret-volume\") pod \"collect-profiles-29400930-9plc7\" (UID: \"31bedd18-64d9-4295-a081-c458536855b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-9plc7" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.312762 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31bedd18-64d9-4295-a081-c458536855b0-config-volume\") pod \"collect-profiles-29400930-9plc7\" (UID: \"31bedd18-64d9-4295-a081-c458536855b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-9plc7" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.315524 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31bedd18-64d9-4295-a081-c458536855b0-secret-volume\") pod \"collect-profiles-29400930-9plc7\" (UID: \"31bedd18-64d9-4295-a081-c458536855b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-9plc7" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.332865 5043 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kstxt\" (UniqueName: \"kubernetes.io/projected/31bedd18-64d9-4295-a081-c458536855b0-kube-api-access-kstxt\") pod \"collect-profiles-29400930-9plc7\" (UID: \"31bedd18-64d9-4295-a081-c458536855b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-9plc7" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.341052 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb" Nov 25 07:30:00 crc kubenswrapper[5043]: I1125 07:30:00.465429 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-9plc7" Nov 25 07:30:01 crc kubenswrapper[5043]: I1125 07:30:01.441301 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb"] Nov 25 07:30:01 crc kubenswrapper[5043]: W1125 07:30:01.448130 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee5be134_f74b_42f1_b99e_7ec2690c99c4.slice/crio-9582684d11919030d981ca290d82ab568c37f9c21382aa5aba355f978bcfbbc4 WatchSource:0}: Error finding container 9582684d11919030d981ca290d82ab568c37f9c21382aa5aba355f978bcfbbc4: Status 404 returned error can't find the container with id 9582684d11919030d981ca290d82ab568c37f9c21382aa5aba355f978bcfbbc4 Nov 25 07:30:01 crc kubenswrapper[5043]: I1125 07:30:01.456642 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb" event={"ID":"ee5be134-f74b-42f1-b99e-7ec2690c99c4","Type":"ContainerStarted","Data":"9582684d11919030d981ca290d82ab568c37f9c21382aa5aba355f978bcfbbc4"} Nov 25 07:30:01 crc kubenswrapper[5043]: I1125 07:30:01.461099 5043 generic.go:334] 
"Generic (PLEG): container finished" podID="8cac4e27-3572-4c74-86e1-4203f9404939" containerID="f83f872dcf6b2cd158d9f1524feb38d6ba022db6e5b415430aa04c0b0a330cc0" exitCode=0 Nov 25 07:30:01 crc kubenswrapper[5043]: I1125 07:30:01.461159 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2pjq" event={"ID":"8cac4e27-3572-4c74-86e1-4203f9404939","Type":"ContainerDied","Data":"f83f872dcf6b2cd158d9f1524feb38d6ba022db6e5b415430aa04c0b0a330cc0"} Nov 25 07:30:01 crc kubenswrapper[5043]: I1125 07:30:01.498802 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400930-9plc7"] Nov 25 07:30:01 crc kubenswrapper[5043]: W1125 07:30:01.536744 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31bedd18_64d9_4295_a081_c458536855b0.slice/crio-19060a29d2c51028f7b25b5f28ed1efe0f0d2931f8f42af7e0bc79a6d377b27f WatchSource:0}: Error finding container 19060a29d2c51028f7b25b5f28ed1efe0f0d2931f8f42af7e0bc79a6d377b27f: Status 404 returned error can't find the container with id 19060a29d2c51028f7b25b5f28ed1efe0f0d2931f8f42af7e0bc79a6d377b27f Nov 25 07:30:02 crc kubenswrapper[5043]: I1125 07:30:02.468561 5043 generic.go:334] "Generic (PLEG): container finished" podID="ee5be134-f74b-42f1-b99e-7ec2690c99c4" containerID="fd113f22c59429f465758a15d10c63c2bb2ae411e24926e4fed2072e75c6c1ba" exitCode=0 Nov 25 07:30:02 crc kubenswrapper[5043]: I1125 07:30:02.468644 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb" event={"ID":"ee5be134-f74b-42f1-b99e-7ec2690c99c4","Type":"ContainerDied","Data":"fd113f22c59429f465758a15d10c63c2bb2ae411e24926e4fed2072e75c6c1ba"} Nov 25 07:30:02 crc kubenswrapper[5043]: I1125 07:30:02.471897 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-w2pjq" event={"ID":"8cac4e27-3572-4c74-86e1-4203f9404939","Type":"ContainerStarted","Data":"a61d58d499cf509491abd91451e874c60106243593e9542711bc6d563eece427"} Nov 25 07:30:02 crc kubenswrapper[5043]: I1125 07:30:02.473990 5043 generic.go:334] "Generic (PLEG): container finished" podID="31bedd18-64d9-4295-a081-c458536855b0" containerID="8d1a52a9331fbe4ac0b618c502eef7712bebf7db0c8ee7edd606f98ea6047545" exitCode=0 Nov 25 07:30:02 crc kubenswrapper[5043]: I1125 07:30:02.474036 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-9plc7" event={"ID":"31bedd18-64d9-4295-a081-c458536855b0","Type":"ContainerDied","Data":"8d1a52a9331fbe4ac0b618c502eef7712bebf7db0c8ee7edd606f98ea6047545"} Nov 25 07:30:02 crc kubenswrapper[5043]: I1125 07:30:02.474069 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-9plc7" event={"ID":"31bedd18-64d9-4295-a081-c458536855b0","Type":"ContainerStarted","Data":"19060a29d2c51028f7b25b5f28ed1efe0f0d2931f8f42af7e0bc79a6d377b27f"} Nov 25 07:30:02 crc kubenswrapper[5043]: I1125 07:30:02.498989 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w2pjq" podStartSLOduration=3.070282907 podStartE2EDuration="7.498969686s" podCreationTimestamp="2025-11-25 07:29:55 +0000 UTC" firstStartedPulling="2025-11-25 07:29:57.418646754 +0000 UTC m=+861.586842475" lastFinishedPulling="2025-11-25 07:30:01.847333533 +0000 UTC m=+866.015529254" observedRunningTime="2025-11-25 07:30:02.498144665 +0000 UTC m=+866.666340396" watchObservedRunningTime="2025-11-25 07:30:02.498969686 +0000 UTC m=+866.667165417" Nov 25 07:30:03 crc kubenswrapper[5043]: I1125 07:30:03.849353 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-9plc7" Nov 25 07:30:03 crc kubenswrapper[5043]: I1125 07:30:03.966962 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31bedd18-64d9-4295-a081-c458536855b0-config-volume\") pod \"31bedd18-64d9-4295-a081-c458536855b0\" (UID: \"31bedd18-64d9-4295-a081-c458536855b0\") " Nov 25 07:30:03 crc kubenswrapper[5043]: I1125 07:30:03.967020 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kstxt\" (UniqueName: \"kubernetes.io/projected/31bedd18-64d9-4295-a081-c458536855b0-kube-api-access-kstxt\") pod \"31bedd18-64d9-4295-a081-c458536855b0\" (UID: \"31bedd18-64d9-4295-a081-c458536855b0\") " Nov 25 07:30:03 crc kubenswrapper[5043]: I1125 07:30:03.967083 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31bedd18-64d9-4295-a081-c458536855b0-secret-volume\") pod \"31bedd18-64d9-4295-a081-c458536855b0\" (UID: \"31bedd18-64d9-4295-a081-c458536855b0\") " Nov 25 07:30:03 crc kubenswrapper[5043]: I1125 07:30:03.967808 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31bedd18-64d9-4295-a081-c458536855b0-config-volume" (OuterVolumeSpecName: "config-volume") pod "31bedd18-64d9-4295-a081-c458536855b0" (UID: "31bedd18-64d9-4295-a081-c458536855b0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:30:03 crc kubenswrapper[5043]: I1125 07:30:03.972249 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31bedd18-64d9-4295-a081-c458536855b0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "31bedd18-64d9-4295-a081-c458536855b0" (UID: "31bedd18-64d9-4295-a081-c458536855b0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:30:03 crc kubenswrapper[5043]: I1125 07:30:03.972362 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31bedd18-64d9-4295-a081-c458536855b0-kube-api-access-kstxt" (OuterVolumeSpecName: "kube-api-access-kstxt") pod "31bedd18-64d9-4295-a081-c458536855b0" (UID: "31bedd18-64d9-4295-a081-c458536855b0"). InnerVolumeSpecName "kube-api-access-kstxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:30:04 crc kubenswrapper[5043]: I1125 07:30:04.068710 5043 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31bedd18-64d9-4295-a081-c458536855b0-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 07:30:04 crc kubenswrapper[5043]: I1125 07:30:04.069001 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kstxt\" (UniqueName: \"kubernetes.io/projected/31bedd18-64d9-4295-a081-c458536855b0-kube-api-access-kstxt\") on node \"crc\" DevicePath \"\"" Nov 25 07:30:04 crc kubenswrapper[5043]: I1125 07:30:04.069014 5043 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31bedd18-64d9-4295-a081-c458536855b0-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 07:30:04 crc kubenswrapper[5043]: I1125 07:30:04.489421 5043 generic.go:334] "Generic (PLEG): container finished" podID="ee5be134-f74b-42f1-b99e-7ec2690c99c4" containerID="18e4be4415a65fa5fde53ef608a6e7e6550b9153c552a2e8759fdbd8608ef2c6" exitCode=0 Nov 25 07:30:04 crc kubenswrapper[5043]: I1125 07:30:04.489531 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb" event={"ID":"ee5be134-f74b-42f1-b99e-7ec2690c99c4","Type":"ContainerDied","Data":"18e4be4415a65fa5fde53ef608a6e7e6550b9153c552a2e8759fdbd8608ef2c6"} Nov 25 07:30:04 crc kubenswrapper[5043]: I1125 
07:30:04.490659 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-9plc7" event={"ID":"31bedd18-64d9-4295-a081-c458536855b0","Type":"ContainerDied","Data":"19060a29d2c51028f7b25b5f28ed1efe0f0d2931f8f42af7e0bc79a6d377b27f"} Nov 25 07:30:04 crc kubenswrapper[5043]: I1125 07:30:04.490686 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19060a29d2c51028f7b25b5f28ed1efe0f0d2931f8f42af7e0bc79a6d377b27f" Nov 25 07:30:04 crc kubenswrapper[5043]: I1125 07:30:04.490745 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400930-9plc7" Nov 25 07:30:05 crc kubenswrapper[5043]: I1125 07:30:05.501932 5043 generic.go:334] "Generic (PLEG): container finished" podID="ee5be134-f74b-42f1-b99e-7ec2690c99c4" containerID="0feeb76cd8499faa6a597e6e950e0c891eac36a966b4e3f53a4674772fbcd7b0" exitCode=0 Nov 25 07:30:05 crc kubenswrapper[5043]: I1125 07:30:05.501977 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb" event={"ID":"ee5be134-f74b-42f1-b99e-7ec2690c99c4","Type":"ContainerDied","Data":"0feeb76cd8499faa6a597e6e950e0c891eac36a966b4e3f53a4674772fbcd7b0"} Nov 25 07:30:05 crc kubenswrapper[5043]: I1125 07:30:05.869336 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w2pjq" Nov 25 07:30:05 crc kubenswrapper[5043]: I1125 07:30:05.869683 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w2pjq" Nov 25 07:30:05 crc kubenswrapper[5043]: I1125 07:30:05.940085 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w2pjq" Nov 25 07:30:06 crc kubenswrapper[5043]: I1125 07:30:06.546268 5043 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w2pjq" Nov 25 07:30:06 crc kubenswrapper[5043]: I1125 07:30:06.782209 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb" Nov 25 07:30:06 crc kubenswrapper[5043]: I1125 07:30:06.909580 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v85vd\" (UniqueName: \"kubernetes.io/projected/ee5be134-f74b-42f1-b99e-7ec2690c99c4-kube-api-access-v85vd\") pod \"ee5be134-f74b-42f1-b99e-7ec2690c99c4\" (UID: \"ee5be134-f74b-42f1-b99e-7ec2690c99c4\") " Nov 25 07:30:06 crc kubenswrapper[5043]: I1125 07:30:06.910182 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee5be134-f74b-42f1-b99e-7ec2690c99c4-util\") pod \"ee5be134-f74b-42f1-b99e-7ec2690c99c4\" (UID: \"ee5be134-f74b-42f1-b99e-7ec2690c99c4\") " Nov 25 07:30:06 crc kubenswrapper[5043]: I1125 07:30:06.910229 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee5be134-f74b-42f1-b99e-7ec2690c99c4-bundle\") pod \"ee5be134-f74b-42f1-b99e-7ec2690c99c4\" (UID: \"ee5be134-f74b-42f1-b99e-7ec2690c99c4\") " Nov 25 07:30:06 crc kubenswrapper[5043]: I1125 07:30:06.911593 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee5be134-f74b-42f1-b99e-7ec2690c99c4-bundle" (OuterVolumeSpecName: "bundle") pod "ee5be134-f74b-42f1-b99e-7ec2690c99c4" (UID: "ee5be134-f74b-42f1-b99e-7ec2690c99c4"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:30:06 crc kubenswrapper[5043]: I1125 07:30:06.915707 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee5be134-f74b-42f1-b99e-7ec2690c99c4-kube-api-access-v85vd" (OuterVolumeSpecName: "kube-api-access-v85vd") pod "ee5be134-f74b-42f1-b99e-7ec2690c99c4" (UID: "ee5be134-f74b-42f1-b99e-7ec2690c99c4"). InnerVolumeSpecName "kube-api-access-v85vd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:30:06 crc kubenswrapper[5043]: I1125 07:30:06.940262 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee5be134-f74b-42f1-b99e-7ec2690c99c4-util" (OuterVolumeSpecName: "util") pod "ee5be134-f74b-42f1-b99e-7ec2690c99c4" (UID: "ee5be134-f74b-42f1-b99e-7ec2690c99c4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:30:07 crc kubenswrapper[5043]: I1125 07:30:07.012976 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v85vd\" (UniqueName: \"kubernetes.io/projected/ee5be134-f74b-42f1-b99e-7ec2690c99c4-kube-api-access-v85vd\") on node \"crc\" DevicePath \"\"" Nov 25 07:30:07 crc kubenswrapper[5043]: I1125 07:30:07.013028 5043 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee5be134-f74b-42f1-b99e-7ec2690c99c4-util\") on node \"crc\" DevicePath \"\"" Nov 25 07:30:07 crc kubenswrapper[5043]: I1125 07:30:07.013049 5043 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee5be134-f74b-42f1-b99e-7ec2690c99c4-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:30:07 crc kubenswrapper[5043]: I1125 07:30:07.518723 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb" 
event={"ID":"ee5be134-f74b-42f1-b99e-7ec2690c99c4","Type":"ContainerDied","Data":"9582684d11919030d981ca290d82ab568c37f9c21382aa5aba355f978bcfbbc4"} Nov 25 07:30:07 crc kubenswrapper[5043]: I1125 07:30:07.519156 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9582684d11919030d981ca290d82ab568c37f9c21382aa5aba355f978bcfbbc4" Nov 25 07:30:07 crc kubenswrapper[5043]: I1125 07:30:07.518851 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb" Nov 25 07:30:07 crc kubenswrapper[5043]: I1125 07:30:07.587902 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w2pjq"] Nov 25 07:30:08 crc kubenswrapper[5043]: I1125 07:30:08.142768 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r4dn4"] Nov 25 07:30:08 crc kubenswrapper[5043]: I1125 07:30:08.144134 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r4dn4" podUID="b55d9c72-26da-44b8-9feb-8a130596e568" containerName="registry-server" containerID="cri-o://21610d1b342225282b8db352fed7d8fe60dbee6006cb819a7b988c296c342209" gracePeriod=2 Nov 25 07:30:08 crc kubenswrapper[5043]: I1125 07:30:08.527058 5043 generic.go:334] "Generic (PLEG): container finished" podID="b55d9c72-26da-44b8-9feb-8a130596e568" containerID="21610d1b342225282b8db352fed7d8fe60dbee6006cb819a7b988c296c342209" exitCode=0 Nov 25 07:30:08 crc kubenswrapper[5043]: I1125 07:30:08.527134 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r4dn4" event={"ID":"b55d9c72-26da-44b8-9feb-8a130596e568","Type":"ContainerDied","Data":"21610d1b342225282b8db352fed7d8fe60dbee6006cb819a7b988c296c342209"} Nov 25 07:30:08 crc kubenswrapper[5043]: I1125 07:30:08.527185 5043 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-r4dn4" event={"ID":"b55d9c72-26da-44b8-9feb-8a130596e568","Type":"ContainerDied","Data":"6261a54d37dbfcdd9a4a8b735af29df7af38383c763d9c61f47c3c767276f220"} Nov 25 07:30:08 crc kubenswrapper[5043]: I1125 07:30:08.527202 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6261a54d37dbfcdd9a4a8b735af29df7af38383c763d9c61f47c3c767276f220" Nov 25 07:30:08 crc kubenswrapper[5043]: I1125 07:30:08.563390 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r4dn4" Nov 25 07:30:08 crc kubenswrapper[5043]: I1125 07:30:08.642007 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b55d9c72-26da-44b8-9feb-8a130596e568-utilities\") pod \"b55d9c72-26da-44b8-9feb-8a130596e568\" (UID: \"b55d9c72-26da-44b8-9feb-8a130596e568\") " Nov 25 07:30:08 crc kubenswrapper[5043]: I1125 07:30:08.642448 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b55d9c72-26da-44b8-9feb-8a130596e568-catalog-content\") pod \"b55d9c72-26da-44b8-9feb-8a130596e568\" (UID: \"b55d9c72-26da-44b8-9feb-8a130596e568\") " Nov 25 07:30:08 crc kubenswrapper[5043]: I1125 07:30:08.642493 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pglw\" (UniqueName: \"kubernetes.io/projected/b55d9c72-26da-44b8-9feb-8a130596e568-kube-api-access-8pglw\") pod \"b55d9c72-26da-44b8-9feb-8a130596e568\" (UID: \"b55d9c72-26da-44b8-9feb-8a130596e568\") " Nov 25 07:30:08 crc kubenswrapper[5043]: I1125 07:30:08.642921 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b55d9c72-26da-44b8-9feb-8a130596e568-utilities" (OuterVolumeSpecName: "utilities") pod "b55d9c72-26da-44b8-9feb-8a130596e568" (UID: 
"b55d9c72-26da-44b8-9feb-8a130596e568"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:30:08 crc kubenswrapper[5043]: I1125 07:30:08.647862 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b55d9c72-26da-44b8-9feb-8a130596e568-kube-api-access-8pglw" (OuterVolumeSpecName: "kube-api-access-8pglw") pod "b55d9c72-26da-44b8-9feb-8a130596e568" (UID: "b55d9c72-26da-44b8-9feb-8a130596e568"). InnerVolumeSpecName "kube-api-access-8pglw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:30:08 crc kubenswrapper[5043]: I1125 07:30:08.707951 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b55d9c72-26da-44b8-9feb-8a130596e568-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b55d9c72-26da-44b8-9feb-8a130596e568" (UID: "b55d9c72-26da-44b8-9feb-8a130596e568"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:30:08 crc kubenswrapper[5043]: I1125 07:30:08.744137 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b55d9c72-26da-44b8-9feb-8a130596e568-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:30:08 crc kubenswrapper[5043]: I1125 07:30:08.744183 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pglw\" (UniqueName: \"kubernetes.io/projected/b55d9c72-26da-44b8-9feb-8a130596e568-kube-api-access-8pglw\") on node \"crc\" DevicePath \"\"" Nov 25 07:30:08 crc kubenswrapper[5043]: I1125 07:30:08.744198 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b55d9c72-26da-44b8-9feb-8a130596e568-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:30:09 crc kubenswrapper[5043]: I1125 07:30:09.532208 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r4dn4" Nov 25 07:30:09 crc kubenswrapper[5043]: I1125 07:30:09.547342 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r4dn4"] Nov 25 07:30:09 crc kubenswrapper[5043]: I1125 07:30:09.556503 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r4dn4"] Nov 25 07:30:10 crc kubenswrapper[5043]: I1125 07:30:10.915412 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b567956b5-x2t4p"] Nov 25 07:30:10 crc kubenswrapper[5043]: E1125 07:30:10.916112 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5be134-f74b-42f1-b99e-7ec2690c99c4" containerName="util" Nov 25 07:30:10 crc kubenswrapper[5043]: I1125 07:30:10.916125 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5be134-f74b-42f1-b99e-7ec2690c99c4" containerName="util" Nov 25 07:30:10 crc kubenswrapper[5043]: E1125 07:30:10.916142 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b55d9c72-26da-44b8-9feb-8a130596e568" containerName="extract-utilities" Nov 25 07:30:10 crc kubenswrapper[5043]: I1125 07:30:10.916149 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="b55d9c72-26da-44b8-9feb-8a130596e568" containerName="extract-utilities" Nov 25 07:30:10 crc kubenswrapper[5043]: E1125 07:30:10.916161 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b55d9c72-26da-44b8-9feb-8a130596e568" containerName="extract-content" Nov 25 07:30:10 crc kubenswrapper[5043]: I1125 07:30:10.916168 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="b55d9c72-26da-44b8-9feb-8a130596e568" containerName="extract-content" Nov 25 07:30:10 crc kubenswrapper[5043]: E1125 07:30:10.916180 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31bedd18-64d9-4295-a081-c458536855b0" 
containerName="collect-profiles" Nov 25 07:30:10 crc kubenswrapper[5043]: I1125 07:30:10.916186 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="31bedd18-64d9-4295-a081-c458536855b0" containerName="collect-profiles" Nov 25 07:30:10 crc kubenswrapper[5043]: E1125 07:30:10.916199 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5be134-f74b-42f1-b99e-7ec2690c99c4" containerName="pull" Nov 25 07:30:10 crc kubenswrapper[5043]: I1125 07:30:10.916205 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5be134-f74b-42f1-b99e-7ec2690c99c4" containerName="pull" Nov 25 07:30:10 crc kubenswrapper[5043]: E1125 07:30:10.916219 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b55d9c72-26da-44b8-9feb-8a130596e568" containerName="registry-server" Nov 25 07:30:10 crc kubenswrapper[5043]: I1125 07:30:10.916224 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="b55d9c72-26da-44b8-9feb-8a130596e568" containerName="registry-server" Nov 25 07:30:10 crc kubenswrapper[5043]: E1125 07:30:10.916239 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5be134-f74b-42f1-b99e-7ec2690c99c4" containerName="extract" Nov 25 07:30:10 crc kubenswrapper[5043]: I1125 07:30:10.916244 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5be134-f74b-42f1-b99e-7ec2690c99c4" containerName="extract" Nov 25 07:30:10 crc kubenswrapper[5043]: I1125 07:30:10.916413 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee5be134-f74b-42f1-b99e-7ec2690c99c4" containerName="extract" Nov 25 07:30:10 crc kubenswrapper[5043]: I1125 07:30:10.916425 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="31bedd18-64d9-4295-a081-c458536855b0" containerName="collect-profiles" Nov 25 07:30:10 crc kubenswrapper[5043]: I1125 07:30:10.916441 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="b55d9c72-26da-44b8-9feb-8a130596e568" containerName="registry-server" Nov 25 07:30:10 
crc kubenswrapper[5043]: I1125 07:30:10.917066 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x2t4p" Nov 25 07:30:10 crc kubenswrapper[5043]: I1125 07:30:10.923038 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-x6rtb" Nov 25 07:30:10 crc kubenswrapper[5043]: I1125 07:30:10.933062 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b567956b5-x2t4p"] Nov 25 07:30:10 crc kubenswrapper[5043]: I1125 07:30:10.970816 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b55d9c72-26da-44b8-9feb-8a130596e568" path="/var/lib/kubelet/pods/b55d9c72-26da-44b8-9feb-8a130596e568/volumes" Nov 25 07:30:10 crc kubenswrapper[5043]: I1125 07:30:10.976223 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwzjt\" (UniqueName: \"kubernetes.io/projected/30018f1d-11c7-4b61-b5a3-60b8f9848f29-kube-api-access-kwzjt\") pod \"openstack-operator-controller-operator-7b567956b5-x2t4p\" (UID: \"30018f1d-11c7-4b61-b5a3-60b8f9848f29\") " pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x2t4p" Nov 25 07:30:11 crc kubenswrapper[5043]: I1125 07:30:11.077120 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwzjt\" (UniqueName: \"kubernetes.io/projected/30018f1d-11c7-4b61-b5a3-60b8f9848f29-kube-api-access-kwzjt\") pod \"openstack-operator-controller-operator-7b567956b5-x2t4p\" (UID: \"30018f1d-11c7-4b61-b5a3-60b8f9848f29\") " pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x2t4p" Nov 25 07:30:11 crc kubenswrapper[5043]: I1125 07:30:11.097480 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwzjt\" (UniqueName: 
\"kubernetes.io/projected/30018f1d-11c7-4b61-b5a3-60b8f9848f29-kube-api-access-kwzjt\") pod \"openstack-operator-controller-operator-7b567956b5-x2t4p\" (UID: \"30018f1d-11c7-4b61-b5a3-60b8f9848f29\") " pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x2t4p" Nov 25 07:30:11 crc kubenswrapper[5043]: I1125 07:30:11.236457 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x2t4p" Nov 25 07:30:11 crc kubenswrapper[5043]: I1125 07:30:11.745664 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b567956b5-x2t4p"] Nov 25 07:30:11 crc kubenswrapper[5043]: W1125 07:30:11.748861 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30018f1d_11c7_4b61_b5a3_60b8f9848f29.slice/crio-933909587dfd06f739666d8d4857d80473c18fef854b0a195bdf50301889dbd9 WatchSource:0}: Error finding container 933909587dfd06f739666d8d4857d80473c18fef854b0a195bdf50301889dbd9: Status 404 returned error can't find the container with id 933909587dfd06f739666d8d4857d80473c18fef854b0a195bdf50301889dbd9 Nov 25 07:30:12 crc kubenswrapper[5043]: I1125 07:30:12.551590 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x2t4p" event={"ID":"30018f1d-11c7-4b61-b5a3-60b8f9848f29","Type":"ContainerStarted","Data":"933909587dfd06f739666d8d4857d80473c18fef854b0a195bdf50301889dbd9"} Nov 25 07:30:16 crc kubenswrapper[5043]: I1125 07:30:16.144141 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qdbfk"] Nov 25 07:30:16 crc kubenswrapper[5043]: I1125 07:30:16.145888 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdbfk" Nov 25 07:30:16 crc kubenswrapper[5043]: I1125 07:30:16.163453 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdbfk"] Nov 25 07:30:16 crc kubenswrapper[5043]: I1125 07:30:16.276757 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6743f109-9dad-489a-9100-5ce2635690c8-utilities\") pod \"redhat-marketplace-qdbfk\" (UID: \"6743f109-9dad-489a-9100-5ce2635690c8\") " pod="openshift-marketplace/redhat-marketplace-qdbfk" Nov 25 07:30:16 crc kubenswrapper[5043]: I1125 07:30:16.276800 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6743f109-9dad-489a-9100-5ce2635690c8-catalog-content\") pod \"redhat-marketplace-qdbfk\" (UID: \"6743f109-9dad-489a-9100-5ce2635690c8\") " pod="openshift-marketplace/redhat-marketplace-qdbfk" Nov 25 07:30:16 crc kubenswrapper[5043]: I1125 07:30:16.276830 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh8bf\" (UniqueName: \"kubernetes.io/projected/6743f109-9dad-489a-9100-5ce2635690c8-kube-api-access-dh8bf\") pod \"redhat-marketplace-qdbfk\" (UID: \"6743f109-9dad-489a-9100-5ce2635690c8\") " pod="openshift-marketplace/redhat-marketplace-qdbfk" Nov 25 07:30:16 crc kubenswrapper[5043]: I1125 07:30:16.378103 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6743f109-9dad-489a-9100-5ce2635690c8-utilities\") pod \"redhat-marketplace-qdbfk\" (UID: \"6743f109-9dad-489a-9100-5ce2635690c8\") " pod="openshift-marketplace/redhat-marketplace-qdbfk" Nov 25 07:30:16 crc kubenswrapper[5043]: I1125 07:30:16.378143 5043 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6743f109-9dad-489a-9100-5ce2635690c8-catalog-content\") pod \"redhat-marketplace-qdbfk\" (UID: \"6743f109-9dad-489a-9100-5ce2635690c8\") " pod="openshift-marketplace/redhat-marketplace-qdbfk" Nov 25 07:30:16 crc kubenswrapper[5043]: I1125 07:30:16.378173 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh8bf\" (UniqueName: \"kubernetes.io/projected/6743f109-9dad-489a-9100-5ce2635690c8-kube-api-access-dh8bf\") pod \"redhat-marketplace-qdbfk\" (UID: \"6743f109-9dad-489a-9100-5ce2635690c8\") " pod="openshift-marketplace/redhat-marketplace-qdbfk" Nov 25 07:30:16 crc kubenswrapper[5043]: I1125 07:30:16.378953 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6743f109-9dad-489a-9100-5ce2635690c8-utilities\") pod \"redhat-marketplace-qdbfk\" (UID: \"6743f109-9dad-489a-9100-5ce2635690c8\") " pod="openshift-marketplace/redhat-marketplace-qdbfk" Nov 25 07:30:16 crc kubenswrapper[5043]: I1125 07:30:16.379136 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6743f109-9dad-489a-9100-5ce2635690c8-catalog-content\") pod \"redhat-marketplace-qdbfk\" (UID: \"6743f109-9dad-489a-9100-5ce2635690c8\") " pod="openshift-marketplace/redhat-marketplace-qdbfk" Nov 25 07:30:16 crc kubenswrapper[5043]: I1125 07:30:16.399512 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh8bf\" (UniqueName: \"kubernetes.io/projected/6743f109-9dad-489a-9100-5ce2635690c8-kube-api-access-dh8bf\") pod \"redhat-marketplace-qdbfk\" (UID: \"6743f109-9dad-489a-9100-5ce2635690c8\") " pod="openshift-marketplace/redhat-marketplace-qdbfk" Nov 25 07:30:16 crc kubenswrapper[5043]: I1125 07:30:16.475756 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdbfk" Nov 25 07:30:18 crc kubenswrapper[5043]: W1125 07:30:18.164307 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6743f109_9dad_489a_9100_5ce2635690c8.slice/crio-8a70c2d73e49d3850f0f55f080b4038cbed57b8dc56efeb49949cf59cc2c93cf WatchSource:0}: Error finding container 8a70c2d73e49d3850f0f55f080b4038cbed57b8dc56efeb49949cf59cc2c93cf: Status 404 returned error can't find the container with id 8a70c2d73e49d3850f0f55f080b4038cbed57b8dc56efeb49949cf59cc2c93cf Nov 25 07:30:18 crc kubenswrapper[5043]: I1125 07:30:18.164946 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdbfk"] Nov 25 07:30:18 crc kubenswrapper[5043]: I1125 07:30:18.585359 5043 generic.go:334] "Generic (PLEG): container finished" podID="6743f109-9dad-489a-9100-5ce2635690c8" containerID="7ccf74d86c89d098d76c009413ea77fb48117ef1c45423b941a50720b02db3d4" exitCode=0 Nov 25 07:30:18 crc kubenswrapper[5043]: I1125 07:30:18.585419 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdbfk" event={"ID":"6743f109-9dad-489a-9100-5ce2635690c8","Type":"ContainerDied","Data":"7ccf74d86c89d098d76c009413ea77fb48117ef1c45423b941a50720b02db3d4"} Nov 25 07:30:18 crc kubenswrapper[5043]: I1125 07:30:18.585831 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdbfk" event={"ID":"6743f109-9dad-489a-9100-5ce2635690c8","Type":"ContainerStarted","Data":"8a70c2d73e49d3850f0f55f080b4038cbed57b8dc56efeb49949cf59cc2c93cf"} Nov 25 07:30:18 crc kubenswrapper[5043]: I1125 07:30:18.588598 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x2t4p" 
event={"ID":"30018f1d-11c7-4b61-b5a3-60b8f9848f29","Type":"ContainerStarted","Data":"aa63945023a730072548b5ce371eec5b064d761ffe23644a59b69c652b675ec3"} Nov 25 07:30:18 crc kubenswrapper[5043]: I1125 07:30:18.588824 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x2t4p" Nov 25 07:30:18 crc kubenswrapper[5043]: I1125 07:30:18.637730 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x2t4p" podStartSLOduration=2.5605994279999997 podStartE2EDuration="8.637703085s" podCreationTimestamp="2025-11-25 07:30:10 +0000 UTC" firstStartedPulling="2025-11-25 07:30:11.751282388 +0000 UTC m=+875.919478099" lastFinishedPulling="2025-11-25 07:30:17.828386025 +0000 UTC m=+881.996581756" observedRunningTime="2025-11-25 07:30:18.634059737 +0000 UTC m=+882.802255458" watchObservedRunningTime="2025-11-25 07:30:18.637703085 +0000 UTC m=+882.805898816" Nov 25 07:30:19 crc kubenswrapper[5043]: I1125 07:30:19.596464 5043 generic.go:334] "Generic (PLEG): container finished" podID="6743f109-9dad-489a-9100-5ce2635690c8" containerID="5d6a4fa425c61d41afd9fcb3ae3845ef34bd54ac6505cc1f733dd135f91bb3f7" exitCode=0 Nov 25 07:30:19 crc kubenswrapper[5043]: I1125 07:30:19.596502 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdbfk" event={"ID":"6743f109-9dad-489a-9100-5ce2635690c8","Type":"ContainerDied","Data":"5d6a4fa425c61d41afd9fcb3ae3845ef34bd54ac6505cc1f733dd135f91bb3f7"} Nov 25 07:30:21 crc kubenswrapper[5043]: I1125 07:30:21.613533 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdbfk" event={"ID":"6743f109-9dad-489a-9100-5ce2635690c8","Type":"ContainerStarted","Data":"0cee9f6e13b9a904bf4465ef36ac020cb1538c2d1e0b71e7525712576182e45f"} Nov 25 07:30:21 crc kubenswrapper[5043]: I1125 07:30:21.633198 5043 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qdbfk" podStartSLOduration=3.193937431 podStartE2EDuration="5.633184103s" podCreationTimestamp="2025-11-25 07:30:16 +0000 UTC" firstStartedPulling="2025-11-25 07:30:18.586474821 +0000 UTC m=+882.754670542" lastFinishedPulling="2025-11-25 07:30:21.025721493 +0000 UTC m=+885.193917214" observedRunningTime="2025-11-25 07:30:21.629956347 +0000 UTC m=+885.798152078" watchObservedRunningTime="2025-11-25 07:30:21.633184103 +0000 UTC m=+885.801379824" Nov 25 07:30:26 crc kubenswrapper[5043]: I1125 07:30:26.476672 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qdbfk" Nov 25 07:30:26 crc kubenswrapper[5043]: I1125 07:30:26.477417 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qdbfk" Nov 25 07:30:26 crc kubenswrapper[5043]: I1125 07:30:26.533920 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qdbfk" Nov 25 07:30:26 crc kubenswrapper[5043]: I1125 07:30:26.687923 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qdbfk" Nov 25 07:30:29 crc kubenswrapper[5043]: I1125 07:30:29.539578 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdbfk"] Nov 25 07:30:29 crc kubenswrapper[5043]: I1125 07:30:29.540255 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qdbfk" podUID="6743f109-9dad-489a-9100-5ce2635690c8" containerName="registry-server" containerID="cri-o://0cee9f6e13b9a904bf4465ef36ac020cb1538c2d1e0b71e7525712576182e45f" gracePeriod=2 Nov 25 07:30:29 crc kubenswrapper[5043]: I1125 07:30:29.668837 5043 generic.go:334] "Generic (PLEG): container finished" 
podID="6743f109-9dad-489a-9100-5ce2635690c8" containerID="0cee9f6e13b9a904bf4465ef36ac020cb1538c2d1e0b71e7525712576182e45f" exitCode=0 Nov 25 07:30:29 crc kubenswrapper[5043]: I1125 07:30:29.668883 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdbfk" event={"ID":"6743f109-9dad-489a-9100-5ce2635690c8","Type":"ContainerDied","Data":"0cee9f6e13b9a904bf4465ef36ac020cb1538c2d1e0b71e7525712576182e45f"} Nov 25 07:30:29 crc kubenswrapper[5043]: I1125 07:30:29.909518 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdbfk" Nov 25 07:30:30 crc kubenswrapper[5043]: I1125 07:30:30.062965 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh8bf\" (UniqueName: \"kubernetes.io/projected/6743f109-9dad-489a-9100-5ce2635690c8-kube-api-access-dh8bf\") pod \"6743f109-9dad-489a-9100-5ce2635690c8\" (UID: \"6743f109-9dad-489a-9100-5ce2635690c8\") " Nov 25 07:30:30 crc kubenswrapper[5043]: I1125 07:30:30.063081 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6743f109-9dad-489a-9100-5ce2635690c8-utilities\") pod \"6743f109-9dad-489a-9100-5ce2635690c8\" (UID: \"6743f109-9dad-489a-9100-5ce2635690c8\") " Nov 25 07:30:30 crc kubenswrapper[5043]: I1125 07:30:30.063159 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6743f109-9dad-489a-9100-5ce2635690c8-catalog-content\") pod \"6743f109-9dad-489a-9100-5ce2635690c8\" (UID: \"6743f109-9dad-489a-9100-5ce2635690c8\") " Nov 25 07:30:30 crc kubenswrapper[5043]: I1125 07:30:30.064566 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6743f109-9dad-489a-9100-5ce2635690c8-utilities" (OuterVolumeSpecName: "utilities") pod 
"6743f109-9dad-489a-9100-5ce2635690c8" (UID: "6743f109-9dad-489a-9100-5ce2635690c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:30:30 crc kubenswrapper[5043]: I1125 07:30:30.072916 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6743f109-9dad-489a-9100-5ce2635690c8-kube-api-access-dh8bf" (OuterVolumeSpecName: "kube-api-access-dh8bf") pod "6743f109-9dad-489a-9100-5ce2635690c8" (UID: "6743f109-9dad-489a-9100-5ce2635690c8"). InnerVolumeSpecName "kube-api-access-dh8bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:30:30 crc kubenswrapper[5043]: I1125 07:30:30.087025 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6743f109-9dad-489a-9100-5ce2635690c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6743f109-9dad-489a-9100-5ce2635690c8" (UID: "6743f109-9dad-489a-9100-5ce2635690c8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:30:30 crc kubenswrapper[5043]: I1125 07:30:30.165231 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh8bf\" (UniqueName: \"kubernetes.io/projected/6743f109-9dad-489a-9100-5ce2635690c8-kube-api-access-dh8bf\") on node \"crc\" DevicePath \"\"" Nov 25 07:30:30 crc kubenswrapper[5043]: I1125 07:30:30.165265 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6743f109-9dad-489a-9100-5ce2635690c8-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:30:30 crc kubenswrapper[5043]: I1125 07:30:30.165274 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6743f109-9dad-489a-9100-5ce2635690c8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:30:30 crc kubenswrapper[5043]: I1125 07:30:30.678678 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdbfk" event={"ID":"6743f109-9dad-489a-9100-5ce2635690c8","Type":"ContainerDied","Data":"8a70c2d73e49d3850f0f55f080b4038cbed57b8dc56efeb49949cf59cc2c93cf"} Nov 25 07:30:30 crc kubenswrapper[5043]: I1125 07:30:30.678733 5043 scope.go:117] "RemoveContainer" containerID="0cee9f6e13b9a904bf4465ef36ac020cb1538c2d1e0b71e7525712576182e45f" Nov 25 07:30:30 crc kubenswrapper[5043]: I1125 07:30:30.678856 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdbfk" Nov 25 07:30:30 crc kubenswrapper[5043]: I1125 07:30:30.702481 5043 scope.go:117] "RemoveContainer" containerID="5d6a4fa425c61d41afd9fcb3ae3845ef34bd54ac6505cc1f733dd135f91bb3f7" Nov 25 07:30:30 crc kubenswrapper[5043]: I1125 07:30:30.717545 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdbfk"] Nov 25 07:30:30 crc kubenswrapper[5043]: I1125 07:30:30.726483 5043 scope.go:117] "RemoveContainer" containerID="7ccf74d86c89d098d76c009413ea77fb48117ef1c45423b941a50720b02db3d4" Nov 25 07:30:30 crc kubenswrapper[5043]: I1125 07:30:30.733183 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdbfk"] Nov 25 07:30:30 crc kubenswrapper[5043]: I1125 07:30:30.972536 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6743f109-9dad-489a-9100-5ce2635690c8" path="/var/lib/kubelet/pods/6743f109-9dad-489a-9100-5ce2635690c8/volumes" Nov 25 07:30:31 crc kubenswrapper[5043]: I1125 07:30:31.239260 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x2t4p" Nov 25 07:30:39 crc kubenswrapper[5043]: I1125 07:30:39.008914 5043 scope.go:117] "RemoveContainer" containerID="21610d1b342225282b8db352fed7d8fe60dbee6006cb819a7b988c296c342209" Nov 25 07:30:39 crc kubenswrapper[5043]: I1125 07:30:39.033973 5043 scope.go:117] "RemoveContainer" containerID="c3d407a6dd9def2a149ff57ec8b25a5b92e69a10ede2b62d4dd9c680fcd6768c" Nov 25 07:30:39 crc kubenswrapper[5043]: I1125 07:30:39.056178 5043 scope.go:117] "RemoveContainer" containerID="68007c185cc00e04146122ec3d2471029b2c41e146671ae29663cdd1fbaa396f" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.245189 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4"] Nov 25 07:30:55 
crc kubenswrapper[5043]: E1125 07:30:55.246105 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6743f109-9dad-489a-9100-5ce2635690c8" containerName="extract-utilities" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.246122 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="6743f109-9dad-489a-9100-5ce2635690c8" containerName="extract-utilities" Nov 25 07:30:55 crc kubenswrapper[5043]: E1125 07:30:55.246141 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6743f109-9dad-489a-9100-5ce2635690c8" containerName="extract-content" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.246149 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="6743f109-9dad-489a-9100-5ce2635690c8" containerName="extract-content" Nov 25 07:30:55 crc kubenswrapper[5043]: E1125 07:30:55.246163 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6743f109-9dad-489a-9100-5ce2635690c8" containerName="registry-server" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.246172 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="6743f109-9dad-489a-9100-5ce2635690c8" containerName="registry-server" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.246329 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="6743f109-9dad-489a-9100-5ce2635690c8" containerName="registry-server" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.247100 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.251813 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-86tg6" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.255994 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.257281 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.259230 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.259653 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-2qjzv" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.274567 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.291787 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.292915 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.298071 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-ckvz7" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.307937 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.308955 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.321709 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.330151 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-89lz4" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.352656 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-l77gb"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.353630 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.355224 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-lwhxq" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.356167 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-l77gb"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.368511 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.369850 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.371007 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.378679 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.379010 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-vvvtn" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.379353 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55579\" (UniqueName: \"kubernetes.io/projected/cdc9a1bf-b6d9-4a36-bcf8-55f87525da45-kube-api-access-55579\") pod \"cinder-operator-controller-manager-79856dc55c-pnq4k\" (UID: \"cdc9a1bf-b6d9-4a36-bcf8-55f87525da45\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.386520 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2wp8\" (UniqueName: \"kubernetes.io/projected/e5c62587-28b4-4a1e-8b73-ee9624ca7163-kube-api-access-h2wp8\") pod \"glance-operator-controller-manager-68b95954c9-nnpzz\" (UID: \"e5c62587-28b4-4a1e-8b73-ee9624ca7163\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.386709 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pj54\" (UniqueName: \"kubernetes.io/projected/e020a857-3730-44f5-8e98-3e59868fbde6-kube-api-access-9pj54\") pod \"designate-operator-controller-manager-7d695c9b56-5mp5h\" (UID: \"e020a857-3730-44f5-8e98-3e59868fbde6\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 
07:30:55.386807 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8snvc\" (UniqueName: \"kubernetes.io/projected/d9a368e6-f4bb-4896-9a2d-f7ceed65e933-kube-api-access-8snvc\") pod \"barbican-operator-controller-manager-86dc4d89c8-dtcj4\" (UID: \"d9a368e6-f4bb-4896-9a2d-f7ceed65e933\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.441322 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.444889 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.447876 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-svd2g" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.450182 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.451392 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.455708 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.457006 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.458928 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.459183 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-kfv9l" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.470004 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.470073 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-mgfgz" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.474671 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.483749 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.484780 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.487575 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55579\" (UniqueName: \"kubernetes.io/projected/cdc9a1bf-b6d9-4a36-bcf8-55f87525da45-kube-api-access-55579\") pod \"cinder-operator-controller-manager-79856dc55c-pnq4k\" (UID: \"cdc9a1bf-b6d9-4a36-bcf8-55f87525da45\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.487642 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2wp8\" (UniqueName: \"kubernetes.io/projected/e5c62587-28b4-4a1e-8b73-ee9624ca7163-kube-api-access-h2wp8\") pod \"glance-operator-controller-manager-68b95954c9-nnpzz\" (UID: \"e5c62587-28b4-4a1e-8b73-ee9624ca7163\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.487664 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pj54\" (UniqueName: \"kubernetes.io/projected/e020a857-3730-44f5-8e98-3e59868fbde6-kube-api-access-9pj54\") pod \"designate-operator-controller-manager-7d695c9b56-5mp5h\" (UID: \"e020a857-3730-44f5-8e98-3e59868fbde6\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.487692 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7jwt\" (UniqueName: \"kubernetes.io/projected/8a93d5b1-742c-4a37-94ef-a60ffb008520-kube-api-access-v7jwt\") pod \"heat-operator-controller-manager-774b86978c-l77gb\" (UID: \"8a93d5b1-742c-4a37-94ef-a60ffb008520\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" Nov 25 07:30:55 crc 
kubenswrapper[5043]: I1125 07:30:55.487721 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8snvc\" (UniqueName: \"kubernetes.io/projected/d9a368e6-f4bb-4896-9a2d-f7ceed65e933-kube-api-access-8snvc\") pod \"barbican-operator-controller-manager-86dc4d89c8-dtcj4\" (UID: \"d9a368e6-f4bb-4896-9a2d-f7ceed65e933\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.487748 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsz44\" (UniqueName: \"kubernetes.io/projected/c20803a7-e9a9-441a-9e61-84673f3c02e8-kube-api-access-gsz44\") pod \"horizon-operator-controller-manager-68c9694994-wmkmw\" (UID: \"c20803a7-e9a9-441a-9e61-84673f3c02e8\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.488394 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-4n5v4" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.497978 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.515660 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pj54\" (UniqueName: \"kubernetes.io/projected/e020a857-3730-44f5-8e98-3e59868fbde6-kube-api-access-9pj54\") pod \"designate-operator-controller-manager-7d695c9b56-5mp5h\" (UID: \"e020a857-3730-44f5-8e98-3e59868fbde6\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.519414 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tdzr2"] Nov 25 07:30:55 crc 
kubenswrapper[5043]: I1125 07:30:55.523711 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tdzr2" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.524233 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.524947 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8snvc\" (UniqueName: \"kubernetes.io/projected/d9a368e6-f4bb-4896-9a2d-f7ceed65e933-kube-api-access-8snvc\") pod \"barbican-operator-controller-manager-86dc4d89c8-dtcj4\" (UID: \"d9a368e6-f4bb-4896-9a2d-f7ceed65e933\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.525413 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55579\" (UniqueName: \"kubernetes.io/projected/cdc9a1bf-b6d9-4a36-bcf8-55f87525da45-kube-api-access-55579\") pod \"cinder-operator-controller-manager-79856dc55c-pnq4k\" (UID: \"cdc9a1bf-b6d9-4a36-bcf8-55f87525da45\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.525906 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-zqjvj" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.529267 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2wp8\" (UniqueName: \"kubernetes.io/projected/e5c62587-28b4-4a1e-8b73-ee9624ca7163-kube-api-access-h2wp8\") pod \"glance-operator-controller-manager-68b95954c9-nnpzz\" (UID: \"e5c62587-28b4-4a1e-8b73-ee9624ca7163\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" Nov 25 07:30:55 crc kubenswrapper[5043]: 
I1125 07:30:55.549161 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tdzr2"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.567830 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l5vz2"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.569122 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l5vz2" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.573189 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-vd46s" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.573460 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.581678 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.583090 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.584586 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2hkbq" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.590033 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.592458 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7jwt\" (UniqueName: \"kubernetes.io/projected/8a93d5b1-742c-4a37-94ef-a60ffb008520-kube-api-access-v7jwt\") pod \"heat-operator-controller-manager-774b86978c-l77gb\" (UID: \"8a93d5b1-742c-4a37-94ef-a60ffb008520\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.592502 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mpt4\" (UniqueName: \"kubernetes.io/projected/92e57762-522f-4a9d-8b03-732ba4dad5c1-kube-api-access-7mpt4\") pod \"infra-operator-controller-manager-d5cc86f4b-x8q8x\" (UID: \"92e57762-522f-4a9d-8b03-732ba4dad5c1\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.592547 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwqfg\" (UniqueName: \"kubernetes.io/projected/b7005e58-64d2-470b-a3e7-22b67b7fbfb3-kube-api-access-jwqfg\") pod \"ironic-operator-controller-manager-5bfcdc958c-sgz96\" (UID: \"b7005e58-64d2-470b-a3e7-22b67b7fbfb3\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.592590 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsz44\" (UniqueName: \"kubernetes.io/projected/c20803a7-e9a9-441a-9e61-84673f3c02e8-kube-api-access-gsz44\") pod \"horizon-operator-controller-manager-68c9694994-wmkmw\" (UID: \"c20803a7-e9a9-441a-9e61-84673f3c02e8\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" Nov 25 07:30:55 crc 
kubenswrapper[5043]: I1125 07:30:55.592645 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsdml\" (UniqueName: \"kubernetes.io/projected/9c9e4471-0205-478a-8717-be36a19d2a02-kube-api-access-hsdml\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-tdzr2\" (UID: \"9c9e4471-0205-478a-8717-be36a19d2a02\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tdzr2" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.592682 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/92e57762-522f-4a9d-8b03-732ba4dad5c1-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-x8q8x\" (UID: \"92e57762-522f-4a9d-8b03-732ba4dad5c1\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.592724 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct5xm\" (UniqueName: \"kubernetes.io/projected/c924fa47-53fb-4edc-8214-667ba1858ca2-kube-api-access-ct5xm\") pod \"manila-operator-controller-manager-58bb8d67cc-xx8rb\" (UID: \"c924fa47-53fb-4edc-8214-667ba1858ca2\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.592748 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgh7x\" (UniqueName: \"kubernetes.io/projected/ff874d31-8e5a-4c0b-8f9c-e63513a00483-kube-api-access-hgh7x\") pod \"keystone-operator-controller-manager-748dc6576f-gvwj8\" (UID: \"ff874d31-8e5a-4c0b-8f9c-e63513a00483\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.622380 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-v7jwt\" (UniqueName: \"kubernetes.io/projected/8a93d5b1-742c-4a37-94ef-a60ffb008520-kube-api-access-v7jwt\") pod \"heat-operator-controller-manager-774b86978c-l77gb\" (UID: \"8a93d5b1-742c-4a37-94ef-a60ffb008520\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.622659 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-dxd2x"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.624070 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-dxd2x" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.626704 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l5vz2"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.627082 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-kxbhn" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.632037 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.642479 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsz44\" (UniqueName: \"kubernetes.io/projected/c20803a7-e9a9-441a-9e61-84673f3c02e8-kube-api-access-gsz44\") pod \"horizon-operator-controller-manager-68c9694994-wmkmw\" (UID: \"c20803a7-e9a9-441a-9e61-84673f3c02e8\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.647856 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-dxd2x"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.654434 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.679895 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.686546 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.686563 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.687947 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.690853 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-ngscn" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.692094 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-h9jgk"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.693759 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mpt4\" (UniqueName: \"kubernetes.io/projected/92e57762-522f-4a9d-8b03-732ba4dad5c1-kube-api-access-7mpt4\") pod \"infra-operator-controller-manager-d5cc86f4b-x8q8x\" (UID: \"92e57762-522f-4a9d-8b03-732ba4dad5c1\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.693801 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwndf\" (UniqueName: \"kubernetes.io/projected/020c7247-0b68-419b-b97f-f7b0ea800142-kube-api-access-qwndf\") pod \"octavia-operator-controller-manager-fd75fd47d-dxd2x\" (UID: \"020c7247-0b68-419b-b97f-f7b0ea800142\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-dxd2x" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.693833 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwqfg\" (UniqueName: \"kubernetes.io/projected/b7005e58-64d2-470b-a3e7-22b67b7fbfb3-kube-api-access-jwqfg\") pod \"ironic-operator-controller-manager-5bfcdc958c-sgz96\" (UID: \"b7005e58-64d2-470b-a3e7-22b67b7fbfb3\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.693879 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hsdml\" (UniqueName: \"kubernetes.io/projected/9c9e4471-0205-478a-8717-be36a19d2a02-kube-api-access-hsdml\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-tdzr2\" (UID: \"9c9e4471-0205-478a-8717-be36a19d2a02\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tdzr2" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.693919 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/92e57762-522f-4a9d-8b03-732ba4dad5c1-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-x8q8x\" (UID: \"92e57762-522f-4a9d-8b03-732ba4dad5c1\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.694000 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct5xm\" (UniqueName: \"kubernetes.io/projected/c924fa47-53fb-4edc-8214-667ba1858ca2-kube-api-access-ct5xm\") pod \"manila-operator-controller-manager-58bb8d67cc-xx8rb\" (UID: \"c924fa47-53fb-4edc-8214-667ba1858ca2\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.694030 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjfg9\" (UniqueName: \"kubernetes.io/projected/bb800a2f-1864-47be-931b-7b99f7c7354f-kube-api-access-mjfg9\") pod \"neutron-operator-controller-manager-7c57c8bbc4-l5vz2\" (UID: \"bb800a2f-1864-47be-931b-7b99f7c7354f\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l5vz2" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.694061 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgh7x\" (UniqueName: \"kubernetes.io/projected/ff874d31-8e5a-4c0b-8f9c-e63513a00483-kube-api-access-hgh7x\") pod 
\"keystone-operator-controller-manager-748dc6576f-gvwj8\" (UID: \"ff874d31-8e5a-4c0b-8f9c-e63513a00483\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.694104 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wkk9\" (UniqueName: \"kubernetes.io/projected/a3d7b5dc-2ced-4ac6-bdad-cd86342616a8-kube-api-access-7wkk9\") pod \"nova-operator-controller-manager-79556f57fc-m9bmz\" (UID: \"a3d7b5dc-2ced-4ac6-bdad-cd86342616a8\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.694148 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-h9jgk" Nov 25 07:30:55 crc kubenswrapper[5043]: E1125 07:30:55.694316 5043 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 25 07:30:55 crc kubenswrapper[5043]: E1125 07:30:55.694424 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92e57762-522f-4a9d-8b03-732ba4dad5c1-cert podName:92e57762-522f-4a9d-8b03-732ba4dad5c1 nodeName:}" failed. No retries permitted until 2025-11-25 07:30:56.194386101 +0000 UTC m=+920.362581822 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/92e57762-522f-4a9d-8b03-732ba4dad5c1-cert") pod "infra-operator-controller-manager-d5cc86f4b-x8q8x" (UID: "92e57762-522f-4a9d-8b03-732ba4dad5c1") : secret "infra-operator-webhook-server-cert" not found Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.698391 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.699844 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.709443 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.718537 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.720821 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.721487 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-47p9v" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.723377 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-wq2z6" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.731212 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mpt4\" (UniqueName: \"kubernetes.io/projected/92e57762-522f-4a9d-8b03-732ba4dad5c1-kube-api-access-7mpt4\") pod \"infra-operator-controller-manager-d5cc86f4b-x8q8x\" (UID: \"92e57762-522f-4a9d-8b03-732ba4dad5c1\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.733637 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgh7x\" (UniqueName: \"kubernetes.io/projected/ff874d31-8e5a-4c0b-8f9c-e63513a00483-kube-api-access-hgh7x\") pod \"keystone-operator-controller-manager-748dc6576f-gvwj8\" (UID: \"ff874d31-8e5a-4c0b-8f9c-e63513a00483\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.733959 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwqfg\" (UniqueName: \"kubernetes.io/projected/b7005e58-64d2-470b-a3e7-22b67b7fbfb3-kube-api-access-jwqfg\") pod \"ironic-operator-controller-manager-5bfcdc958c-sgz96\" (UID: \"b7005e58-64d2-470b-a3e7-22b67b7fbfb3\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.739063 5043 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ct5xm\" (UniqueName: \"kubernetes.io/projected/c924fa47-53fb-4edc-8214-667ba1858ca2-kube-api-access-ct5xm\") pod \"manila-operator-controller-manager-58bb8d67cc-xx8rb\" (UID: \"c924fa47-53fb-4edc-8214-667ba1858ca2\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.744029 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsdml\" (UniqueName: \"kubernetes.io/projected/9c9e4471-0205-478a-8717-be36a19d2a02-kube-api-access-hsdml\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-tdzr2\" (UID: \"9c9e4471-0205-478a-8717-be36a19d2a02\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tdzr2" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.756648 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-h9jgk"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.765712 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.767733 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.772093 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-7tlqz" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.772556 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.772860 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.795915 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg7tt\" (UniqueName: \"kubernetes.io/projected/c0627b3a-26de-453c-ab7f-de79dae6c2fc-kube-api-access-mg7tt\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-h9jgk\" (UID: \"c0627b3a-26de-453c-ab7f-de79dae6c2fc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-h9jgk" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.795977 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwndf\" (UniqueName: \"kubernetes.io/projected/020c7247-0b68-419b-b97f-f7b0ea800142-kube-api-access-qwndf\") pod \"octavia-operator-controller-manager-fd75fd47d-dxd2x\" (UID: \"020c7247-0b68-419b-b97f-f7b0ea800142\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-dxd2x" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.796011 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2977\" (UniqueName: \"kubernetes.io/projected/869f93a1-d6e7-46ff-a60f-0e997412a2fa-kube-api-access-s2977\") pod \"placement-operator-controller-manager-5db546f9d9-6w2db\" (UID: \"869f93a1-d6e7-46ff-a60f-0e997412a2fa\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.796116 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjfg9\" (UniqueName: 
\"kubernetes.io/projected/bb800a2f-1864-47be-931b-7b99f7c7354f-kube-api-access-mjfg9\") pod \"neutron-operator-controller-manager-7c57c8bbc4-l5vz2\" (UID: \"bb800a2f-1864-47be-931b-7b99f7c7354f\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l5vz2" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.796157 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84ccs\" (UniqueName: \"kubernetes.io/projected/d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2-kube-api-access-84ccs\") pod \"ovn-operator-controller-manager-66cf5c67ff-d5ffq\" (UID: \"d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.796180 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0627b3a-26de-453c-ab7f-de79dae6c2fc-cert\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-h9jgk\" (UID: \"c0627b3a-26de-453c-ab7f-de79dae6c2fc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-h9jgk" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.796209 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wkk9\" (UniqueName: \"kubernetes.io/projected/a3d7b5dc-2ced-4ac6-bdad-cd86342616a8-kube-api-access-7wkk9\") pod \"nova-operator-controller-manager-79556f57fc-m9bmz\" (UID: \"a3d7b5dc-2ced-4ac6-bdad-cd86342616a8\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.798634 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.806781 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.808476 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.818835 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-mk7wm"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.820832 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-mk7wm" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.822446 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-x47lt" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.844528 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjfg9\" (UniqueName: \"kubernetes.io/projected/bb800a2f-1864-47be-931b-7b99f7c7354f-kube-api-access-mjfg9\") pod \"neutron-operator-controller-manager-7c57c8bbc4-l5vz2\" (UID: \"bb800a2f-1864-47be-931b-7b99f7c7354f\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l5vz2" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.857434 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-mk7wm"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.862141 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwndf\" (UniqueName: 
\"kubernetes.io/projected/020c7247-0b68-419b-b97f-f7b0ea800142-kube-api-access-qwndf\") pod \"octavia-operator-controller-manager-fd75fd47d-dxd2x\" (UID: \"020c7247-0b68-419b-b97f-f7b0ea800142\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-dxd2x" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.866447 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wkk9\" (UniqueName: \"kubernetes.io/projected/a3d7b5dc-2ced-4ac6-bdad-cd86342616a8-kube-api-access-7wkk9\") pod \"nova-operator-controller-manager-79556f57fc-m9bmz\" (UID: \"a3d7b5dc-2ced-4ac6-bdad-cd86342616a8\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.884705 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.886065 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.888114 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-wfqxk" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.893395 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tdzr2" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.898169 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.899232 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2977\" (UniqueName: \"kubernetes.io/projected/869f93a1-d6e7-46ff-a60f-0e997412a2fa-kube-api-access-s2977\") pod \"placement-operator-controller-manager-5db546f9d9-6w2db\" (UID: \"869f93a1-d6e7-46ff-a60f-0e997412a2fa\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.899350 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgvxt\" (UniqueName: \"kubernetes.io/projected/d643e47d-246d-4551-a63c-9b9374e684b2-kube-api-access-rgvxt\") pod \"telemetry-operator-controller-manager-567f98c9d-mk7wm\" (UID: \"d643e47d-246d-4551-a63c-9b9374e684b2\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-mk7wm" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.899406 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84ccs\" (UniqueName: \"kubernetes.io/projected/d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2-kube-api-access-84ccs\") pod \"ovn-operator-controller-manager-66cf5c67ff-d5ffq\" (UID: \"d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.899426 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0627b3a-26de-453c-ab7f-de79dae6c2fc-cert\") pod 
\"openstack-baremetal-operator-controller-manager-b58f89467-h9jgk\" (UID: \"c0627b3a-26de-453c-ab7f-de79dae6c2fc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-h9jgk" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.899445 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8xvs\" (UniqueName: \"kubernetes.io/projected/8ea2c827-762f-437d-ad30-a3568d7a4af1-kube-api-access-g8xvs\") pod \"swift-operator-controller-manager-6fdc4fcf86-gmcsx\" (UID: \"8ea2c827-762f-437d-ad30-a3568d7a4af1\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.899479 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg7tt\" (UniqueName: \"kubernetes.io/projected/c0627b3a-26de-453c-ab7f-de79dae6c2fc-kube-api-access-mg7tt\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-h9jgk\" (UID: \"c0627b3a-26de-453c-ab7f-de79dae6c2fc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-h9jgk" Nov 25 07:30:55 crc kubenswrapper[5043]: E1125 07:30:55.899593 5043 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 07:30:55 crc kubenswrapper[5043]: E1125 07:30:55.899651 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0627b3a-26de-453c-ab7f-de79dae6c2fc-cert podName:c0627b3a-26de-453c-ab7f-de79dae6c2fc nodeName:}" failed. No retries permitted until 2025-11-25 07:30:56.399637332 +0000 UTC m=+920.567833053 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0627b3a-26de-453c-ab7f-de79dae6c2fc-cert") pod "openstack-baremetal-operator-controller-manager-b58f89467-h9jgk" (UID: "c0627b3a-26de-453c-ab7f-de79dae6c2fc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.922333 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-54g5x"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.923669 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-54g5x" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.925913 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-54g5x"] Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.928244 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-76vjz" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.951031 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2977\" (UniqueName: \"kubernetes.io/projected/869f93a1-d6e7-46ff-a60f-0e997412a2fa-kube-api-access-s2977\") pod \"placement-operator-controller-manager-5db546f9d9-6w2db\" (UID: \"869f93a1-d6e7-46ff-a60f-0e997412a2fa\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.983510 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84ccs\" (UniqueName: \"kubernetes.io/projected/d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2-kube-api-access-84ccs\") pod \"ovn-operator-controller-manager-66cf5c67ff-d5ffq\" (UID: \"d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2\") " 
pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.985208 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg7tt\" (UniqueName: \"kubernetes.io/projected/c0627b3a-26de-453c-ab7f-de79dae6c2fc-kube-api-access-mg7tt\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-h9jgk\" (UID: \"c0627b3a-26de-453c-ab7f-de79dae6c2fc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-h9jgk" Nov 25 07:30:55 crc kubenswrapper[5043]: I1125 07:30:55.985678 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l5vz2" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.001577 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d97q2\" (UniqueName: \"kubernetes.io/projected/01b420eb-b6d7-4534-9bdb-967c7a7c163f-kube-api-access-d97q2\") pod \"test-operator-controller-manager-5cb74df96-vqd9c\" (UID: \"01b420eb-b6d7-4534-9bdb-967c7a7c163f\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.001691 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgvxt\" (UniqueName: \"kubernetes.io/projected/d643e47d-246d-4551-a63c-9b9374e684b2-kube-api-access-rgvxt\") pod \"telemetry-operator-controller-manager-567f98c9d-mk7wm\" (UID: \"d643e47d-246d-4551-a63c-9b9374e684b2\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-mk7wm" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.001723 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q7cm\" (UniqueName: \"kubernetes.io/projected/17e00d26-c8ad-4dfd-90df-8705b2cb2bde-kube-api-access-7q7cm\") pod 
\"watcher-operator-controller-manager-864885998-54g5x\" (UID: \"17e00d26-c8ad-4dfd-90df-8705b2cb2bde\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-54g5x" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.001783 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8xvs\" (UniqueName: \"kubernetes.io/projected/8ea2c827-762f-437d-ad30-a3568d7a4af1-kube-api-access-g8xvs\") pod \"swift-operator-controller-manager-6fdc4fcf86-gmcsx\" (UID: \"8ea2c827-762f-437d-ad30-a3568d7a4af1\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.003954 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.022593 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8xvs\" (UniqueName: \"kubernetes.io/projected/8ea2c827-762f-437d-ad30-a3568d7a4af1-kube-api-access-g8xvs\") pod \"swift-operator-controller-manager-6fdc4fcf86-gmcsx\" (UID: \"8ea2c827-762f-437d-ad30-a3568d7a4af1\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.045553 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-dxd2x" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.046368 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgvxt\" (UniqueName: \"kubernetes.io/projected/d643e47d-246d-4551-a63c-9b9374e684b2-kube-api-access-rgvxt\") pod \"telemetry-operator-controller-manager-567f98c9d-mk7wm\" (UID: \"d643e47d-246d-4551-a63c-9b9374e684b2\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-mk7wm" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.053518 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.096224 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.102722 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q7cm\" (UniqueName: \"kubernetes.io/projected/17e00d26-c8ad-4dfd-90df-8705b2cb2bde-kube-api-access-7q7cm\") pod \"watcher-operator-controller-manager-864885998-54g5x\" (UID: \"17e00d26-c8ad-4dfd-90df-8705b2cb2bde\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-54g5x" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.102967 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d97q2\" (UniqueName: \"kubernetes.io/projected/01b420eb-b6d7-4534-9bdb-967c7a7c163f-kube-api-access-d97q2\") pod \"test-operator-controller-manager-5cb74df96-vqd9c\" (UID: \"01b420eb-b6d7-4534-9bdb-967c7a7c163f\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.118795 5043 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz"] Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.120382 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.122459 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-mk7wm" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.125076 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.135125 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-j47rd" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.136105 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.141558 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d97q2\" (UniqueName: \"kubernetes.io/projected/01b420eb-b6d7-4534-9bdb-967c7a7c163f-kube-api-access-d97q2\") pod \"test-operator-controller-manager-5cb74df96-vqd9c\" (UID: \"01b420eb-b6d7-4534-9bdb-967c7a7c163f\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.144498 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q7cm\" (UniqueName: \"kubernetes.io/projected/17e00d26-c8ad-4dfd-90df-8705b2cb2bde-kube-api-access-7q7cm\") pod \"watcher-operator-controller-manager-864885998-54g5x\" (UID: \"17e00d26-c8ad-4dfd-90df-8705b2cb2bde\") " 
pod="openstack-operators/watcher-operator-controller-manager-864885998-54g5x" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.149881 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz"] Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.159964 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.171011 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-54g5x" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.193219 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.215646 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7np4\" (UniqueName: \"kubernetes.io/projected/f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4-kube-api-access-n7np4\") pod \"openstack-operator-controller-manager-7cd5954d9-5zklz\" (UID: \"f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.215710 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4-metrics-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-5zklz\" (UID: \"f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.215947 5043 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"cert\" (UniqueName: \"kubernetes.io/secret/92e57762-522f-4a9d-8b03-732ba4dad5c1-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-x8q8x\" (UID: \"92e57762-522f-4a9d-8b03-732ba4dad5c1\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.216029 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4-webhook-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-5zklz\" (UID: \"f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.221778 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/92e57762-522f-4a9d-8b03-732ba4dad5c1-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-x8q8x\" (UID: \"92e57762-522f-4a9d-8b03-732ba4dad5c1\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.225057 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fmplr"] Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.226708 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fmplr" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.232716 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fmplr"] Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.237243 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-dp6xt" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.317563 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4-webhook-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-5zklz\" (UID: \"f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.317650 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vsz7\" (UniqueName: \"kubernetes.io/projected/6411a018-19de-4fba-bf72-6dfd5bd2ce29-kube-api-access-8vsz7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fmplr\" (UID: \"6411a018-19de-4fba-bf72-6dfd5bd2ce29\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fmplr" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.317677 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7np4\" (UniqueName: \"kubernetes.io/projected/f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4-kube-api-access-n7np4\") pod \"openstack-operator-controller-manager-7cd5954d9-5zklz\" (UID: \"f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.317719 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4-metrics-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-5zklz\" (UID: \"f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" Nov 25 07:30:56 crc kubenswrapper[5043]: E1125 07:30:56.317908 5043 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 07:30:56 crc kubenswrapper[5043]: E1125 07:30:56.317983 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4-webhook-certs podName:f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4 nodeName:}" failed. No retries permitted until 2025-11-25 07:30:56.817963393 +0000 UTC m=+920.986159114 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4-webhook-certs") pod "openstack-operator-controller-manager-7cd5954d9-5zklz" (UID: "f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4") : secret "webhook-server-cert" not found Nov 25 07:30:56 crc kubenswrapper[5043]: E1125 07:30:56.318300 5043 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 07:30:56 crc kubenswrapper[5043]: E1125 07:30:56.318350 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4-metrics-certs podName:f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4 nodeName:}" failed. No retries permitted until 2025-11-25 07:30:56.818336293 +0000 UTC m=+920.986532014 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4-metrics-certs") pod "openstack-operator-controller-manager-7cd5954d9-5zklz" (UID: "f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4") : secret "metrics-server-cert" not found Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.338525 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7np4\" (UniqueName: \"kubernetes.io/projected/f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4-kube-api-access-n7np4\") pod \"openstack-operator-controller-manager-7cd5954d9-5zklz\" (UID: \"f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.386497 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.419654 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vsz7\" (UniqueName: \"kubernetes.io/projected/6411a018-19de-4fba-bf72-6dfd5bd2ce29-kube-api-access-8vsz7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fmplr\" (UID: \"6411a018-19de-4fba-bf72-6dfd5bd2ce29\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fmplr" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.419871 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0627b3a-26de-453c-ab7f-de79dae6c2fc-cert\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-h9jgk\" (UID: \"c0627b3a-26de-453c-ab7f-de79dae6c2fc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-h9jgk" Nov 25 07:30:56 crc kubenswrapper[5043]: E1125 07:30:56.420033 5043 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 07:30:56 crc kubenswrapper[5043]: E1125 07:30:56.420090 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0627b3a-26de-453c-ab7f-de79dae6c2fc-cert podName:c0627b3a-26de-453c-ab7f-de79dae6c2fc nodeName:}" failed. No retries permitted until 2025-11-25 07:30:57.420070299 +0000 UTC m=+921.588266030 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0627b3a-26de-453c-ab7f-de79dae6c2fc-cert") pod "openstack-baremetal-operator-controller-manager-b58f89467-h9jgk" (UID: "c0627b3a-26de-453c-ab7f-de79dae6c2fc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.445789 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vsz7\" (UniqueName: \"kubernetes.io/projected/6411a018-19de-4fba-bf72-6dfd5bd2ce29-kube-api-access-8vsz7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fmplr\" (UID: \"6411a018-19de-4fba-bf72-6dfd5bd2ce29\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fmplr" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.510995 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k"] Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.563801 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fmplr" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.692623 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4"] Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.833371 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4-webhook-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-5zklz\" (UID: \"f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.833745 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4-metrics-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-5zklz\" (UID: \"f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" Nov 25 07:30:56 crc kubenswrapper[5043]: E1125 07:30:56.833578 5043 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 07:30:56 crc kubenswrapper[5043]: E1125 07:30:56.833858 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4-webhook-certs podName:f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4 nodeName:}" failed. No retries permitted until 2025-11-25 07:30:57.833829198 +0000 UTC m=+922.002024919 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4-webhook-certs") pod "openstack-operator-controller-manager-7cd5954d9-5zklz" (UID: "f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4") : secret "webhook-server-cert" not found Nov 25 07:30:56 crc kubenswrapper[5043]: E1125 07:30:56.833925 5043 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 07:30:56 crc kubenswrapper[5043]: E1125 07:30:56.834008 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4-metrics-certs podName:f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4 nodeName:}" failed. No retries permitted until 2025-11-25 07:30:57.833991433 +0000 UTC m=+922.002187144 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4-metrics-certs") pod "openstack-operator-controller-manager-7cd5954d9-5zklz" (UID: "f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4") : secret "metrics-server-cert" not found Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.882330 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" event={"ID":"cdc9a1bf-b6d9-4a36-bcf8-55f87525da45","Type":"ContainerStarted","Data":"dfe1f11f288190520cefc007212865e5fed52948281877aa67359518f35b98bb"} Nov 25 07:30:56 crc kubenswrapper[5043]: I1125 07:30:56.887539 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" event={"ID":"d9a368e6-f4bb-4896-9a2d-f7ceed65e933","Type":"ContainerStarted","Data":"f27ffd61f24b9d987194168932169d2fe77aa50e460fb4fb08797ecdd7afe423"} Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.042806 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz"] Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.049307 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw"] Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.058465 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h"] Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.371740 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96"] Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.382123 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-dxd2x"] Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.389580 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db"] Nov 25 07:30:57 crc kubenswrapper[5043]: W1125 07:30:57.406838 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3d7b5dc_2ced_4ac6_bdad_cd86342616a8.slice/crio-800133d80ae2d2a1de3ad575449ce06ac18f45b5e850527a2d613bc795125385 WatchSource:0}: Error finding container 800133d80ae2d2a1de3ad575449ce06ac18f45b5e850527a2d613bc795125385: Status 404 returned error can't find the container with id 800133d80ae2d2a1de3ad575449ce06ac18f45b5e850527a2d613bc795125385 Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.410264 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8"] Nov 25 07:30:57 crc kubenswrapper[5043]: W1125 07:30:57.411309 5043 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a93d5b1_742c_4a37_94ef_a60ffb008520.slice/crio-5b7e61879e33dd05c5c404142b5224515e6c8016209343329fd82a184df54d27 WatchSource:0}: Error finding container 5b7e61879e33dd05c5c404142b5224515e6c8016209343329fd82a184df54d27: Status 404 returned error can't find the container with id 5b7e61879e33dd05c5c404142b5224515e6c8016209343329fd82a184df54d27 Nov 25 07:30:57 crc kubenswrapper[5043]: W1125 07:30:57.422079 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd643e47d_246d_4551_a63c_9b9374e684b2.slice/crio-acb4a5647efa7da50223f2ee03b5b3bc834a8e141bca7b2368540d1d12dc370a WatchSource:0}: Error finding container acb4a5647efa7da50223f2ee03b5b3bc834a8e141bca7b2368540d1d12dc370a: Status 404 returned error can't find the container with id acb4a5647efa7da50223f2ee03b5b3bc834a8e141bca7b2368540d1d12dc370a Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.429727 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-l77gb"] Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.436565 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hgh7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-748dc6576f-gvwj8_openstack-operators(ff874d31-8e5a-4c0b-8f9c-e63513a00483): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.436711 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz"] Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.438903 5043 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hgh7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-748dc6576f-gvwj8_openstack-operators(ff874d31-8e5a-4c0b-8f9c-e63513a00483): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.438975 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g8xvs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6fdc4fcf86-gmcsx_openstack-operators(8ea2c827-762f-437d-ad30-a3568d7a4af1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.440772 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" podUID="ff874d31-8e5a-4c0b-8f9c-e63513a00483" Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.441260 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g8xvs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6fdc4fcf86-gmcsx_openstack-operators(8ea2c827-762f-437d-ad30-a3568d7a4af1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.442567 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx" podUID="8ea2c827-762f-437d-ad30-a3568d7a4af1" Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.445089 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0627b3a-26de-453c-ab7f-de79dae6c2fc-cert\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-h9jgk\" (UID: \"c0627b3a-26de-453c-ab7f-de79dae6c2fc\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-h9jgk" Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.445278 5043 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.445337 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0627b3a-26de-453c-ab7f-de79dae6c2fc-cert podName:c0627b3a-26de-453c-ab7f-de79dae6c2fc nodeName:}" failed. No retries permitted until 2025-11-25 07:30:59.445320427 +0000 UTC m=+923.613516148 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0627b3a-26de-453c-ab7f-de79dae6c2fc-cert") pod "openstack-baremetal-operator-controller-manager-b58f89467-h9jgk" (UID: "c0627b3a-26de-453c-ab7f-de79dae6c2fc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.474785 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-mk7wm"] Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.485083 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l5vz2"] Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.490049 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ct5xm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-58bb8d67cc-xx8rb_openstack-operators(c924fa47-53fb-4edc-8214-667ba1858ca2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.492000 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ct5xm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-58bb8d67cc-xx8rb_openstack-operators(c924fa47-53fb-4edc-8214-667ba1858ca2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.493197 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" podUID="c924fa47-53fb-4edc-8214-667ba1858ca2" Nov 25 07:30:57 crc kubenswrapper[5043]: W1125 07:30:57.493872 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01b420eb_b6d7_4534_9bdb_967c7a7c163f.slice/crio-1a900e4f6ad94fe0990da84d61f76437c553e6c6adbe90db138038663731ff36 WatchSource:0}: Error finding container 1a900e4f6ad94fe0990da84d61f76437c553e6c6adbe90db138038663731ff36: Status 404 returned error can't find the container with id 
1a900e4f6ad94fe0990da84d61f76437c553e6c6adbe90db138038663731ff36 Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.495024 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tdzr2"] Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.496427 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d97q2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cb74df96-vqd9c_openstack-operators(01b420eb-b6d7-4534-9bdb-967c7a7c163f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.498773 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d97q2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cb74df96-vqd9c_openstack-operators(01b420eb-b6d7-4534-9bdb-967c7a7c163f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.500015 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c" podUID="01b420eb-b6d7-4534-9bdb-967c7a7c163f" Nov 25 07:30:57 crc kubenswrapper[5043]: W1125 07:30:57.500626 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6411a018_19de_4fba_bf72_6dfd5bd2ce29.slice/crio-fb9f2596d87fe1d129ab8300034c9f2f43ec93be99f0739ec2eb70cbaa48a14d WatchSource:0}: Error finding container fb9f2596d87fe1d129ab8300034c9f2f43ec93be99f0739ec2eb70cbaa48a14d: Status 404 returned error can't find the container with id 
fb9f2596d87fe1d129ab8300034c9f2f43ec93be99f0739ec2eb70cbaa48a14d Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.504374 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8vsz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-fmplr_openstack-operators(6411a018-19de-4fba-bf72-6dfd5bd2ce29): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.505453 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fmplr" podUID="6411a018-19de-4fba-bf72-6dfd5bd2ce29" Nov 25 07:30:57 crc kubenswrapper[5043]: W1125 07:30:57.505888 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17e00d26_c8ad_4dfd_90df_8705b2cb2bde.slice/crio-39ce59a9e802c6f402e2a21278c186d63faead5bd63d862b8c56562dcac6fc2e WatchSource:0}: Error finding container 39ce59a9e802c6f402e2a21278c186d63faead5bd63d862b8c56562dcac6fc2e: Status 404 returned error can't find the container with id 39ce59a9e802c6f402e2a21278c186d63faead5bd63d862b8c56562dcac6fc2e Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.506238 
5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx"] Nov 25 07:30:57 crc kubenswrapper[5043]: W1125 07:30:57.506500 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4ff23e0_c2f3_4185_a7aa_df0f7e3596d2.slice/crio-6a484c331d859715bb0544830ca6706bc3be81e25febc152a76377e9536ab2a7 WatchSource:0}: Error finding container 6a484c331d859715bb0544830ca6706bc3be81e25febc152a76377e9536ab2a7: Status 404 returned error can't find the container with id 6a484c331d859715bb0544830ca6706bc3be81e25febc152a76377e9536ab2a7 Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.513784 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb"] Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.520248 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x"] Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.522114 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-84ccs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-66cf5c67ff-d5ffq_openstack-operators(d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.522236 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7q7cm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-54g5x_openstack-operators(17e00d26-c8ad-4dfd-90df-8705b2cb2bde): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.532910 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-84ccs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-66cf5c67ff-d5ffq_openstack-operators(d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.533012 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7q7cm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-54g5x_openstack-operators(17e00d26-c8ad-4dfd-90df-8705b2cb2bde): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.533071 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c"] Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.534108 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-54g5x" podUID="17e00d26-c8ad-4dfd-90df-8705b2cb2bde" Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.534179 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" 
pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq" podUID="d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2" Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.535877 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fmplr"] Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.542599 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-54g5x"] Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.554687 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq"] Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.851936 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4-webhook-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-5zklz\" (UID: \"f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.852061 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4-metrics-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-5zklz\" (UID: \"f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.852156 5043 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.852250 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4-webhook-certs 
podName:f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4 nodeName:}" failed. No retries permitted until 2025-11-25 07:30:59.852223851 +0000 UTC m=+924.020419572 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4-webhook-certs") pod "openstack-operator-controller-manager-7cd5954d9-5zklz" (UID: "f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4") : secret "webhook-server-cert" not found Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.861777 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4-metrics-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-5zklz\" (UID: \"f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.894695 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx" event={"ID":"8ea2c827-762f-437d-ad30-a3568d7a4af1","Type":"ContainerStarted","Data":"e50e44f11c0255e97cc82ca23c2011ed646f8432247614d588058c28a2b03f1b"} Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.897324 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fmplr" event={"ID":"6411a018-19de-4fba-bf72-6dfd5bd2ce29","Type":"ContainerStarted","Data":"fb9f2596d87fe1d129ab8300034c9f2f43ec93be99f0739ec2eb70cbaa48a14d"} Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.898893 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c" event={"ID":"01b420eb-b6d7-4534-9bdb-967c7a7c163f","Type":"ContainerStarted","Data":"1a900e4f6ad94fe0990da84d61f76437c553e6c6adbe90db138038663731ff36"} Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 
07:30:57.899318 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fmplr" podUID="6411a018-19de-4fba-bf72-6dfd5bd2ce29" Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.900932 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c" podUID="01b420eb-b6d7-4534-9bdb-967c7a7c163f" Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.901848 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx" podUID="8ea2c827-762f-437d-ad30-a3568d7a4af1" Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.902315 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq" 
event={"ID":"d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2","Type":"ContainerStarted","Data":"6a484c331d859715bb0544830ca6706bc3be81e25febc152a76377e9536ab2a7"} Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.903905 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" event={"ID":"ff874d31-8e5a-4c0b-8f9c-e63513a00483","Type":"ContainerStarted","Data":"7f7a8b1591a1a3bb3a0ecd666b7dabc5297a36a90380f336cad594c3eeeef53e"} Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.904564 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq" podUID="d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2" Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.907486 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" event={"ID":"92e57762-522f-4a9d-8b03-732ba4dad5c1","Type":"ContainerStarted","Data":"090419986c1bd3df035b7512fbc76ba93b9b6d89316304c5e7432d942f7ccad2"} Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.913520 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" event={"ID":"e5c62587-28b4-4a1e-8b73-ee9624ca7163","Type":"ContainerStarted","Data":"6a96b3e322c841ef224147552350beb2dde402495a9ed7cbc058eaa1f2357a4e"} Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.916508 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" podUID="ff874d31-8e5a-4c0b-8f9c-e63513a00483" Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.917306 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" event={"ID":"e020a857-3730-44f5-8e98-3e59868fbde6","Type":"ContainerStarted","Data":"57bafb2aa4e7b4cc05460b345665e227d46048fd7b1b40bc80350fba3b0ac131"} Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.919003 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-54g5x" event={"ID":"17e00d26-c8ad-4dfd-90df-8705b2cb2bde","Type":"ContainerStarted","Data":"39ce59a9e802c6f402e2a21278c186d63faead5bd63d862b8c56562dcac6fc2e"} Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.921194 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-54g5x" podUID="17e00d26-c8ad-4dfd-90df-8705b2cb2bde" Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.921862 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" event={"ID":"c20803a7-e9a9-441a-9e61-84673f3c02e8","Type":"ContainerStarted","Data":"702cbc02fd05a6e88ac63d7950d3b406ec58382e22b69d036a7bc1b4de539dfd"} Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.923844 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tdzr2" event={"ID":"9c9e4471-0205-478a-8717-be36a19d2a02","Type":"ContainerStarted","Data":"3de70dbf854a6ca350560c616dc563c5386473a09196a8b5e2b2f7c01ffd039a"} Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.925559 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db" event={"ID":"869f93a1-d6e7-46ff-a60f-0e997412a2fa","Type":"ContainerStarted","Data":"10fc22137c6a818ec13a8ee7da62ad47f1db8daaff572055230252f02f7ea446"} Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.934681 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" event={"ID":"c924fa47-53fb-4edc-8214-667ba1858ca2","Type":"ContainerStarted","Data":"3a334030dd458f41973d6101ab399a9f38d40325d21d77676de0b41183cba494"} Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.939947 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-dxd2x" event={"ID":"020c7247-0b68-419b-b97f-f7b0ea800142","Type":"ContainerStarted","Data":"cf6e000f322e2b546998e69183b7c4c44d25edaf79045edd957311566c1d584b"} Nov 25 07:30:57 crc kubenswrapper[5043]: E1125 07:30:57.940125 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a\\\"\", failed to 
\"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" podUID="c924fa47-53fb-4edc-8214-667ba1858ca2" Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.941421 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz" event={"ID":"a3d7b5dc-2ced-4ac6-bdad-cd86342616a8","Type":"ContainerStarted","Data":"800133d80ae2d2a1de3ad575449ce06ac18f45b5e850527a2d613bc795125385"} Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.942970 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" event={"ID":"b7005e58-64d2-470b-a3e7-22b67b7fbfb3","Type":"ContainerStarted","Data":"640959a734113ca5e26e0d802bb632f1a629def2cccd158bf74d359c4cd36b78"} Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.944369 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l5vz2" event={"ID":"bb800a2f-1864-47be-931b-7b99f7c7354f","Type":"ContainerStarted","Data":"5117dd7fba67bc11ecf778ceb9ba707cd459338ca1167aa783ae9da4803aec73"} Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.950723 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" event={"ID":"8a93d5b1-742c-4a37-94ef-a60ffb008520","Type":"ContainerStarted","Data":"5b7e61879e33dd05c5c404142b5224515e6c8016209343329fd82a184df54d27"} Nov 25 07:30:57 crc kubenswrapper[5043]: I1125 07:30:57.952492 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-mk7wm" 
event={"ID":"d643e47d-246d-4551-a63c-9b9374e684b2","Type":"ContainerStarted","Data":"acb4a5647efa7da50223f2ee03b5b3bc834a8e141bca7b2368540d1d12dc370a"} Nov 25 07:30:58 crc kubenswrapper[5043]: E1125 07:30:58.962827 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq" podUID="d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2" Nov 25 07:30:58 crc kubenswrapper[5043]: E1125 07:30:58.962950 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fmplr" podUID="6411a018-19de-4fba-bf72-6dfd5bd2ce29" Nov 25 07:30:58 crc kubenswrapper[5043]: E1125 07:30:58.963047 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c" podUID="01b420eb-b6d7-4534-9bdb-967c7a7c163f" Nov 25 07:30:58 crc kubenswrapper[5043]: E1125 07:30:58.964169 5043 pod_workers.go:1301] 
"Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" podUID="ff874d31-8e5a-4c0b-8f9c-e63513a00483" Nov 25 07:30:58 crc kubenswrapper[5043]: E1125 07:30:58.965045 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-54g5x" podUID="17e00d26-c8ad-4dfd-90df-8705b2cb2bde" Nov 25 07:30:58 crc kubenswrapper[5043]: E1125 07:30:58.966406 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx" podUID="8ea2c827-762f-437d-ad30-a3568d7a4af1" Nov 25 07:30:58 crc kubenswrapper[5043]: E1125 07:30:58.966863 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" podUID="c924fa47-53fb-4edc-8214-667ba1858ca2" Nov 25 07:30:59 crc kubenswrapper[5043]: I1125 07:30:59.482922 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0627b3a-26de-453c-ab7f-de79dae6c2fc-cert\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-h9jgk\" (UID: \"c0627b3a-26de-453c-ab7f-de79dae6c2fc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-h9jgk" Nov 25 07:30:59 crc kubenswrapper[5043]: I1125 07:30:59.500654 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0627b3a-26de-453c-ab7f-de79dae6c2fc-cert\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-h9jgk\" (UID: \"c0627b3a-26de-453c-ab7f-de79dae6c2fc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-h9jgk" Nov 25 07:30:59 crc kubenswrapper[5043]: I1125 07:30:59.630419 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-h9jgk" Nov 25 07:30:59 crc kubenswrapper[5043]: I1125 07:30:59.890452 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4-webhook-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-5zklz\" (UID: \"f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" Nov 25 07:30:59 crc kubenswrapper[5043]: I1125 07:30:59.899559 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4-webhook-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-5zklz\" (UID: \"f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" Nov 25 07:31:00 crc kubenswrapper[5043]: I1125 07:31:00.089720 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" Nov 25 07:31:05 crc kubenswrapper[5043]: I1125 07:31:05.333710 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz"] Nov 25 07:31:05 crc kubenswrapper[5043]: I1125 07:31:05.618186 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-h9jgk"] Nov 25 07:31:05 crc kubenswrapper[5043]: E1125 07:31:05.745150 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7mpt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-d5cc86f4b-x8q8x_openstack-operators(92e57762-522f-4a9d-8b03-732ba4dad5c1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 07:31:05 crc kubenswrapper[5043]: E1125 07:31:05.750703 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" podUID="92e57762-522f-4a9d-8b03-732ba4dad5c1" Nov 25 07:31:05 crc kubenswrapper[5043]: E1125 07:31:05.756687 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s2977,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-6w2db_openstack-operators(869f93a1-d6e7-46ff-a60f-0e997412a2fa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 07:31:05 crc kubenswrapper[5043]: E1125 07:31:05.761340 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db" podUID="869f93a1-d6e7-46ff-a60f-0e997412a2fa" Nov 25 07:31:05 crc kubenswrapper[5043]: E1125 07:31:05.765066 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7wkk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-m9bmz_openstack-operators(a3d7b5dc-2ced-4ac6-bdad-cd86342616a8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 07:31:05 crc kubenswrapper[5043]: E1125 07:31:05.767809 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz" podUID="a3d7b5dc-2ced-4ac6-bdad-cd86342616a8" Nov 25 07:31:06 crc kubenswrapper[5043]: I1125 07:31:06.028849 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" 
event={"ID":"f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4","Type":"ContainerStarted","Data":"dcf42ae8791a95a572ce1da470fe184394a8ec4a5f1857651e57f15991574a1c"} Nov 25 07:31:06 crc kubenswrapper[5043]: I1125 07:31:06.028923 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" event={"ID":"f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4","Type":"ContainerStarted","Data":"7a7a90e7e2a27e5700e2a9abb849fcc644423d1b9a855921dc2cde2b124d62f7"} Nov 25 07:31:06 crc kubenswrapper[5043]: I1125 07:31:06.029735 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" Nov 25 07:31:06 crc kubenswrapper[5043]: I1125 07:31:06.053599 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" event={"ID":"d9a368e6-f4bb-4896-9a2d-f7ceed65e933","Type":"ContainerStarted","Data":"09a9187c3cd96f5127ba1f2c8ddac211c9e9a4971656966882053dc5114494a7"} Nov 25 07:31:06 crc kubenswrapper[5043]: I1125 07:31:06.060191 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" event={"ID":"8a93d5b1-742c-4a37-94ef-a60ffb008520","Type":"ContainerStarted","Data":"7e6df44537f5c843dc7130d710043819e66cfec71a4a1934c0d83886f5196d5d"} Nov 25 07:31:06 crc kubenswrapper[5043]: I1125 07:31:06.096413 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" event={"ID":"92e57762-522f-4a9d-8b03-732ba4dad5c1","Type":"ContainerStarted","Data":"fdcc2555f3276b445e3eb69ddab4960acc10bdfc496bc930b16e6fbbdd02dd14"} Nov 25 07:31:06 crc kubenswrapper[5043]: I1125 07:31:06.097345 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" Nov 25 07:31:06 crc 
kubenswrapper[5043]: E1125 07:31:06.105720 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" podUID="92e57762-522f-4a9d-8b03-732ba4dad5c1" Nov 25 07:31:06 crc kubenswrapper[5043]: I1125 07:31:06.152247 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-h9jgk" event={"ID":"c0627b3a-26de-453c-ab7f-de79dae6c2fc","Type":"ContainerStarted","Data":"ae422cf42028b3bcc984288dc00743df1dc837918db63d9beb2f21947406ca40"} Nov 25 07:31:06 crc kubenswrapper[5043]: I1125 07:31:06.154118 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" podStartSLOduration=10.154093221 podStartE2EDuration="10.154093221s" podCreationTimestamp="2025-11-25 07:30:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:31:06.122024692 +0000 UTC m=+930.290220423" watchObservedRunningTime="2025-11-25 07:31:06.154093221 +0000 UTC m=+930.322288942" Nov 25 07:31:06 crc kubenswrapper[5043]: I1125 07:31:06.171640 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" event={"ID":"b7005e58-64d2-470b-a3e7-22b67b7fbfb3","Type":"ContainerStarted","Data":"fd64632e50e5a9b5de31e554a161e37f1ff06f24eeef96edb33fd301f5e626d6"} Nov 25 07:31:06 crc kubenswrapper[5043]: I1125 07:31:06.208189 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" 
event={"ID":"cdc9a1bf-b6d9-4a36-bcf8-55f87525da45","Type":"ContainerStarted","Data":"e7b731d42024281828b164ebdbdddedefd6c195e2846703f49280395adb6a405"} Nov 25 07:31:06 crc kubenswrapper[5043]: I1125 07:31:06.225787 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l5vz2" event={"ID":"bb800a2f-1864-47be-931b-7b99f7c7354f","Type":"ContainerStarted","Data":"643f940ad9b1b4bf6af5490ee68c3afd5242e986566d067c28b39d69b190fd3c"} Nov 25 07:31:06 crc kubenswrapper[5043]: I1125 07:31:06.242306 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db" event={"ID":"869f93a1-d6e7-46ff-a60f-0e997412a2fa","Type":"ContainerStarted","Data":"d21834bce874b3701dc38b9c5066bb896f13eb3ac417fbb76c592d954a213e6e"} Nov 25 07:31:06 crc kubenswrapper[5043]: I1125 07:31:06.242623 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db" Nov 25 07:31:06 crc kubenswrapper[5043]: E1125 07:31:06.251965 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db" podUID="869f93a1-d6e7-46ff-a60f-0e997412a2fa" Nov 25 07:31:06 crc kubenswrapper[5043]: I1125 07:31:06.259121 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" event={"ID":"c20803a7-e9a9-441a-9e61-84673f3c02e8","Type":"ContainerStarted","Data":"dd56f467de78cb2cee958e6ac10eab27df30928770e51472e78d51e88114f549"} Nov 25 07:31:06 crc kubenswrapper[5043]: I1125 07:31:06.272976 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tdzr2" event={"ID":"9c9e4471-0205-478a-8717-be36a19d2a02","Type":"ContainerStarted","Data":"4e788d44528e241b3b9fafc1ecc8bf8d25b10d37ecf73e193d0faac1929de95f"} Nov 25 07:31:06 crc kubenswrapper[5043]: I1125 07:31:06.289163 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-mk7wm" event={"ID":"d643e47d-246d-4551-a63c-9b9374e684b2","Type":"ContainerStarted","Data":"10181b15c5c44ae9e997127bb8a7167c3baf07b67b7d069085dc1bb6dc879907"} Nov 25 07:31:06 crc kubenswrapper[5043]: I1125 07:31:06.294740 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" event={"ID":"e5c62587-28b4-4a1e-8b73-ee9624ca7163","Type":"ContainerStarted","Data":"9c6dbeb38cd3dfb8c343653925e36700b00d27f719b0dd1dbb1d0052b8e48f60"} Nov 25 07:31:06 crc kubenswrapper[5043]: I1125 07:31:06.330858 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz" event={"ID":"a3d7b5dc-2ced-4ac6-bdad-cd86342616a8","Type":"ContainerStarted","Data":"a4ee369f479f675860452c1383a4868f5569acf2e937320937f82299143a1feb"} Nov 25 07:31:06 crc kubenswrapper[5043]: I1125 07:31:06.330945 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz" Nov 25 07:31:06 crc kubenswrapper[5043]: E1125 07:31:06.331836 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz" podUID="a3d7b5dc-2ced-4ac6-bdad-cd86342616a8" Nov 25 07:31:06 crc kubenswrapper[5043]: I1125 07:31:06.340334 5043 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-dxd2x" event={"ID":"020c7247-0b68-419b-b97f-f7b0ea800142","Type":"ContainerStarted","Data":"f820e8840659a45a5cf9fbf92f9fc7a11ed6d79d311dd0d20d00b98324a10cd9"} Nov 25 07:31:06 crc kubenswrapper[5043]: I1125 07:31:06.347164 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" event={"ID":"e020a857-3730-44f5-8e98-3e59868fbde6","Type":"ContainerStarted","Data":"8e22f79993da45fc8bf69d68104a5e249ca19914d332d0828ecbb0a241aa87ea"} Nov 25 07:31:07 crc kubenswrapper[5043]: E1125 07:31:07.353802 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db" podUID="869f93a1-d6e7-46ff-a60f-0e997412a2fa" Nov 25 07:31:07 crc kubenswrapper[5043]: E1125 07:31:07.355284 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" podUID="92e57762-522f-4a9d-8b03-732ba4dad5c1" Nov 25 07:31:07 crc kubenswrapper[5043]: E1125 07:31:07.355509 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz" podUID="a3d7b5dc-2ced-4ac6-bdad-cd86342616a8" Nov 25 07:31:10 crc kubenswrapper[5043]: I1125 07:31:10.107705 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" Nov 25 07:31:10 crc kubenswrapper[5043]: I1125 07:31:10.380812 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" event={"ID":"cdc9a1bf-b6d9-4a36-bcf8-55f87525da45","Type":"ContainerStarted","Data":"c7cb8c64ec905f6a5114de4fb7be524a4e3fbbe7908508bc67a283067efed0ab"} Nov 25 07:31:10 crc kubenswrapper[5043]: I1125 07:31:10.382124 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" Nov 25 07:31:10 crc kubenswrapper[5043]: I1125 07:31:10.387291 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" event={"ID":"d9a368e6-f4bb-4896-9a2d-f7ceed65e933","Type":"ContainerStarted","Data":"da102931eb89fad35c590bcc12d2177812f9335e5f9dc1aff1aedf2d29aa49b4"} Nov 25 07:31:10 crc kubenswrapper[5043]: I1125 07:31:10.387516 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" Nov 25 07:31:10 crc kubenswrapper[5043]: I1125 07:31:10.389250 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" Nov 25 07:31:10 crc kubenswrapper[5043]: I1125 07:31:10.389983 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" Nov 25 07:31:10 crc kubenswrapper[5043]: I1125 07:31:10.403048 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-h9jgk" event={"ID":"c0627b3a-26de-453c-ab7f-de79dae6c2fc","Type":"ContainerStarted","Data":"6a5bdd21ab37b51fea30586384891e1db8d95f0cf0d26974adae54aba711d02f"} Nov 25 
07:31:10 crc kubenswrapper[5043]: I1125 07:31:10.408187 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" podStartSLOduration=2.016921775 podStartE2EDuration="15.40816921s" podCreationTimestamp="2025-11-25 07:30:55 +0000 UTC" firstStartedPulling="2025-11-25 07:30:56.579764139 +0000 UTC m=+920.747959860" lastFinishedPulling="2025-11-25 07:31:09.971011554 +0000 UTC m=+934.139207295" observedRunningTime="2025-11-25 07:31:10.406953517 +0000 UTC m=+934.575149238" watchObservedRunningTime="2025-11-25 07:31:10.40816921 +0000 UTC m=+934.576364931" Nov 25 07:31:10 crc kubenswrapper[5043]: I1125 07:31:10.434054 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" podStartSLOduration=2.167444679 podStartE2EDuration="15.434033523s" podCreationTimestamp="2025-11-25 07:30:55 +0000 UTC" firstStartedPulling="2025-11-25 07:30:56.744506274 +0000 UTC m=+920.912701995" lastFinishedPulling="2025-11-25 07:31:10.011095128 +0000 UTC m=+934.179290839" observedRunningTime="2025-11-25 07:31:10.430317354 +0000 UTC m=+934.598513075" watchObservedRunningTime="2025-11-25 07:31:10.434033523 +0000 UTC m=+934.602229264" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.410343 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l5vz2" event={"ID":"bb800a2f-1864-47be-931b-7b99f7c7354f","Type":"ContainerStarted","Data":"acb513eadb8ee524c9067b99f68a2713619f4c1b9297964114bc7e4cbac2f773"} Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.410567 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l5vz2" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.413095 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l5vz2" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.413551 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" event={"ID":"e020a857-3730-44f5-8e98-3e59868fbde6","Type":"ContainerStarted","Data":"a5744378f56313383d1a420d8c0ca197d41f5758dfef07fc63ffc17418f1795d"} Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.413917 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.415233 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-mk7wm" event={"ID":"d643e47d-246d-4551-a63c-9b9374e684b2","Type":"ContainerStarted","Data":"c1450765aa065e4fd6523dda6c55c3057e544f639b86223822d0b90c5cc2dab6"} Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.415441 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-mk7wm" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.416394 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" event={"ID":"c20803a7-e9a9-441a-9e61-84673f3c02e8","Type":"ContainerStarted","Data":"9fdc7ed69b385a17c9c4890688ce74e793f71b25bdc6cc325dec5b9cf667ce01"} Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.416904 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.417367 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-mk7wm" 
Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.418318 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" event={"ID":"b7005e58-64d2-470b-a3e7-22b67b7fbfb3","Type":"ContainerStarted","Data":"6ae101cf39fb9703ea581fcf708533e16faa8d2cd278718ed35c21a97674d8cd"} Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.418640 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.420063 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" event={"ID":"8a93d5b1-742c-4a37-94ef-a60ffb008520","Type":"ContainerStarted","Data":"ea423636cd6fb232cd2576f8f0b75975efdf3303ba6341221a143b3df741dc41"} Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.420137 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.420184 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.420248 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.422315 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-h9jgk" event={"ID":"c0627b3a-26de-453c-ab7f-de79dae6c2fc","Type":"ContainerStarted","Data":"58b2bc30c0ec58c570d0cfdd8f5bde44cf619e4e2c53c670362111e76129f721"} Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.422392 5043 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-h9jgk" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.423863 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" event={"ID":"e5c62587-28b4-4a1e-8b73-ee9624ca7163","Type":"ContainerStarted","Data":"76f21f46ad455f0230b470d37aa80660f9b1b0b4e1e27e9a82ca79ab92e6a413"} Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.424304 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.427267 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.427986 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tdzr2" event={"ID":"9c9e4471-0205-478a-8717-be36a19d2a02","Type":"ContainerStarted","Data":"408c4e8abd89a58794893ea5470bea677c68b6d47a9b0193834dae464b69ed0c"} Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.428011 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l5vz2" podStartSLOduration=3.860688808 podStartE2EDuration="16.428002971s" podCreationTimestamp="2025-11-25 07:30:55 +0000 UTC" firstStartedPulling="2025-11-25 07:30:57.428584738 +0000 UTC m=+921.596780459" lastFinishedPulling="2025-11-25 07:31:09.995898891 +0000 UTC m=+934.164094622" observedRunningTime="2025-11-25 07:31:11.424512538 +0000 UTC m=+935.592708259" watchObservedRunningTime="2025-11-25 07:31:11.428002971 +0000 UTC m=+935.596198692" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.428421 5043 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tdzr2" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.431201 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tdzr2" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.436246 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-dxd2x" event={"ID":"020c7247-0b68-419b-b97f-f7b0ea800142","Type":"ContainerStarted","Data":"16c38ea1a17cdc30f59b54d8bc63034d9c9928650c9b75cb255ecc796ecf2dac"} Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.436676 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-dxd2x" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.442216 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-dxd2x" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.446300 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" podStartSLOduration=3.589449658 podStartE2EDuration="16.446280531s" podCreationTimestamp="2025-11-25 07:30:55 +0000 UTC" firstStartedPulling="2025-11-25 07:30:57.388867803 +0000 UTC m=+921.557063524" lastFinishedPulling="2025-11-25 07:31:10.245698676 +0000 UTC m=+934.413894397" observedRunningTime="2025-11-25 07:31:11.442114209 +0000 UTC m=+935.610309940" watchObservedRunningTime="2025-11-25 07:31:11.446280531 +0000 UTC m=+935.614476252" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.474126 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" 
podStartSLOduration=3.776001009 podStartE2EDuration="16.474107507s" podCreationTimestamp="2025-11-25 07:30:55 +0000 UTC" firstStartedPulling="2025-11-25 07:30:57.429195234 +0000 UTC m=+921.597390955" lastFinishedPulling="2025-11-25 07:31:10.127301732 +0000 UTC m=+934.295497453" observedRunningTime="2025-11-25 07:31:11.46115214 +0000 UTC m=+935.629347861" watchObservedRunningTime="2025-11-25 07:31:11.474107507 +0000 UTC m=+935.642303228" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.502723 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" podStartSLOduration=3.5735733339999998 podStartE2EDuration="16.502700454s" podCreationTimestamp="2025-11-25 07:30:55 +0000 UTC" firstStartedPulling="2025-11-25 07:30:57.042774288 +0000 UTC m=+921.210970029" lastFinishedPulling="2025-11-25 07:31:09.971901428 +0000 UTC m=+934.140097149" observedRunningTime="2025-11-25 07:31:11.492347816 +0000 UTC m=+935.660543557" watchObservedRunningTime="2025-11-25 07:31:11.502700454 +0000 UTC m=+935.670896165" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.516749 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" podStartSLOduration=3.52152855 podStartE2EDuration="16.516713679s" podCreationTimestamp="2025-11-25 07:30:55 +0000 UTC" firstStartedPulling="2025-11-25 07:30:57.04544337 +0000 UTC m=+921.213639101" lastFinishedPulling="2025-11-25 07:31:10.040628519 +0000 UTC m=+934.208824230" observedRunningTime="2025-11-25 07:31:11.512044563 +0000 UTC m=+935.680240284" watchObservedRunningTime="2025-11-25 07:31:11.516713679 +0000 UTC m=+935.684909410" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.544158 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" 
podStartSLOduration=3.526045759 podStartE2EDuration="16.544138234s" podCreationTimestamp="2025-11-25 07:30:55 +0000 UTC" firstStartedPulling="2025-11-25 07:30:57.050229117 +0000 UTC m=+921.218424868" lastFinishedPulling="2025-11-25 07:31:10.068321612 +0000 UTC m=+934.236517343" observedRunningTime="2025-11-25 07:31:11.528176986 +0000 UTC m=+935.696372717" watchObservedRunningTime="2025-11-25 07:31:11.544138234 +0000 UTC m=+935.712333955" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.551888 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-mk7wm" podStartSLOduration=3.434494817 podStartE2EDuration="16.551862751s" podCreationTimestamp="2025-11-25 07:30:55 +0000 UTC" firstStartedPulling="2025-11-25 07:30:57.434507547 +0000 UTC m=+921.602703268" lastFinishedPulling="2025-11-25 07:31:10.551875481 +0000 UTC m=+934.720071202" observedRunningTime="2025-11-25 07:31:11.551402758 +0000 UTC m=+935.719598479" watchObservedRunningTime="2025-11-25 07:31:11.551862751 +0000 UTC m=+935.720058472" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.601954 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-h9jgk" podStartSLOduration=12.905165558 podStartE2EDuration="16.601934042s" podCreationTimestamp="2025-11-25 07:30:55 +0000 UTC" firstStartedPulling="2025-11-25 07:31:05.646892517 +0000 UTC m=+929.815088238" lastFinishedPulling="2025-11-25 07:31:09.343661001 +0000 UTC m=+933.511856722" observedRunningTime="2025-11-25 07:31:11.598962073 +0000 UTC m=+935.767157814" watchObservedRunningTime="2025-11-25 07:31:11.601934042 +0000 UTC m=+935.770129783" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.631117 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tdzr2" 
podStartSLOduration=3.414141121 podStartE2EDuration="16.631100794s" podCreationTimestamp="2025-11-25 07:30:55 +0000 UTC" firstStartedPulling="2025-11-25 07:30:57.434168768 +0000 UTC m=+921.602364489" lastFinishedPulling="2025-11-25 07:31:10.651128441 +0000 UTC m=+934.819324162" observedRunningTime="2025-11-25 07:31:11.627826347 +0000 UTC m=+935.796022068" watchObservedRunningTime="2025-11-25 07:31:11.631100794 +0000 UTC m=+935.799296515" Nov 25 07:31:11 crc kubenswrapper[5043]: I1125 07:31:11.651540 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-dxd2x" podStartSLOduration=3.5351075229999998 podStartE2EDuration="16.651520832s" podCreationTimestamp="2025-11-25 07:30:55 +0000 UTC" firstStartedPulling="2025-11-25 07:30:57.402100328 +0000 UTC m=+921.570296049" lastFinishedPulling="2025-11-25 07:31:10.518513637 +0000 UTC m=+934.686709358" observedRunningTime="2025-11-25 07:31:11.649340453 +0000 UTC m=+935.817536184" watchObservedRunningTime="2025-11-25 07:31:11.651520832 +0000 UTC m=+935.819716553" Nov 25 07:31:12 crc kubenswrapper[5043]: I1125 07:31:12.450052 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" Nov 25 07:31:12 crc kubenswrapper[5043]: I1125 07:31:12.451957 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" Nov 25 07:31:13 crc kubenswrapper[5043]: I1125 07:31:13.457729 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" event={"ID":"c924fa47-53fb-4edc-8214-667ba1858ca2","Type":"ContainerStarted","Data":"5501422c8e9790f59c9175ea12b58c36c7e59cea2a4f35de4f35deb58e75ab2b"} Nov 25 07:31:13 crc kubenswrapper[5043]: I1125 07:31:13.458166 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" event={"ID":"c924fa47-53fb-4edc-8214-667ba1858ca2","Type":"ContainerStarted","Data":"6372410c9df390654f8059f44c1a1b1b2e22f25ec250efa6dc38a56358fb12aa"} Nov 25 07:31:13 crc kubenswrapper[5043]: I1125 07:31:13.458437 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" Nov 25 07:31:13 crc kubenswrapper[5043]: I1125 07:31:13.478568 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" podStartSLOduration=3.646066396 podStartE2EDuration="18.478537256s" podCreationTimestamp="2025-11-25 07:30:55 +0000 UTC" firstStartedPulling="2025-11-25 07:30:57.489777947 +0000 UTC m=+921.657973678" lastFinishedPulling="2025-11-25 07:31:12.322248787 +0000 UTC m=+936.490444538" observedRunningTime="2025-11-25 07:31:13.475947016 +0000 UTC m=+937.644142737" watchObservedRunningTime="2025-11-25 07:31:13.478537256 +0000 UTC m=+937.646733067" Nov 25 07:31:15 crc kubenswrapper[5043]: I1125 07:31:15.487430 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fmplr" event={"ID":"6411a018-19de-4fba-bf72-6dfd5bd2ce29","Type":"ContainerStarted","Data":"fd664c3c8f603b160c088ca6f5c54f8b9055cc009c04f010450910fb0b0f3cf5"} Nov 25 07:31:15 crc kubenswrapper[5043]: I1125 07:31:15.501755 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fmplr" podStartSLOduration=1.7405668699999999 podStartE2EDuration="19.501737868s" podCreationTimestamp="2025-11-25 07:30:56 +0000 UTC" firstStartedPulling="2025-11-25 07:30:57.504277707 +0000 UTC m=+921.672473418" lastFinishedPulling="2025-11-25 07:31:15.265448695 +0000 UTC m=+939.433644416" observedRunningTime="2025-11-25 07:31:15.501072539 +0000 
UTC m=+939.669268280" watchObservedRunningTime="2025-11-25 07:31:15.501737868 +0000 UTC m=+939.669933589" Nov 25 07:31:16 crc kubenswrapper[5043]: I1125 07:31:16.006908 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz" Nov 25 07:31:16 crc kubenswrapper[5043]: I1125 07:31:16.076850 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db" Nov 25 07:31:16 crc kubenswrapper[5043]: I1125 07:31:16.393919 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" Nov 25 07:31:17 crc kubenswrapper[5043]: I1125 07:31:17.276155 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:31:17 crc kubenswrapper[5043]: I1125 07:31:17.276217 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:31:19 crc kubenswrapper[5043]: I1125 07:31:19.652138 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-h9jgk" Nov 25 07:31:20 crc kubenswrapper[5043]: I1125 07:31:20.535534 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-54g5x" 
event={"ID":"17e00d26-c8ad-4dfd-90df-8705b2cb2bde","Type":"ContainerStarted","Data":"c4c834e2e83d7758640db27c2817e3c118bbd2322c6c3cb503f8309435e4f170"} Nov 25 07:31:20 crc kubenswrapper[5043]: I1125 07:31:20.538280 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c" event={"ID":"01b420eb-b6d7-4534-9bdb-967c7a7c163f","Type":"ContainerStarted","Data":"3ac19bcf572e7516f131d1f6e2edbd9ef287f6773f873685a1d448c89ed2dbf7"} Nov 25 07:31:20 crc kubenswrapper[5043]: I1125 07:31:20.552838 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx" event={"ID":"8ea2c827-762f-437d-ad30-a3568d7a4af1","Type":"ContainerStarted","Data":"3e8f789775d000389ff8a62066f32344548d5cac70d5dab18156d3183ce897a3"} Nov 25 07:31:20 crc kubenswrapper[5043]: I1125 07:31:20.554156 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" event={"ID":"ff874d31-8e5a-4c0b-8f9c-e63513a00483","Type":"ContainerStarted","Data":"c116bccae148120904962a84cc7cadffc6f0e0b759da2d7c946cf5e5381228e7"} Nov 25 07:31:20 crc kubenswrapper[5043]: I1125 07:31:20.557114 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" event={"ID":"92e57762-522f-4a9d-8b03-732ba4dad5c1","Type":"ContainerStarted","Data":"a9dbdc41813bafe694b87ca59150e113391b42d6ffae251531394f507b6b3583"} Nov 25 07:31:20 crc kubenswrapper[5043]: I1125 07:31:20.576174 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db" podStartSLOduration=17.86785225 podStartE2EDuration="25.576150822s" podCreationTimestamp="2025-11-25 07:30:55 +0000 UTC" firstStartedPulling="2025-11-25 07:30:57.398868521 +0000 UTC m=+921.567064242" lastFinishedPulling="2025-11-25 
07:31:05.107167073 +0000 UTC m=+929.275362814" observedRunningTime="2025-11-25 07:31:20.570246564 +0000 UTC m=+944.738442295" watchObservedRunningTime="2025-11-25 07:31:20.576150822 +0000 UTC m=+944.744346563" Nov 25 07:31:20 crc kubenswrapper[5043]: I1125 07:31:20.605987 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" podStartSLOduration=17.978743260999998 podStartE2EDuration="25.60596657s" podCreationTimestamp="2025-11-25 07:30:55 +0000 UTC" firstStartedPulling="2025-11-25 07:30:57.489462639 +0000 UTC m=+921.657658360" lastFinishedPulling="2025-11-25 07:31:05.116685948 +0000 UTC m=+929.284881669" observedRunningTime="2025-11-25 07:31:20.600559065 +0000 UTC m=+944.768754796" watchObservedRunningTime="2025-11-25 07:31:20.60596657 +0000 UTC m=+944.774162301" Nov 25 07:31:21 crc kubenswrapper[5043]: I1125 07:31:21.567303 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq" event={"ID":"d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2","Type":"ContainerStarted","Data":"78a66f3e10bd00d358084f26e00fb47a905588a937766c24964ad53150b7d565"} Nov 25 07:31:21 crc kubenswrapper[5043]: I1125 07:31:21.567365 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq" event={"ID":"d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2","Type":"ContainerStarted","Data":"d3f5354af511fabd515578336d4da1424eb4476b5bf22d380545c5a98527aa12"} Nov 25 07:31:21 crc kubenswrapper[5043]: I1125 07:31:21.567563 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq" Nov 25 07:31:21 crc kubenswrapper[5043]: I1125 07:31:21.569754 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db" 
event={"ID":"869f93a1-d6e7-46ff-a60f-0e997412a2fa","Type":"ContainerStarted","Data":"748e0781f09a0ac74b02f3f7c29cced72c372971910ad3754f30e405f6b061e6"} Nov 25 07:31:21 crc kubenswrapper[5043]: I1125 07:31:21.571378 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx" event={"ID":"8ea2c827-762f-437d-ad30-a3568d7a4af1","Type":"ContainerStarted","Data":"763d3d4970b2ee2ec30ad5a616c0284c8a8d3f1289c603483a3ded32745dfe8b"} Nov 25 07:31:21 crc kubenswrapper[5043]: I1125 07:31:21.571519 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx" Nov 25 07:31:21 crc kubenswrapper[5043]: I1125 07:31:21.574368 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" event={"ID":"ff874d31-8e5a-4c0b-8f9c-e63513a00483","Type":"ContainerStarted","Data":"dcfa013574ef343c57dc0f96b9f343c8604dc807a25411cbc0f5fe6e9f90d9ea"} Nov 25 07:31:21 crc kubenswrapper[5043]: I1125 07:31:21.574821 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" Nov 25 07:31:21 crc kubenswrapper[5043]: I1125 07:31:21.578682 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz" event={"ID":"a3d7b5dc-2ced-4ac6-bdad-cd86342616a8","Type":"ContainerStarted","Data":"2ca409ad598a4c90ad5c65148d9569a61a39b14fe5f474ecf3a736d46ec82b30"} Nov 25 07:31:21 crc kubenswrapper[5043]: I1125 07:31:21.582530 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-54g5x" event={"ID":"17e00d26-c8ad-4dfd-90df-8705b2cb2bde","Type":"ContainerStarted","Data":"2760afa396cad1f405208b2c64d6914a5868d3d9c54d4f536514ad58900787b1"} Nov 25 07:31:21 crc 
kubenswrapper[5043]: I1125 07:31:21.587905 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c" event={"ID":"01b420eb-b6d7-4534-9bdb-967c7a7c163f","Type":"ContainerStarted","Data":"6a872f9088ab0c0b62fe27c8852a4d1e384ec136e4731ba6f7bec4bfd6b67a29"} Nov 25 07:31:21 crc kubenswrapper[5043]: I1125 07:31:21.588295 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c" Nov 25 07:31:21 crc kubenswrapper[5043]: I1125 07:31:21.594421 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq" podStartSLOduration=4.112486705 podStartE2EDuration="26.5944024s" podCreationTimestamp="2025-11-25 07:30:55 +0000 UTC" firstStartedPulling="2025-11-25 07:30:57.5219794 +0000 UTC m=+921.690175121" lastFinishedPulling="2025-11-25 07:31:20.003895075 +0000 UTC m=+944.172090816" observedRunningTime="2025-11-25 07:31:21.587113055 +0000 UTC m=+945.755308796" watchObservedRunningTime="2025-11-25 07:31:21.5944024 +0000 UTC m=+945.762598131" Nov 25 07:31:21 crc kubenswrapper[5043]: I1125 07:31:21.606752 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" podStartSLOduration=4.038443812 podStartE2EDuration="26.606730901s" podCreationTimestamp="2025-11-25 07:30:55 +0000 UTC" firstStartedPulling="2025-11-25 07:30:57.436392377 +0000 UTC m=+921.604588098" lastFinishedPulling="2025-11-25 07:31:20.004679466 +0000 UTC m=+944.172875187" observedRunningTime="2025-11-25 07:31:21.603827833 +0000 UTC m=+945.772023634" watchObservedRunningTime="2025-11-25 07:31:21.606730901 +0000 UTC m=+945.774926622" Nov 25 07:31:21 crc kubenswrapper[5043]: I1125 07:31:21.626759 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/watcher-operator-controller-manager-864885998-54g5x" podStartSLOduration=4.142456179 podStartE2EDuration="26.626736867s" podCreationTimestamp="2025-11-25 07:30:55 +0000 UTC" firstStartedPulling="2025-11-25 07:30:57.522178036 +0000 UTC m=+921.690373757" lastFinishedPulling="2025-11-25 07:31:20.006458724 +0000 UTC m=+944.174654445" observedRunningTime="2025-11-25 07:31:21.621916878 +0000 UTC m=+945.790112599" watchObservedRunningTime="2025-11-25 07:31:21.626736867 +0000 UTC m=+945.794932588" Nov 25 07:31:21 crc kubenswrapper[5043]: I1125 07:31:21.644905 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx" podStartSLOduration=4.063470432 podStartE2EDuration="26.644883163s" podCreationTimestamp="2025-11-25 07:30:55 +0000 UTC" firstStartedPulling="2025-11-25 07:30:57.438868453 +0000 UTC m=+921.607064174" lastFinishedPulling="2025-11-25 07:31:20.020281184 +0000 UTC m=+944.188476905" observedRunningTime="2025-11-25 07:31:21.637542897 +0000 UTC m=+945.805738648" watchObservedRunningTime="2025-11-25 07:31:21.644883163 +0000 UTC m=+945.813078914" Nov 25 07:31:21 crc kubenswrapper[5043]: I1125 07:31:21.664535 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz" podStartSLOduration=18.981028772 podStartE2EDuration="26.664512469s" podCreationTimestamp="2025-11-25 07:30:55 +0000 UTC" firstStartedPulling="2025-11-25 07:30:57.42942705 +0000 UTC m=+921.597622771" lastFinishedPulling="2025-11-25 07:31:05.112910727 +0000 UTC m=+929.281106468" observedRunningTime="2025-11-25 07:31:21.655776245 +0000 UTC m=+945.823972026" watchObservedRunningTime="2025-11-25 07:31:21.664512469 +0000 UTC m=+945.832708210" Nov 25 07:31:21 crc kubenswrapper[5043]: I1125 07:31:21.688259 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c" podStartSLOduration=4.17904245 podStartE2EDuration="26.688234315s" podCreationTimestamp="2025-11-25 07:30:55 +0000 UTC" firstStartedPulling="2025-11-25 07:30:57.496273642 +0000 UTC m=+921.664469363" lastFinishedPulling="2025-11-25 07:31:20.005465477 +0000 UTC m=+944.173661228" observedRunningTime="2025-11-25 07:31:21.681083054 +0000 UTC m=+945.849278795" watchObservedRunningTime="2025-11-25 07:31:21.688234315 +0000 UTC m=+945.856430056" Nov 25 07:31:22 crc kubenswrapper[5043]: I1125 07:31:22.594845 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-54g5x" Nov 25 07:31:25 crc kubenswrapper[5043]: I1125 07:31:25.803935 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" Nov 25 07:31:25 crc kubenswrapper[5043]: I1125 07:31:25.811140 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" Nov 25 07:31:26 crc kubenswrapper[5043]: I1125 07:31:26.103177 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx" Nov 25 07:31:26 crc kubenswrapper[5043]: I1125 07:31:26.163594 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c" Nov 25 07:31:26 crc kubenswrapper[5043]: I1125 07:31:26.173927 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-864885998-54g5x" Nov 25 07:31:26 crc kubenswrapper[5043]: I1125 07:31:26.206650 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq" Nov 25 07:31:40 crc kubenswrapper[5043]: I1125 07:31:40.982722 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-m8pcg"] Nov 25 07:31:40 crc kubenswrapper[5043]: I1125 07:31:40.984459 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-m8pcg" Nov 25 07:31:40 crc kubenswrapper[5043]: I1125 07:31:40.989314 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 25 07:31:40 crc kubenswrapper[5043]: I1125 07:31:40.989328 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 25 07:31:40 crc kubenswrapper[5043]: I1125 07:31:40.989380 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 25 07:31:40 crc kubenswrapper[5043]: I1125 07:31:40.989775 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-tgg77" Nov 25 07:31:40 crc kubenswrapper[5043]: I1125 07:31:40.994461 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-m8pcg"] Nov 25 07:31:41 crc kubenswrapper[5043]: I1125 07:31:41.030093 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6584b49599-q9wmf"] Nov 25 07:31:41 crc kubenswrapper[5043]: I1125 07:31:41.031133 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-q9wmf" Nov 25 07:31:41 crc kubenswrapper[5043]: I1125 07:31:41.035023 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 25 07:31:41 crc kubenswrapper[5043]: I1125 07:31:41.041650 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-q9wmf"] Nov 25 07:31:41 crc kubenswrapper[5043]: I1125 07:31:41.043933 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cd70907-0a1c-4874-bf1b-54e142d543d2-config\") pod \"dnsmasq-dns-7bdd77c89-m8pcg\" (UID: \"9cd70907-0a1c-4874-bf1b-54e142d543d2\") " pod="openstack/dnsmasq-dns-7bdd77c89-m8pcg" Nov 25 07:31:41 crc kubenswrapper[5043]: I1125 07:31:41.044014 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47v7w\" (UniqueName: \"kubernetes.io/projected/9cd70907-0a1c-4874-bf1b-54e142d543d2-kube-api-access-47v7w\") pod \"dnsmasq-dns-7bdd77c89-m8pcg\" (UID: \"9cd70907-0a1c-4874-bf1b-54e142d543d2\") " pod="openstack/dnsmasq-dns-7bdd77c89-m8pcg" Nov 25 07:31:41 crc kubenswrapper[5043]: I1125 07:31:41.145552 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b7f2729-ce28-4493-9d03-ecdbf8e37425-config\") pod \"dnsmasq-dns-6584b49599-q9wmf\" (UID: \"1b7f2729-ce28-4493-9d03-ecdbf8e37425\") " pod="openstack/dnsmasq-dns-6584b49599-q9wmf" Nov 25 07:31:41 crc kubenswrapper[5043]: I1125 07:31:41.145666 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cd70907-0a1c-4874-bf1b-54e142d543d2-config\") pod \"dnsmasq-dns-7bdd77c89-m8pcg\" (UID: \"9cd70907-0a1c-4874-bf1b-54e142d543d2\") " pod="openstack/dnsmasq-dns-7bdd77c89-m8pcg" Nov 25 07:31:41 crc 
kubenswrapper[5043]: I1125 07:31:41.145711 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b7f2729-ce28-4493-9d03-ecdbf8e37425-dns-svc\") pod \"dnsmasq-dns-6584b49599-q9wmf\" (UID: \"1b7f2729-ce28-4493-9d03-ecdbf8e37425\") " pod="openstack/dnsmasq-dns-6584b49599-q9wmf" Nov 25 07:31:41 crc kubenswrapper[5043]: I1125 07:31:41.145736 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47v7w\" (UniqueName: \"kubernetes.io/projected/9cd70907-0a1c-4874-bf1b-54e142d543d2-kube-api-access-47v7w\") pod \"dnsmasq-dns-7bdd77c89-m8pcg\" (UID: \"9cd70907-0a1c-4874-bf1b-54e142d543d2\") " pod="openstack/dnsmasq-dns-7bdd77c89-m8pcg" Nov 25 07:31:41 crc kubenswrapper[5043]: I1125 07:31:41.145969 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g4p2\" (UniqueName: \"kubernetes.io/projected/1b7f2729-ce28-4493-9d03-ecdbf8e37425-kube-api-access-2g4p2\") pod \"dnsmasq-dns-6584b49599-q9wmf\" (UID: \"1b7f2729-ce28-4493-9d03-ecdbf8e37425\") " pod="openstack/dnsmasq-dns-6584b49599-q9wmf" Nov 25 07:31:41 crc kubenswrapper[5043]: I1125 07:31:41.146493 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cd70907-0a1c-4874-bf1b-54e142d543d2-config\") pod \"dnsmasq-dns-7bdd77c89-m8pcg\" (UID: \"9cd70907-0a1c-4874-bf1b-54e142d543d2\") " pod="openstack/dnsmasq-dns-7bdd77c89-m8pcg" Nov 25 07:31:41 crc kubenswrapper[5043]: I1125 07:31:41.173328 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47v7w\" (UniqueName: \"kubernetes.io/projected/9cd70907-0a1c-4874-bf1b-54e142d543d2-kube-api-access-47v7w\") pod \"dnsmasq-dns-7bdd77c89-m8pcg\" (UID: \"9cd70907-0a1c-4874-bf1b-54e142d543d2\") " pod="openstack/dnsmasq-dns-7bdd77c89-m8pcg" Nov 25 07:31:41 crc kubenswrapper[5043]: I1125 
07:31:41.247162 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g4p2\" (UniqueName: \"kubernetes.io/projected/1b7f2729-ce28-4493-9d03-ecdbf8e37425-kube-api-access-2g4p2\") pod \"dnsmasq-dns-6584b49599-q9wmf\" (UID: \"1b7f2729-ce28-4493-9d03-ecdbf8e37425\") " pod="openstack/dnsmasq-dns-6584b49599-q9wmf" Nov 25 07:31:41 crc kubenswrapper[5043]: I1125 07:31:41.247219 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b7f2729-ce28-4493-9d03-ecdbf8e37425-config\") pod \"dnsmasq-dns-6584b49599-q9wmf\" (UID: \"1b7f2729-ce28-4493-9d03-ecdbf8e37425\") " pod="openstack/dnsmasq-dns-6584b49599-q9wmf" Nov 25 07:31:41 crc kubenswrapper[5043]: I1125 07:31:41.247255 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b7f2729-ce28-4493-9d03-ecdbf8e37425-dns-svc\") pod \"dnsmasq-dns-6584b49599-q9wmf\" (UID: \"1b7f2729-ce28-4493-9d03-ecdbf8e37425\") " pod="openstack/dnsmasq-dns-6584b49599-q9wmf" Nov 25 07:31:41 crc kubenswrapper[5043]: I1125 07:31:41.248113 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b7f2729-ce28-4493-9d03-ecdbf8e37425-dns-svc\") pod \"dnsmasq-dns-6584b49599-q9wmf\" (UID: \"1b7f2729-ce28-4493-9d03-ecdbf8e37425\") " pod="openstack/dnsmasq-dns-6584b49599-q9wmf" Nov 25 07:31:41 crc kubenswrapper[5043]: I1125 07:31:41.248576 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b7f2729-ce28-4493-9d03-ecdbf8e37425-config\") pod \"dnsmasq-dns-6584b49599-q9wmf\" (UID: \"1b7f2729-ce28-4493-9d03-ecdbf8e37425\") " pod="openstack/dnsmasq-dns-6584b49599-q9wmf" Nov 25 07:31:41 crc kubenswrapper[5043]: I1125 07:31:41.262391 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g4p2\" 
(UniqueName: \"kubernetes.io/projected/1b7f2729-ce28-4493-9d03-ecdbf8e37425-kube-api-access-2g4p2\") pod \"dnsmasq-dns-6584b49599-q9wmf\" (UID: \"1b7f2729-ce28-4493-9d03-ecdbf8e37425\") " pod="openstack/dnsmasq-dns-6584b49599-q9wmf" Nov 25 07:31:41 crc kubenswrapper[5043]: I1125 07:31:41.303674 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-m8pcg" Nov 25 07:31:41 crc kubenswrapper[5043]: I1125 07:31:41.357863 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-q9wmf" Nov 25 07:31:41 crc kubenswrapper[5043]: I1125 07:31:41.740040 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-m8pcg"] Nov 25 07:31:41 crc kubenswrapper[5043]: W1125 07:31:41.748421 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cd70907_0a1c_4874_bf1b_54e142d543d2.slice/crio-81cbae2871d8e6518faaa34abd7f380cb5f638ee3f0fa8d17cb81e063c0cdd02 WatchSource:0}: Error finding container 81cbae2871d8e6518faaa34abd7f380cb5f638ee3f0fa8d17cb81e063c0cdd02: Status 404 returned error can't find the container with id 81cbae2871d8e6518faaa34abd7f380cb5f638ee3f0fa8d17cb81e063c0cdd02 Nov 25 07:31:41 crc kubenswrapper[5043]: I1125 07:31:41.751427 5043 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 07:31:41 crc kubenswrapper[5043]: I1125 07:31:41.867189 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-q9wmf"] Nov 25 07:31:42 crc kubenswrapper[5043]: I1125 07:31:42.761037 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdd77c89-m8pcg" event={"ID":"9cd70907-0a1c-4874-bf1b-54e142d543d2","Type":"ContainerStarted","Data":"81cbae2871d8e6518faaa34abd7f380cb5f638ee3f0fa8d17cb81e063c0cdd02"} Nov 25 07:31:42 crc kubenswrapper[5043]: I1125 
07:31:42.763842 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6584b49599-q9wmf" event={"ID":"1b7f2729-ce28-4493-9d03-ecdbf8e37425","Type":"ContainerStarted","Data":"f1a5bffd8c085a2e3bd9877f7b7e02b1948980f48420bc06641068f9569b9693"} Nov 25 07:31:43 crc kubenswrapper[5043]: I1125 07:31:43.813508 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-m8pcg"] Nov 25 07:31:43 crc kubenswrapper[5043]: I1125 07:31:43.840950 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-vggdj"] Nov 25 07:31:43 crc kubenswrapper[5043]: I1125 07:31:43.843745 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c6d9948dc-vggdj" Nov 25 07:31:43 crc kubenswrapper[5043]: I1125 07:31:43.852358 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-vggdj"] Nov 25 07:31:43 crc kubenswrapper[5043]: I1125 07:31:43.996682 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7efc89bf-d04b-4b9e-a86e-3eab0f122fa9-dns-svc\") pod \"dnsmasq-dns-7c6d9948dc-vggdj\" (UID: \"7efc89bf-d04b-4b9e-a86e-3eab0f122fa9\") " pod="openstack/dnsmasq-dns-7c6d9948dc-vggdj" Nov 25 07:31:43 crc kubenswrapper[5043]: I1125 07:31:43.996772 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtj7f\" (UniqueName: \"kubernetes.io/projected/7efc89bf-d04b-4b9e-a86e-3eab0f122fa9-kube-api-access-rtj7f\") pod \"dnsmasq-dns-7c6d9948dc-vggdj\" (UID: \"7efc89bf-d04b-4b9e-a86e-3eab0f122fa9\") " pod="openstack/dnsmasq-dns-7c6d9948dc-vggdj" Nov 25 07:31:43 crc kubenswrapper[5043]: I1125 07:31:43.996810 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7efc89bf-d04b-4b9e-a86e-3eab0f122fa9-config\") pod \"dnsmasq-dns-7c6d9948dc-vggdj\" (UID: \"7efc89bf-d04b-4b9e-a86e-3eab0f122fa9\") " pod="openstack/dnsmasq-dns-7c6d9948dc-vggdj" Nov 25 07:31:44 crc kubenswrapper[5043]: I1125 07:31:44.097692 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7efc89bf-d04b-4b9e-a86e-3eab0f122fa9-dns-svc\") pod \"dnsmasq-dns-7c6d9948dc-vggdj\" (UID: \"7efc89bf-d04b-4b9e-a86e-3eab0f122fa9\") " pod="openstack/dnsmasq-dns-7c6d9948dc-vggdj" Nov 25 07:31:44 crc kubenswrapper[5043]: I1125 07:31:44.097764 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtj7f\" (UniqueName: \"kubernetes.io/projected/7efc89bf-d04b-4b9e-a86e-3eab0f122fa9-kube-api-access-rtj7f\") pod \"dnsmasq-dns-7c6d9948dc-vggdj\" (UID: \"7efc89bf-d04b-4b9e-a86e-3eab0f122fa9\") " pod="openstack/dnsmasq-dns-7c6d9948dc-vggdj" Nov 25 07:31:44 crc kubenswrapper[5043]: I1125 07:31:44.097803 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7efc89bf-d04b-4b9e-a86e-3eab0f122fa9-config\") pod \"dnsmasq-dns-7c6d9948dc-vggdj\" (UID: \"7efc89bf-d04b-4b9e-a86e-3eab0f122fa9\") " pod="openstack/dnsmasq-dns-7c6d9948dc-vggdj" Nov 25 07:31:44 crc kubenswrapper[5043]: I1125 07:31:44.098715 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7efc89bf-d04b-4b9e-a86e-3eab0f122fa9-dns-svc\") pod \"dnsmasq-dns-7c6d9948dc-vggdj\" (UID: \"7efc89bf-d04b-4b9e-a86e-3eab0f122fa9\") " pod="openstack/dnsmasq-dns-7c6d9948dc-vggdj" Nov 25 07:31:44 crc kubenswrapper[5043]: I1125 07:31:44.099093 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7efc89bf-d04b-4b9e-a86e-3eab0f122fa9-config\") pod \"dnsmasq-dns-7c6d9948dc-vggdj\" (UID: 
\"7efc89bf-d04b-4b9e-a86e-3eab0f122fa9\") " pod="openstack/dnsmasq-dns-7c6d9948dc-vggdj" Nov 25 07:31:44 crc kubenswrapper[5043]: I1125 07:31:44.139944 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtj7f\" (UniqueName: \"kubernetes.io/projected/7efc89bf-d04b-4b9e-a86e-3eab0f122fa9-kube-api-access-rtj7f\") pod \"dnsmasq-dns-7c6d9948dc-vggdj\" (UID: \"7efc89bf-d04b-4b9e-a86e-3eab0f122fa9\") " pod="openstack/dnsmasq-dns-7c6d9948dc-vggdj" Nov 25 07:31:44 crc kubenswrapper[5043]: I1125 07:31:44.158098 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-q9wmf"] Nov 25 07:31:44 crc kubenswrapper[5043]: I1125 07:31:44.181043 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c6d9948dc-vggdj" Nov 25 07:31:44 crc kubenswrapper[5043]: I1125 07:31:44.192722 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-45dtg"] Nov 25 07:31:44 crc kubenswrapper[5043]: I1125 07:31:44.194947 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-45dtg" Nov 25 07:31:44 crc kubenswrapper[5043]: I1125 07:31:44.206410 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-45dtg"] Nov 25 07:31:44 crc kubenswrapper[5043]: I1125 07:31:44.301372 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a9a5bed-e2a6-4bda-a395-1f82840192a4-dns-svc\") pod \"dnsmasq-dns-6486446b9f-45dtg\" (UID: \"8a9a5bed-e2a6-4bda-a395-1f82840192a4\") " pod="openstack/dnsmasq-dns-6486446b9f-45dtg" Nov 25 07:31:44 crc kubenswrapper[5043]: I1125 07:31:44.301735 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6lrp\" (UniqueName: \"kubernetes.io/projected/8a9a5bed-e2a6-4bda-a395-1f82840192a4-kube-api-access-g6lrp\") pod \"dnsmasq-dns-6486446b9f-45dtg\" (UID: \"8a9a5bed-e2a6-4bda-a395-1f82840192a4\") " pod="openstack/dnsmasq-dns-6486446b9f-45dtg" Nov 25 07:31:44 crc kubenswrapper[5043]: I1125 07:31:44.301782 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a9a5bed-e2a6-4bda-a395-1f82840192a4-config\") pod \"dnsmasq-dns-6486446b9f-45dtg\" (UID: \"8a9a5bed-e2a6-4bda-a395-1f82840192a4\") " pod="openstack/dnsmasq-dns-6486446b9f-45dtg" Nov 25 07:31:44 crc kubenswrapper[5043]: I1125 07:31:44.403074 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a9a5bed-e2a6-4bda-a395-1f82840192a4-dns-svc\") pod \"dnsmasq-dns-6486446b9f-45dtg\" (UID: \"8a9a5bed-e2a6-4bda-a395-1f82840192a4\") " pod="openstack/dnsmasq-dns-6486446b9f-45dtg" Nov 25 07:31:44 crc kubenswrapper[5043]: I1125 07:31:44.403121 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6lrp\" (UniqueName: 
\"kubernetes.io/projected/8a9a5bed-e2a6-4bda-a395-1f82840192a4-kube-api-access-g6lrp\") pod \"dnsmasq-dns-6486446b9f-45dtg\" (UID: \"8a9a5bed-e2a6-4bda-a395-1f82840192a4\") " pod="openstack/dnsmasq-dns-6486446b9f-45dtg" Nov 25 07:31:44 crc kubenswrapper[5043]: I1125 07:31:44.403170 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a9a5bed-e2a6-4bda-a395-1f82840192a4-config\") pod \"dnsmasq-dns-6486446b9f-45dtg\" (UID: \"8a9a5bed-e2a6-4bda-a395-1f82840192a4\") " pod="openstack/dnsmasq-dns-6486446b9f-45dtg" Nov 25 07:31:44 crc kubenswrapper[5043]: I1125 07:31:44.404403 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a9a5bed-e2a6-4bda-a395-1f82840192a4-dns-svc\") pod \"dnsmasq-dns-6486446b9f-45dtg\" (UID: \"8a9a5bed-e2a6-4bda-a395-1f82840192a4\") " pod="openstack/dnsmasq-dns-6486446b9f-45dtg" Nov 25 07:31:44 crc kubenswrapper[5043]: I1125 07:31:44.404679 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a9a5bed-e2a6-4bda-a395-1f82840192a4-config\") pod \"dnsmasq-dns-6486446b9f-45dtg\" (UID: \"8a9a5bed-e2a6-4bda-a395-1f82840192a4\") " pod="openstack/dnsmasq-dns-6486446b9f-45dtg" Nov 25 07:31:44 crc kubenswrapper[5043]: I1125 07:31:44.430539 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6lrp\" (UniqueName: \"kubernetes.io/projected/8a9a5bed-e2a6-4bda-a395-1f82840192a4-kube-api-access-g6lrp\") pod \"dnsmasq-dns-6486446b9f-45dtg\" (UID: \"8a9a5bed-e2a6-4bda-a395-1f82840192a4\") " pod="openstack/dnsmasq-dns-6486446b9f-45dtg" Nov 25 07:31:44 crc kubenswrapper[5043]: I1125 07:31:44.605952 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-45dtg" Nov 25 07:31:44 crc kubenswrapper[5043]: I1125 07:31:44.717930 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-vggdj"] Nov 25 07:31:44 crc kubenswrapper[5043]: I1125 07:31:44.795732 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6d9948dc-vggdj" event={"ID":"7efc89bf-d04b-4b9e-a86e-3eab0f122fa9","Type":"ContainerStarted","Data":"bbc53ae2a47512565f66510040db2a02b6f0afa8250612778ef0820cdeb36660"} Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.007892 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.009021 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.010745 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.011255 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.011458 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.011553 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.011677 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.011843 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-67cq4" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.019248 5043 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 25 07:31:45 crc kubenswrapper[5043]: W1125 07:31:45.053279 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a9a5bed_e2a6_4bda_a395_1f82840192a4.slice/crio-ff0500235f3cf6ee3030c6d9c2908e8e05e918d87fa17caf883dfaa647d10919 WatchSource:0}: Error finding container ff0500235f3cf6ee3030c6d9c2908e8e05e918d87fa17caf883dfaa647d10919: Status 404 returned error can't find the container with id ff0500235f3cf6ee3030c6d9c2908e8e05e918d87fa17caf883dfaa647d10919 Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.053864 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.060975 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-45dtg"] Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.120092 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.120149 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5f4796f0-ec1b-4f62-bdad-9927841c80db-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.120175 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f4796f0-ec1b-4f62-bdad-9927841c80db-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.120227 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5f4796f0-ec1b-4f62-bdad-9927841c80db-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.120247 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5f4796f0-ec1b-4f62-bdad-9927841c80db-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.120277 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5f4796f0-ec1b-4f62-bdad-9927841c80db-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.120327 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwh84\" (UniqueName: \"kubernetes.io/projected/5f4796f0-ec1b-4f62-bdad-9927841c80db-kube-api-access-mwh84\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.120350 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5f4796f0-ec1b-4f62-bdad-9927841c80db-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " 
pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.120370 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5f4796f0-ec1b-4f62-bdad-9927841c80db-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.120422 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5f4796f0-ec1b-4f62-bdad-9927841c80db-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.120442 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5f4796f0-ec1b-4f62-bdad-9927841c80db-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.221375 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5f4796f0-ec1b-4f62-bdad-9927841c80db-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.221444 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwh84\" (UniqueName: \"kubernetes.io/projected/5f4796f0-ec1b-4f62-bdad-9927841c80db-kube-api-access-mwh84\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 
07:31:45.221467 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5f4796f0-ec1b-4f62-bdad-9927841c80db-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.221486 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5f4796f0-ec1b-4f62-bdad-9927841c80db-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.221513 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5f4796f0-ec1b-4f62-bdad-9927841c80db-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.221533 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5f4796f0-ec1b-4f62-bdad-9927841c80db-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.221559 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.221585 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/5f4796f0-ec1b-4f62-bdad-9927841c80db-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.221639 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f4796f0-ec1b-4f62-bdad-9927841c80db-config-data\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.221679 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5f4796f0-ec1b-4f62-bdad-9927841c80db-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.221708 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5f4796f0-ec1b-4f62-bdad-9927841c80db-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.222725 5043 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.222981 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5f4796f0-ec1b-4f62-bdad-9927841c80db-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " 
pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.223401 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5f4796f0-ec1b-4f62-bdad-9927841c80db-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.224323 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f4796f0-ec1b-4f62-bdad-9927841c80db-config-data\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.224498 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5f4796f0-ec1b-4f62-bdad-9927841c80db-server-conf\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.227454 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5f4796f0-ec1b-4f62-bdad-9927841c80db-pod-info\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.227535 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5f4796f0-ec1b-4f62-bdad-9927841c80db-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.228129 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/5f4796f0-ec1b-4f62-bdad-9927841c80db-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.229264 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5f4796f0-ec1b-4f62-bdad-9927841c80db-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.229364 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5f4796f0-ec1b-4f62-bdad-9927841c80db-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.238916 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwh84\" (UniqueName: \"kubernetes.io/projected/5f4796f0-ec1b-4f62-bdad-9927841c80db-kube-api-access-mwh84\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.282789 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.300368 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.301459 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.311353 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rkxqg" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.311645 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.311815 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.311980 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.312086 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.312167 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.313275 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.325209 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.355394 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.428615 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d61213dd-2002-44b6-8904-21c0a754ae66-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.428693 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d61213dd-2002-44b6-8904-21c0a754ae66-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.428718 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d61213dd-2002-44b6-8904-21c0a754ae66-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.428813 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d61213dd-2002-44b6-8904-21c0a754ae66-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.428855 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d61213dd-2002-44b6-8904-21c0a754ae66-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.428888 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d61213dd-2002-44b6-8904-21c0a754ae66-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.428920 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.429011 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d61213dd-2002-44b6-8904-21c0a754ae66-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.429075 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8qjv\" (UniqueName: \"kubernetes.io/projected/d61213dd-2002-44b6-8904-21c0a754ae66-kube-api-access-s8qjv\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.429151 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d61213dd-2002-44b6-8904-21c0a754ae66-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.429173 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d61213dd-2002-44b6-8904-21c0a754ae66-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.530705 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8qjv\" (UniqueName: \"kubernetes.io/projected/d61213dd-2002-44b6-8904-21c0a754ae66-kube-api-access-s8qjv\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.530781 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d61213dd-2002-44b6-8904-21c0a754ae66-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.530801 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d61213dd-2002-44b6-8904-21c0a754ae66-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.530834 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d61213dd-2002-44b6-8904-21c0a754ae66-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 
07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.530878 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d61213dd-2002-44b6-8904-21c0a754ae66-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.531505 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d61213dd-2002-44b6-8904-21c0a754ae66-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.531536 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d61213dd-2002-44b6-8904-21c0a754ae66-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.531578 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d61213dd-2002-44b6-8904-21c0a754ae66-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.531624 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d61213dd-2002-44b6-8904-21c0a754ae66-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.531649 5043 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.531670 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d61213dd-2002-44b6-8904-21c0a754ae66-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.531999 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d61213dd-2002-44b6-8904-21c0a754ae66-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.532442 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d61213dd-2002-44b6-8904-21c0a754ae66-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.532593 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d61213dd-2002-44b6-8904-21c0a754ae66-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.532632 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/d61213dd-2002-44b6-8904-21c0a754ae66-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.532789 5043 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.533145 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d61213dd-2002-44b6-8904-21c0a754ae66-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.536268 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d61213dd-2002-44b6-8904-21c0a754ae66-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.536921 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d61213dd-2002-44b6-8904-21c0a754ae66-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.539287 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d61213dd-2002-44b6-8904-21c0a754ae66-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.539422 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d61213dd-2002-44b6-8904-21c0a754ae66-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.552453 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8qjv\" (UniqueName: \"kubernetes.io/projected/d61213dd-2002-44b6-8904-21c0a754ae66-kube-api-access-s8qjv\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.556788 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.643460 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.819648 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-45dtg" event={"ID":"8a9a5bed-e2a6-4bda-a395-1f82840192a4","Type":"ContainerStarted","Data":"ff0500235f3cf6ee3030c6d9c2908e8e05e918d87fa17caf883dfaa647d10919"} Nov 25 07:31:45 crc kubenswrapper[5043]: I1125 07:31:45.835474 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 07:31:46 crc kubenswrapper[5043]: I1125 07:31:46.937216 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 25 07:31:46 crc kubenswrapper[5043]: I1125 07:31:46.938759 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 25 07:31:46 crc kubenswrapper[5043]: I1125 07:31:46.944379 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 25 07:31:46 crc kubenswrapper[5043]: I1125 07:31:46.945251 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 25 07:31:46 crc kubenswrapper[5043]: I1125 07:31:46.945934 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 25 07:31:46 crc kubenswrapper[5043]: I1125 07:31:46.946149 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-ghk4c" Nov 25 07:31:46 crc kubenswrapper[5043]: I1125 07:31:46.951422 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 25 07:31:46 crc kubenswrapper[5043]: I1125 07:31:46.953986 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.058820 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-lwccd\" (UniqueName: \"kubernetes.io/projected/961b9ca6-9248-485e-9361-1e9bc78e9058-kube-api-access-lwccd\") pod \"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") " pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.058907 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/961b9ca6-9248-485e-9361-1e9bc78e9058-config-data-default\") pod \"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") " pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.059008 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/961b9ca6-9248-485e-9361-1e9bc78e9058-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") " pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.059075 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/961b9ca6-9248-485e-9361-1e9bc78e9058-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") " pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.059202 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/961b9ca6-9248-485e-9361-1e9bc78e9058-operator-scripts\") pod \"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") " pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.059370 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/961b9ca6-9248-485e-9361-1e9bc78e9058-kolla-config\") pod \"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") " pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.059422 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") " pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.059450 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/961b9ca6-9248-485e-9361-1e9bc78e9058-config-data-generated\") pod \"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") " pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.160598 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") " pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.160681 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/961b9ca6-9248-485e-9361-1e9bc78e9058-config-data-generated\") pod \"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") " pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.160709 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwccd\" (UniqueName: \"kubernetes.io/projected/961b9ca6-9248-485e-9361-1e9bc78e9058-kube-api-access-lwccd\") pod 
\"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") " pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.160731 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/961b9ca6-9248-485e-9361-1e9bc78e9058-config-data-default\") pod \"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") " pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.160759 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/961b9ca6-9248-485e-9361-1e9bc78e9058-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") " pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.160786 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/961b9ca6-9248-485e-9361-1e9bc78e9058-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") " pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.160820 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/961b9ca6-9248-485e-9361-1e9bc78e9058-operator-scripts\") pod \"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") " pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.160856 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/961b9ca6-9248-485e-9361-1e9bc78e9058-kolla-config\") pod \"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") " pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc 
kubenswrapper[5043]: I1125 07:31:47.161056 5043 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.161092 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/961b9ca6-9248-485e-9361-1e9bc78e9058-config-data-generated\") pod \"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") " pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.162109 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/961b9ca6-9248-485e-9361-1e9bc78e9058-config-data-default\") pod \"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") " pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.162380 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/961b9ca6-9248-485e-9361-1e9bc78e9058-operator-scripts\") pod \"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") " pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.162824 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/961b9ca6-9248-485e-9361-1e9bc78e9058-kolla-config\") pod \"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") " pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.166012 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/961b9ca6-9248-485e-9361-1e9bc78e9058-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") " pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.166505 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/961b9ca6-9248-485e-9361-1e9bc78e9058-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") " pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.184395 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwccd\" (UniqueName: \"kubernetes.io/projected/961b9ca6-9248-485e-9361-1e9bc78e9058-kube-api-access-lwccd\") pod \"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") " pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.196434 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"961b9ca6-9248-485e-9361-1e9bc78e9058\") " pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.269767 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.282325 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:31:47 crc kubenswrapper[5043]: I1125 07:31:47.282391 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.347776 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.351789 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.355046 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.357284 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.357452 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-xhrcb" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.357554 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.363808 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.382096 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a22a0679-f2ea-46b8-88f5-d010717699d1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.382204 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.382245 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsn47\" (UniqueName: \"kubernetes.io/projected/a22a0679-f2ea-46b8-88f5-d010717699d1-kube-api-access-hsn47\") 
pod \"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.382304 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a22a0679-f2ea-46b8-88f5-d010717699d1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.382406 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a22a0679-f2ea-46b8-88f5-d010717699d1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.382498 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a22a0679-f2ea-46b8-88f5-d010717699d1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.382612 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a22a0679-f2ea-46b8-88f5-d010717699d1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.382668 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/a22a0679-f2ea-46b8-88f5-d010717699d1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.488449 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a22a0679-f2ea-46b8-88f5-d010717699d1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.488505 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.488526 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsn47\" (UniqueName: \"kubernetes.io/projected/a22a0679-f2ea-46b8-88f5-d010717699d1-kube-api-access-hsn47\") pod \"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.488559 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a22a0679-f2ea-46b8-88f5-d010717699d1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.488626 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a22a0679-f2ea-46b8-88f5-d010717699d1-config-data-default\") pod 
\"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.488650 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a22a0679-f2ea-46b8-88f5-d010717699d1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.488673 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a22a0679-f2ea-46b8-88f5-d010717699d1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.488752 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a22a0679-f2ea-46b8-88f5-d010717699d1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.489483 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a22a0679-f2ea-46b8-88f5-d010717699d1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.490183 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a22a0679-f2ea-46b8-88f5-d010717699d1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") " 
pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.490586 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a22a0679-f2ea-46b8-88f5-d010717699d1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.496042 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a22a0679-f2ea-46b8-88f5-d010717699d1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.496653 5043 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.516383 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a22a0679-f2ea-46b8-88f5-d010717699d1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.528148 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a22a0679-f2ea-46b8-88f5-d010717699d1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 
07:31:48.539401 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsn47\" (UniqueName: \"kubernetes.io/projected/a22a0679-f2ea-46b8-88f5-d010717699d1-kube-api-access-hsn47\") pod \"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.559721 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"a22a0679-f2ea-46b8-88f5-d010717699d1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.668174 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.804949 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.805913 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.815313 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.815687 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.815765 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-2ppmm" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.831579 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.894263 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3077d275-063c-4a4d-97bf-b1b006e32f6f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3077d275-063c-4a4d-97bf-b1b006e32f6f\") " pod="openstack/memcached-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.894357 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3077d275-063c-4a4d-97bf-b1b006e32f6f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3077d275-063c-4a4d-97bf-b1b006e32f6f\") " pod="openstack/memcached-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.894462 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3077d275-063c-4a4d-97bf-b1b006e32f6f-config-data\") pod \"memcached-0\" (UID: \"3077d275-063c-4a4d-97bf-b1b006e32f6f\") " pod="openstack/memcached-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.894503 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3077d275-063c-4a4d-97bf-b1b006e32f6f-kolla-config\") pod \"memcached-0\" (UID: \"3077d275-063c-4a4d-97bf-b1b006e32f6f\") " pod="openstack/memcached-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.894716 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dndqv\" (UniqueName: \"kubernetes.io/projected/3077d275-063c-4a4d-97bf-b1b006e32f6f-kube-api-access-dndqv\") pod \"memcached-0\" (UID: \"3077d275-063c-4a4d-97bf-b1b006e32f6f\") " pod="openstack/memcached-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.996568 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dndqv\" (UniqueName: \"kubernetes.io/projected/3077d275-063c-4a4d-97bf-b1b006e32f6f-kube-api-access-dndqv\") pod \"memcached-0\" (UID: \"3077d275-063c-4a4d-97bf-b1b006e32f6f\") " pod="openstack/memcached-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.996683 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3077d275-063c-4a4d-97bf-b1b006e32f6f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3077d275-063c-4a4d-97bf-b1b006e32f6f\") " pod="openstack/memcached-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.996756 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3077d275-063c-4a4d-97bf-b1b006e32f6f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3077d275-063c-4a4d-97bf-b1b006e32f6f\") " pod="openstack/memcached-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.996787 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3077d275-063c-4a4d-97bf-b1b006e32f6f-config-data\") pod \"memcached-0\" (UID: 
\"3077d275-063c-4a4d-97bf-b1b006e32f6f\") " pod="openstack/memcached-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.996811 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3077d275-063c-4a4d-97bf-b1b006e32f6f-kolla-config\") pod \"memcached-0\" (UID: \"3077d275-063c-4a4d-97bf-b1b006e32f6f\") " pod="openstack/memcached-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.997832 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3077d275-063c-4a4d-97bf-b1b006e32f6f-config-data\") pod \"memcached-0\" (UID: \"3077d275-063c-4a4d-97bf-b1b006e32f6f\") " pod="openstack/memcached-0" Nov 25 07:31:48 crc kubenswrapper[5043]: I1125 07:31:48.997864 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3077d275-063c-4a4d-97bf-b1b006e32f6f-kolla-config\") pod \"memcached-0\" (UID: \"3077d275-063c-4a4d-97bf-b1b006e32f6f\") " pod="openstack/memcached-0" Nov 25 07:31:49 crc kubenswrapper[5043]: I1125 07:31:49.019231 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3077d275-063c-4a4d-97bf-b1b006e32f6f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3077d275-063c-4a4d-97bf-b1b006e32f6f\") " pod="openstack/memcached-0" Nov 25 07:31:49 crc kubenswrapper[5043]: I1125 07:31:49.019876 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3077d275-063c-4a4d-97bf-b1b006e32f6f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3077d275-063c-4a4d-97bf-b1b006e32f6f\") " pod="openstack/memcached-0" Nov 25 07:31:49 crc kubenswrapper[5043]: I1125 07:31:49.026094 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dndqv\" (UniqueName: 
\"kubernetes.io/projected/3077d275-063c-4a4d-97bf-b1b006e32f6f-kube-api-access-dndqv\") pod \"memcached-0\" (UID: \"3077d275-063c-4a4d-97bf-b1b006e32f6f\") " pod="openstack/memcached-0" Nov 25 07:31:49 crc kubenswrapper[5043]: I1125 07:31:49.122650 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 25 07:31:49 crc kubenswrapper[5043]: W1125 07:31:49.954116 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f4796f0_ec1b_4f62_bdad_9927841c80db.slice/crio-3be916abebf188127e2f0f68992f57c060d6aa909f912af80b15209bffc7384c WatchSource:0}: Error finding container 3be916abebf188127e2f0f68992f57c060d6aa909f912af80b15209bffc7384c: Status 404 returned error can't find the container with id 3be916abebf188127e2f0f68992f57c060d6aa909f912af80b15209bffc7384c Nov 25 07:31:50 crc kubenswrapper[5043]: I1125 07:31:50.699555 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 07:31:50 crc kubenswrapper[5043]: I1125 07:31:50.701517 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 07:31:50 crc kubenswrapper[5043]: I1125 07:31:50.710507 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 07:31:50 crc kubenswrapper[5043]: I1125 07:31:50.718443 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-cglcl" Nov 25 07:31:50 crc kubenswrapper[5043]: I1125 07:31:50.721364 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddgrp\" (UniqueName: \"kubernetes.io/projected/4c5266b5-29ce-457c-b2e3-bbfab352768f-kube-api-access-ddgrp\") pod \"kube-state-metrics-0\" (UID: \"4c5266b5-29ce-457c-b2e3-bbfab352768f\") " pod="openstack/kube-state-metrics-0" Nov 25 07:31:50 crc kubenswrapper[5043]: I1125 07:31:50.872765 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddgrp\" (UniqueName: \"kubernetes.io/projected/4c5266b5-29ce-457c-b2e3-bbfab352768f-kube-api-access-ddgrp\") pod \"kube-state-metrics-0\" (UID: \"4c5266b5-29ce-457c-b2e3-bbfab352768f\") " pod="openstack/kube-state-metrics-0" Nov 25 07:31:50 crc kubenswrapper[5043]: I1125 07:31:50.926806 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5f4796f0-ec1b-4f62-bdad-9927841c80db","Type":"ContainerStarted","Data":"3be916abebf188127e2f0f68992f57c060d6aa909f912af80b15209bffc7384c"} Nov 25 07:31:50 crc kubenswrapper[5043]: I1125 07:31:50.931940 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddgrp\" (UniqueName: \"kubernetes.io/projected/4c5266b5-29ce-457c-b2e3-bbfab352768f-kube-api-access-ddgrp\") pod \"kube-state-metrics-0\" (UID: \"4c5266b5-29ce-457c-b2e3-bbfab352768f\") " pod="openstack/kube-state-metrics-0" Nov 25 07:31:51 crc kubenswrapper[5043]: I1125 07:31:51.021699 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.010180 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-pvbbc"] Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.012198 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pvbbc" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.014236 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.014953 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.015235 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-l87tx" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.025357 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-s57wr"] Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.026982 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-s57wr" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.037489 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pvbbc"] Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.045761 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-s57wr"] Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.121698 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4bd061b9-bc56-4e7f-b7eb-d12486d15712-var-run\") pod \"ovn-controller-pvbbc\" (UID: \"4bd061b9-bc56-4e7f-b7eb-d12486d15712\") " pod="openstack/ovn-controller-pvbbc" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.122078 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4bd061b9-bc56-4e7f-b7eb-d12486d15712-var-log-ovn\") pod \"ovn-controller-pvbbc\" (UID: \"4bd061b9-bc56-4e7f-b7eb-d12486d15712\") " pod="openstack/ovn-controller-pvbbc" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.122111 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrq88\" (UniqueName: \"kubernetes.io/projected/4bd061b9-bc56-4e7f-b7eb-d12486d15712-kube-api-access-vrq88\") pod \"ovn-controller-pvbbc\" (UID: \"4bd061b9-bc56-4e7f-b7eb-d12486d15712\") " pod="openstack/ovn-controller-pvbbc" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.122169 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bd061b9-bc56-4e7f-b7eb-d12486d15712-combined-ca-bundle\") pod \"ovn-controller-pvbbc\" (UID: \"4bd061b9-bc56-4e7f-b7eb-d12486d15712\") " pod="openstack/ovn-controller-pvbbc" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 
07:31:54.122225 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bd061b9-bc56-4e7f-b7eb-d12486d15712-ovn-controller-tls-certs\") pod \"ovn-controller-pvbbc\" (UID: \"4bd061b9-bc56-4e7f-b7eb-d12486d15712\") " pod="openstack/ovn-controller-pvbbc" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.122284 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gfvf\" (UniqueName: \"kubernetes.io/projected/8cb5a8c6-ad9b-4b36-8766-e67dd27797f7-kube-api-access-8gfvf\") pod \"ovn-controller-ovs-s57wr\" (UID: \"8cb5a8c6-ad9b-4b36-8766-e67dd27797f7\") " pod="openstack/ovn-controller-ovs-s57wr" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.122355 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8cb5a8c6-ad9b-4b36-8766-e67dd27797f7-var-run\") pod \"ovn-controller-ovs-s57wr\" (UID: \"8cb5a8c6-ad9b-4b36-8766-e67dd27797f7\") " pod="openstack/ovn-controller-ovs-s57wr" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.122392 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4bd061b9-bc56-4e7f-b7eb-d12486d15712-scripts\") pod \"ovn-controller-pvbbc\" (UID: \"4bd061b9-bc56-4e7f-b7eb-d12486d15712\") " pod="openstack/ovn-controller-pvbbc" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.122419 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4bd061b9-bc56-4e7f-b7eb-d12486d15712-var-run-ovn\") pod \"ovn-controller-pvbbc\" (UID: \"4bd061b9-bc56-4e7f-b7eb-d12486d15712\") " pod="openstack/ovn-controller-pvbbc" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.122440 5043 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8cb5a8c6-ad9b-4b36-8766-e67dd27797f7-var-lib\") pod \"ovn-controller-ovs-s57wr\" (UID: \"8cb5a8c6-ad9b-4b36-8766-e67dd27797f7\") " pod="openstack/ovn-controller-ovs-s57wr" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.122467 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8cb5a8c6-ad9b-4b36-8766-e67dd27797f7-etc-ovs\") pod \"ovn-controller-ovs-s57wr\" (UID: \"8cb5a8c6-ad9b-4b36-8766-e67dd27797f7\") " pod="openstack/ovn-controller-ovs-s57wr" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.122492 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8cb5a8c6-ad9b-4b36-8766-e67dd27797f7-var-log\") pod \"ovn-controller-ovs-s57wr\" (UID: \"8cb5a8c6-ad9b-4b36-8766-e67dd27797f7\") " pod="openstack/ovn-controller-ovs-s57wr" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.122530 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cb5a8c6-ad9b-4b36-8766-e67dd27797f7-scripts\") pod \"ovn-controller-ovs-s57wr\" (UID: \"8cb5a8c6-ad9b-4b36-8766-e67dd27797f7\") " pod="openstack/ovn-controller-ovs-s57wr" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.223545 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4bd061b9-bc56-4e7f-b7eb-d12486d15712-scripts\") pod \"ovn-controller-pvbbc\" (UID: \"4bd061b9-bc56-4e7f-b7eb-d12486d15712\") " pod="openstack/ovn-controller-pvbbc" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.223587 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" 
(UniqueName: \"kubernetes.io/host-path/8cb5a8c6-ad9b-4b36-8766-e67dd27797f7-var-lib\") pod \"ovn-controller-ovs-s57wr\" (UID: \"8cb5a8c6-ad9b-4b36-8766-e67dd27797f7\") " pod="openstack/ovn-controller-ovs-s57wr" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.223608 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4bd061b9-bc56-4e7f-b7eb-d12486d15712-var-run-ovn\") pod \"ovn-controller-pvbbc\" (UID: \"4bd061b9-bc56-4e7f-b7eb-d12486d15712\") " pod="openstack/ovn-controller-pvbbc" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.223640 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8cb5a8c6-ad9b-4b36-8766-e67dd27797f7-etc-ovs\") pod \"ovn-controller-ovs-s57wr\" (UID: \"8cb5a8c6-ad9b-4b36-8766-e67dd27797f7\") " pod="openstack/ovn-controller-ovs-s57wr" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.223659 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8cb5a8c6-ad9b-4b36-8766-e67dd27797f7-var-log\") pod \"ovn-controller-ovs-s57wr\" (UID: \"8cb5a8c6-ad9b-4b36-8766-e67dd27797f7\") " pod="openstack/ovn-controller-ovs-s57wr" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.223683 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cb5a8c6-ad9b-4b36-8766-e67dd27797f7-scripts\") pod \"ovn-controller-ovs-s57wr\" (UID: \"8cb5a8c6-ad9b-4b36-8766-e67dd27797f7\") " pod="openstack/ovn-controller-ovs-s57wr" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.223724 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4bd061b9-bc56-4e7f-b7eb-d12486d15712-var-run\") pod \"ovn-controller-pvbbc\" (UID: \"4bd061b9-bc56-4e7f-b7eb-d12486d15712\") 
" pod="openstack/ovn-controller-pvbbc" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.223755 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4bd061b9-bc56-4e7f-b7eb-d12486d15712-var-log-ovn\") pod \"ovn-controller-pvbbc\" (UID: \"4bd061b9-bc56-4e7f-b7eb-d12486d15712\") " pod="openstack/ovn-controller-pvbbc" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.223777 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrq88\" (UniqueName: \"kubernetes.io/projected/4bd061b9-bc56-4e7f-b7eb-d12486d15712-kube-api-access-vrq88\") pod \"ovn-controller-pvbbc\" (UID: \"4bd061b9-bc56-4e7f-b7eb-d12486d15712\") " pod="openstack/ovn-controller-pvbbc" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.223823 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bd061b9-bc56-4e7f-b7eb-d12486d15712-combined-ca-bundle\") pod \"ovn-controller-pvbbc\" (UID: \"4bd061b9-bc56-4e7f-b7eb-d12486d15712\") " pod="openstack/ovn-controller-pvbbc" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.223861 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bd061b9-bc56-4e7f-b7eb-d12486d15712-ovn-controller-tls-certs\") pod \"ovn-controller-pvbbc\" (UID: \"4bd061b9-bc56-4e7f-b7eb-d12486d15712\") " pod="openstack/ovn-controller-pvbbc" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.223882 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gfvf\" (UniqueName: \"kubernetes.io/projected/8cb5a8c6-ad9b-4b36-8766-e67dd27797f7-kube-api-access-8gfvf\") pod \"ovn-controller-ovs-s57wr\" (UID: \"8cb5a8c6-ad9b-4b36-8766-e67dd27797f7\") " pod="openstack/ovn-controller-ovs-s57wr" Nov 25 07:31:54 crc kubenswrapper[5043]: 
I1125 07:31:54.223913 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8cb5a8c6-ad9b-4b36-8766-e67dd27797f7-var-run\") pod \"ovn-controller-ovs-s57wr\" (UID: \"8cb5a8c6-ad9b-4b36-8766-e67dd27797f7\") " pod="openstack/ovn-controller-ovs-s57wr" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.225040 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8cb5a8c6-ad9b-4b36-8766-e67dd27797f7-var-run\") pod \"ovn-controller-ovs-s57wr\" (UID: \"8cb5a8c6-ad9b-4b36-8766-e67dd27797f7\") " pod="openstack/ovn-controller-ovs-s57wr" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.225785 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4bd061b9-bc56-4e7f-b7eb-d12486d15712-scripts\") pod \"ovn-controller-pvbbc\" (UID: \"4bd061b9-bc56-4e7f-b7eb-d12486d15712\") " pod="openstack/ovn-controller-pvbbc" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.226075 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8cb5a8c6-ad9b-4b36-8766-e67dd27797f7-var-lib\") pod \"ovn-controller-ovs-s57wr\" (UID: \"8cb5a8c6-ad9b-4b36-8766-e67dd27797f7\") " pod="openstack/ovn-controller-ovs-s57wr" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.226200 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4bd061b9-bc56-4e7f-b7eb-d12486d15712-var-run-ovn\") pod \"ovn-controller-pvbbc\" (UID: \"4bd061b9-bc56-4e7f-b7eb-d12486d15712\") " pod="openstack/ovn-controller-pvbbc" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.226220 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4bd061b9-bc56-4e7f-b7eb-d12486d15712-var-run\") pod 
\"ovn-controller-pvbbc\" (UID: \"4bd061b9-bc56-4e7f-b7eb-d12486d15712\") " pod="openstack/ovn-controller-pvbbc" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.226246 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8cb5a8c6-ad9b-4b36-8766-e67dd27797f7-var-log\") pod \"ovn-controller-ovs-s57wr\" (UID: \"8cb5a8c6-ad9b-4b36-8766-e67dd27797f7\") " pod="openstack/ovn-controller-ovs-s57wr" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.226383 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4bd061b9-bc56-4e7f-b7eb-d12486d15712-var-log-ovn\") pod \"ovn-controller-pvbbc\" (UID: \"4bd061b9-bc56-4e7f-b7eb-d12486d15712\") " pod="openstack/ovn-controller-pvbbc" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.226523 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cb5a8c6-ad9b-4b36-8766-e67dd27797f7-scripts\") pod \"ovn-controller-ovs-s57wr\" (UID: \"8cb5a8c6-ad9b-4b36-8766-e67dd27797f7\") " pod="openstack/ovn-controller-ovs-s57wr" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.226533 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8cb5a8c6-ad9b-4b36-8766-e67dd27797f7-etc-ovs\") pod \"ovn-controller-ovs-s57wr\" (UID: \"8cb5a8c6-ad9b-4b36-8766-e67dd27797f7\") " pod="openstack/ovn-controller-ovs-s57wr" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.229552 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4bd061b9-bc56-4e7f-b7eb-d12486d15712-ovn-controller-tls-certs\") pod \"ovn-controller-pvbbc\" (UID: \"4bd061b9-bc56-4e7f-b7eb-d12486d15712\") " pod="openstack/ovn-controller-pvbbc" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.240574 5043 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bd061b9-bc56-4e7f-b7eb-d12486d15712-combined-ca-bundle\") pod \"ovn-controller-pvbbc\" (UID: \"4bd061b9-bc56-4e7f-b7eb-d12486d15712\") " pod="openstack/ovn-controller-pvbbc" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.241269 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gfvf\" (UniqueName: \"kubernetes.io/projected/8cb5a8c6-ad9b-4b36-8766-e67dd27797f7-kube-api-access-8gfvf\") pod \"ovn-controller-ovs-s57wr\" (UID: \"8cb5a8c6-ad9b-4b36-8766-e67dd27797f7\") " pod="openstack/ovn-controller-ovs-s57wr" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.241465 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrq88\" (UniqueName: \"kubernetes.io/projected/4bd061b9-bc56-4e7f-b7eb-d12486d15712-kube-api-access-vrq88\") pod \"ovn-controller-pvbbc\" (UID: \"4bd061b9-bc56-4e7f-b7eb-d12486d15712\") " pod="openstack/ovn-controller-pvbbc" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.330720 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pvbbc" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.348311 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-s57wr" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.935102 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.936675 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.943222 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-5xhsx" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.943250 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.943239 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.943304 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.943361 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 25 07:31:54 crc kubenswrapper[5043]: I1125 07:31:54.949391 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.036091 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/568f6d22-7338-4a78-83ac-79125bd64fb9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.036168 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.036210 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-w92g5\" (UniqueName: \"kubernetes.io/projected/568f6d22-7338-4a78-83ac-79125bd64fb9-kube-api-access-w92g5\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.036235 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/568f6d22-7338-4a78-83ac-79125bd64fb9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.036256 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/568f6d22-7338-4a78-83ac-79125bd64fb9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.036655 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568f6d22-7338-4a78-83ac-79125bd64fb9-config\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.036723 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/568f6d22-7338-4a78-83ac-79125bd64fb9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.036819 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/568f6d22-7338-4a78-83ac-79125bd64fb9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.138531 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568f6d22-7338-4a78-83ac-79125bd64fb9-config\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.138576 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/568f6d22-7338-4a78-83ac-79125bd64fb9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.138608 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/568f6d22-7338-4a78-83ac-79125bd64fb9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.138668 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/568f6d22-7338-4a78-83ac-79125bd64fb9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.138703 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") " 
pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.138808 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w92g5\" (UniqueName: \"kubernetes.io/projected/568f6d22-7338-4a78-83ac-79125bd64fb9-kube-api-access-w92g5\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.138847 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/568f6d22-7338-4a78-83ac-79125bd64fb9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.138873 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/568f6d22-7338-4a78-83ac-79125bd64fb9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.139340 5043 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.139595 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568f6d22-7338-4a78-83ac-79125bd64fb9-config\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.140137 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/568f6d22-7338-4a78-83ac-79125bd64fb9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.140258 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/568f6d22-7338-4a78-83ac-79125bd64fb9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.142428 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/568f6d22-7338-4a78-83ac-79125bd64fb9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.158694 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/568f6d22-7338-4a78-83ac-79125bd64fb9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.159243 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w92g5\" (UniqueName: \"kubernetes.io/projected/568f6d22-7338-4a78-83ac-79125bd64fb9-kube-api-access-w92g5\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.159409 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/568f6d22-7338-4a78-83ac-79125bd64fb9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") " 
pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.172747 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"568f6d22-7338-4a78-83ac-79125bd64fb9\") " pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:55 crc kubenswrapper[5043]: I1125 07:31:55.255875 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.008798 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.011211 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.014562 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-smg7n" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.014992 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.015252 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.017715 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.033342 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.086647 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/610ebd16-bde0-4b4b-acf4-6d15e0324fd6-combined-ca-bundle\") pod 
\"ovsdbserver-sb-0\" (UID: \"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.086694 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpkjf\" (UniqueName: \"kubernetes.io/projected/610ebd16-bde0-4b4b-acf4-6d15e0324fd6-kube-api-access-jpkjf\") pod \"ovsdbserver-sb-0\" (UID: \"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.086731 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/610ebd16-bde0-4b4b-acf4-6d15e0324fd6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.086756 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/610ebd16-bde0-4b4b-acf4-6d15e0324fd6-config\") pod \"ovsdbserver-sb-0\" (UID: \"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.086777 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/610ebd16-bde0-4b4b-acf4-6d15e0324fd6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.086878 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") " 
pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.087072 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/610ebd16-bde0-4b4b-acf4-6d15e0324fd6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.087203 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/610ebd16-bde0-4b4b-acf4-6d15e0324fd6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.190035 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/610ebd16-bde0-4b4b-acf4-6d15e0324fd6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.190094 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpkjf\" (UniqueName: \"kubernetes.io/projected/610ebd16-bde0-4b4b-acf4-6d15e0324fd6-kube-api-access-jpkjf\") pod \"ovsdbserver-sb-0\" (UID: \"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.190140 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/610ebd16-bde0-4b4b-acf4-6d15e0324fd6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.190193 
5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/610ebd16-bde0-4b4b-acf4-6d15e0324fd6-config\") pod \"ovsdbserver-sb-0\" (UID: \"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.190222 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/610ebd16-bde0-4b4b-acf4-6d15e0324fd6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.190246 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.190322 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/610ebd16-bde0-4b4b-acf4-6d15e0324fd6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.190382 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/610ebd16-bde0-4b4b-acf4-6d15e0324fd6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.191507 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/610ebd16-bde0-4b4b-acf4-6d15e0324fd6-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.191817 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/610ebd16-bde0-4b4b-acf4-6d15e0324fd6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.192020 5043 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.195313 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/610ebd16-bde0-4b4b-acf4-6d15e0324fd6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.196398 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/610ebd16-bde0-4b4b-acf4-6d15e0324fd6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.202449 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/610ebd16-bde0-4b4b-acf4-6d15e0324fd6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.206373 5043 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/610ebd16-bde0-4b4b-acf4-6d15e0324fd6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.207330 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpkjf\" (UniqueName: \"kubernetes.io/projected/610ebd16-bde0-4b4b-acf4-6d15e0324fd6-kube-api-access-jpkjf\") pod \"ovsdbserver-sb-0\" (UID: \"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.214364 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"610ebd16-bde0-4b4b-acf4-6d15e0324fd6\") " pod="openstack/ovsdbserver-sb-0" Nov 25 07:31:58 crc kubenswrapper[5043]: I1125 07:31:58.342219 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 07:32:01 crc kubenswrapper[5043]: E1125 07:32:01.396924 5043 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 25 07:32:01 crc kubenswrapper[5043]: E1125 07:32:01.397580 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6lrp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProb
e:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6486446b9f-45dtg_openstack(8a9a5bed-e2a6-4bda-a395-1f82840192a4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 07:32:01 crc kubenswrapper[5043]: E1125 07:32:01.398890 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6486446b9f-45dtg" podUID="8a9a5bed-e2a6-4bda-a395-1f82840192a4" Nov 25 07:32:01 crc kubenswrapper[5043]: E1125 07:32:01.426550 5043 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 25 07:32:01 crc kubenswrapper[5043]: E1125 07:32:01.426827 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces 
--listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2g4p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6584b49599-q9wmf_openstack(1b7f2729-ce28-4493-9d03-ecdbf8e37425): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Nov 25 07:32:01 crc kubenswrapper[5043]: E1125 07:32:01.428334 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6584b49599-q9wmf" podUID="1b7f2729-ce28-4493-9d03-ecdbf8e37425" Nov 25 07:32:01 crc kubenswrapper[5043]: E1125 07:32:01.462554 5043 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 25 07:32:01 crc kubenswrapper[5043]: E1125 07:32:01.462794 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rtj7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7c6d9948dc-vggdj_openstack(7efc89bf-d04b-4b9e-a86e-3eab0f122fa9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 07:32:01 crc kubenswrapper[5043]: E1125 07:32:01.464159 5043 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7c6d9948dc-vggdj" podUID="7efc89bf-d04b-4b9e-a86e-3eab0f122fa9" Nov 25 07:32:01 crc kubenswrapper[5043]: E1125 07:32:01.472511 5043 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 25 07:32:01 crc kubenswrapper[5043]: E1125 07:32:01.472738 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-47v7w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7bdd77c89-m8pcg_openstack(9cd70907-0a1c-4874-bf1b-54e142d543d2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 07:32:01 crc kubenswrapper[5043]: E1125 07:32:01.480055 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-7bdd77c89-m8pcg" podUID="9cd70907-0a1c-4874-bf1b-54e142d543d2" Nov 25 07:32:01 crc kubenswrapper[5043]: I1125 07:32:01.995085 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.008375 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.016096 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.023483 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.045923 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"961b9ca6-9248-485e-9361-1e9bc78e9058","Type":"ContainerStarted","Data":"7ee55161d02fd2a2e1ae84a11c77e3d5bea9fc19e569e1e124d5471ba88bb3cb"} Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.048747 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d61213dd-2002-44b6-8904-21c0a754ae66","Type":"ContainerStarted","Data":"c4aecb1b8b6e8eb92ce822b4563ba79c5fcfde9840c43c87e738769a9e5b8f5b"} Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.049514 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3077d275-063c-4a4d-97bf-b1b006e32f6f","Type":"ContainerStarted","Data":"6220079688e7ce44511da762811d0633a9746eac91f185f25ef20d14b3e56256"} Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.050589 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4c5266b5-29ce-457c-b2e3-bbfab352768f","Type":"ContainerStarted","Data":"e64269df9dc9ff35988c93971a1a113b35103f08ff45f1a16140643def4e0e2d"} Nov 25 07:32:02 crc kubenswrapper[5043]: E1125 07:32:02.062661 5043 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba\\\"\"" pod="openstack/dnsmasq-dns-6486446b9f-45dtg" podUID="8a9a5bed-e2a6-4bda-a395-1f82840192a4" Nov 25 07:32:02 crc kubenswrapper[5043]: E1125 07:32:02.062869 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba\\\"\"" pod="openstack/dnsmasq-dns-7c6d9948dc-vggdj" podUID="7efc89bf-d04b-4b9e-a86e-3eab0f122fa9" Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.152734 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.223388 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pvbbc"] Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.356150 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.477415 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-m8pcg" Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.482477 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-q9wmf" Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.561480 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cd70907-0a1c-4874-bf1b-54e142d543d2-config\") pod \"9cd70907-0a1c-4874-bf1b-54e142d543d2\" (UID: \"9cd70907-0a1c-4874-bf1b-54e142d543d2\") " Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.561531 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b7f2729-ce28-4493-9d03-ecdbf8e37425-config\") pod \"1b7f2729-ce28-4493-9d03-ecdbf8e37425\" (UID: \"1b7f2729-ce28-4493-9d03-ecdbf8e37425\") " Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.561570 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g4p2\" (UniqueName: \"kubernetes.io/projected/1b7f2729-ce28-4493-9d03-ecdbf8e37425-kube-api-access-2g4p2\") pod \"1b7f2729-ce28-4493-9d03-ecdbf8e37425\" (UID: \"1b7f2729-ce28-4493-9d03-ecdbf8e37425\") " Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.561625 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47v7w\" (UniqueName: \"kubernetes.io/projected/9cd70907-0a1c-4874-bf1b-54e142d543d2-kube-api-access-47v7w\") pod \"9cd70907-0a1c-4874-bf1b-54e142d543d2\" (UID: \"9cd70907-0a1c-4874-bf1b-54e142d543d2\") " Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.561686 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b7f2729-ce28-4493-9d03-ecdbf8e37425-dns-svc\") pod \"1b7f2729-ce28-4493-9d03-ecdbf8e37425\" (UID: \"1b7f2729-ce28-4493-9d03-ecdbf8e37425\") " Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.562869 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/1b7f2729-ce28-4493-9d03-ecdbf8e37425-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1b7f2729-ce28-4493-9d03-ecdbf8e37425" (UID: "1b7f2729-ce28-4493-9d03-ecdbf8e37425"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.562882 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b7f2729-ce28-4493-9d03-ecdbf8e37425-config" (OuterVolumeSpecName: "config") pod "1b7f2729-ce28-4493-9d03-ecdbf8e37425" (UID: "1b7f2729-ce28-4493-9d03-ecdbf8e37425"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.563700 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cd70907-0a1c-4874-bf1b-54e142d543d2-config" (OuterVolumeSpecName: "config") pod "9cd70907-0a1c-4874-bf1b-54e142d543d2" (UID: "9cd70907-0a1c-4874-bf1b-54e142d543d2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.663039 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b7f2729-ce28-4493-9d03-ecdbf8e37425-kube-api-access-2g4p2" (OuterVolumeSpecName: "kube-api-access-2g4p2") pod "1b7f2729-ce28-4493-9d03-ecdbf8e37425" (UID: "1b7f2729-ce28-4493-9d03-ecdbf8e37425"). InnerVolumeSpecName "kube-api-access-2g4p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.663223 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cd70907-0a1c-4874-bf1b-54e142d543d2-kube-api-access-47v7w" (OuterVolumeSpecName: "kube-api-access-47v7w") pod "9cd70907-0a1c-4874-bf1b-54e142d543d2" (UID: "9cd70907-0a1c-4874-bf1b-54e142d543d2"). InnerVolumeSpecName "kube-api-access-47v7w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.664210 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cd70907-0a1c-4874-bf1b-54e142d543d2-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.664241 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b7f2729-ce28-4493-9d03-ecdbf8e37425-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.664251 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g4p2\" (UniqueName: \"kubernetes.io/projected/1b7f2729-ce28-4493-9d03-ecdbf8e37425-kube-api-access-2g4p2\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.664264 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47v7w\" (UniqueName: \"kubernetes.io/projected/9cd70907-0a1c-4874-bf1b-54e142d543d2-kube-api-access-47v7w\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:02 crc kubenswrapper[5043]: I1125 07:32:02.664273 5043 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b7f2729-ce28-4493-9d03-ecdbf8e37425-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:03 crc kubenswrapper[5043]: I1125 07:32:03.057006 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a22a0679-f2ea-46b8-88f5-d010717699d1","Type":"ContainerStarted","Data":"4c50afa15507489d76c34ef822b0b09486761bf72394879be214739c74be0fc2"} Nov 25 07:32:03 crc kubenswrapper[5043]: I1125 07:32:03.058899 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"610ebd16-bde0-4b4b-acf4-6d15e0324fd6","Type":"ContainerStarted","Data":"99278a6c636006056b0ed5437b1129ebcf19426fc040d58fd63161b0cc24851f"} Nov 25 
07:32:03 crc kubenswrapper[5043]: I1125 07:32:03.060048 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pvbbc" event={"ID":"4bd061b9-bc56-4e7f-b7eb-d12486d15712","Type":"ContainerStarted","Data":"fd4662196ba778e720885336cb992b3943b93c72f6bb657154a517ca5ed545a3"} Nov 25 07:32:03 crc kubenswrapper[5043]: I1125 07:32:03.061457 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6584b49599-q9wmf" event={"ID":"1b7f2729-ce28-4493-9d03-ecdbf8e37425","Type":"ContainerDied","Data":"f1a5bffd8c085a2e3bd9877f7b7e02b1948980f48420bc06641068f9569b9693"} Nov 25 07:32:03 crc kubenswrapper[5043]: I1125 07:32:03.061468 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-q9wmf" Nov 25 07:32:03 crc kubenswrapper[5043]: I1125 07:32:03.062421 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdd77c89-m8pcg" event={"ID":"9cd70907-0a1c-4874-bf1b-54e142d543d2","Type":"ContainerDied","Data":"81cbae2871d8e6518faaa34abd7f380cb5f638ee3f0fa8d17cb81e063c0cdd02"} Nov 25 07:32:03 crc kubenswrapper[5043]: I1125 07:32:03.062487 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-m8pcg" Nov 25 07:32:03 crc kubenswrapper[5043]: I1125 07:32:03.107916 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-m8pcg"] Nov 25 07:32:03 crc kubenswrapper[5043]: I1125 07:32:03.114936 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-m8pcg"] Nov 25 07:32:03 crc kubenswrapper[5043]: I1125 07:32:03.151247 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-q9wmf"] Nov 25 07:32:03 crc kubenswrapper[5043]: I1125 07:32:03.156530 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-q9wmf"] Nov 25 07:32:03 crc kubenswrapper[5043]: I1125 07:32:03.301470 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 07:32:03 crc kubenswrapper[5043]: W1125 07:32:03.312233 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod568f6d22_7338_4a78_83ac_79125bd64fb9.slice/crio-7505954dce931101c0382ae3f9458be152ddce73d1aabac56d500163da80ac8a WatchSource:0}: Error finding container 7505954dce931101c0382ae3f9458be152ddce73d1aabac56d500163da80ac8a: Status 404 returned error can't find the container with id 7505954dce931101c0382ae3f9458be152ddce73d1aabac56d500163da80ac8a Nov 25 07:32:03 crc kubenswrapper[5043]: I1125 07:32:03.942545 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-s57wr"] Nov 25 07:32:04 crc kubenswrapper[5043]: I1125 07:32:04.080089 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"568f6d22-7338-4a78-83ac-79125bd64fb9","Type":"ContainerStarted","Data":"7505954dce931101c0382ae3f9458be152ddce73d1aabac56d500163da80ac8a"} Nov 25 07:32:04 crc kubenswrapper[5043]: I1125 07:32:04.083321 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-s57wr" event={"ID":"8cb5a8c6-ad9b-4b36-8766-e67dd27797f7","Type":"ContainerStarted","Data":"ba2855feca7de6a21c17fcc225047a7b2e7dfd42a6483acbc683ef6d714f488a"} Nov 25 07:32:04 crc kubenswrapper[5043]: I1125 07:32:04.973677 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b7f2729-ce28-4493-9d03-ecdbf8e37425" path="/var/lib/kubelet/pods/1b7f2729-ce28-4493-9d03-ecdbf8e37425/volumes" Nov 25 07:32:04 crc kubenswrapper[5043]: I1125 07:32:04.974191 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cd70907-0a1c-4874-bf1b-54e142d543d2" path="/var/lib/kubelet/pods/9cd70907-0a1c-4874-bf1b-54e142d543d2/volumes" Nov 25 07:32:05 crc kubenswrapper[5043]: I1125 07:32:05.091906 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d61213dd-2002-44b6-8904-21c0a754ae66","Type":"ContainerStarted","Data":"c28319fb13ea4ff76aa875432a39af443bf02c9985bfe22b166b3cfde0e83ea8"} Nov 25 07:32:05 crc kubenswrapper[5043]: I1125 07:32:05.098064 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5f4796f0-ec1b-4f62-bdad-9927841c80db","Type":"ContainerStarted","Data":"70089bdd1b0f795e87c83919ae24ce5252b461dfa8f392e8b428fab83c5a3a9b"} Nov 25 07:32:14 crc kubenswrapper[5043]: I1125 07:32:14.180687 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pvbbc" event={"ID":"4bd061b9-bc56-4e7f-b7eb-d12486d15712","Type":"ContainerStarted","Data":"a0511ce1fc214bd8ab152e5b7dcc6a62b45bbee74a468aeb81a740bda9237a52"} Nov 25 07:32:14 crc kubenswrapper[5043]: I1125 07:32:14.181309 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-pvbbc" Nov 25 07:32:14 crc kubenswrapper[5043]: I1125 07:32:14.183453 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"a22a0679-f2ea-46b8-88f5-d010717699d1","Type":"ContainerStarted","Data":"504532805f60cfeafda61fef8c47fafbc73a0d7ada613d52cc1708c6691d0195"} Nov 25 07:32:14 crc kubenswrapper[5043]: I1125 07:32:14.185206 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"961b9ca6-9248-485e-9361-1e9bc78e9058","Type":"ContainerStarted","Data":"6a86ecc6c47d8d7bdd71304e1a9285ffd4d13bd2402f8f9e7c23d893fbbd6e68"} Nov 25 07:32:14 crc kubenswrapper[5043]: I1125 07:32:14.188461 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"610ebd16-bde0-4b4b-acf4-6d15e0324fd6","Type":"ContainerStarted","Data":"9e55d1091ccdcf5fe4fd3511632fb5994940e9072366745af8cde8b73b528110"} Nov 25 07:32:14 crc kubenswrapper[5043]: I1125 07:32:14.190543 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4c5266b5-29ce-457c-b2e3-bbfab352768f","Type":"ContainerStarted","Data":"8771dc0e200faac46b9859b23bc4c875acc2323edddb5ffdfee09961c0d5d9fd"} Nov 25 07:32:14 crc kubenswrapper[5043]: I1125 07:32:14.190655 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 07:32:14 crc kubenswrapper[5043]: I1125 07:32:14.192273 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"568f6d22-7338-4a78-83ac-79125bd64fb9","Type":"ContainerStarted","Data":"b1e5f274f8bc216d286bc7c840fdf5f7b1b9922e3a2730b2c1bc6af4e177d163"} Nov 25 07:32:14 crc kubenswrapper[5043]: I1125 07:32:14.194132 5043 generic.go:334] "Generic (PLEG): container finished" podID="8cb5a8c6-ad9b-4b36-8766-e67dd27797f7" containerID="7b18199d4e210bb04b8af7e7ae7602959b2bb2a0a076343c1133824465f939a2" exitCode=0 Nov 25 07:32:14 crc kubenswrapper[5043]: I1125 07:32:14.194173 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-s57wr" 
event={"ID":"8cb5a8c6-ad9b-4b36-8766-e67dd27797f7","Type":"ContainerDied","Data":"7b18199d4e210bb04b8af7e7ae7602959b2bb2a0a076343c1133824465f939a2"} Nov 25 07:32:14 crc kubenswrapper[5043]: I1125 07:32:14.202726 5043 generic.go:334] "Generic (PLEG): container finished" podID="7efc89bf-d04b-4b9e-a86e-3eab0f122fa9" containerID="2c9b40a6869093db3b3a5030823f467e5e94f393a707a9c8a47e9eaed3048dba" exitCode=0 Nov 25 07:32:14 crc kubenswrapper[5043]: I1125 07:32:14.202824 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6d9948dc-vggdj" event={"ID":"7efc89bf-d04b-4b9e-a86e-3eab0f122fa9","Type":"ContainerDied","Data":"2c9b40a6869093db3b3a5030823f467e5e94f393a707a9c8a47e9eaed3048dba"} Nov 25 07:32:14 crc kubenswrapper[5043]: I1125 07:32:14.204491 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-pvbbc" podStartSLOduration=10.309080222 podStartE2EDuration="21.204480889s" podCreationTimestamp="2025-11-25 07:31:53 +0000 UTC" firstStartedPulling="2025-11-25 07:32:02.27878289 +0000 UTC m=+986.446978611" lastFinishedPulling="2025-11-25 07:32:13.174183547 +0000 UTC m=+997.342379278" observedRunningTime="2025-11-25 07:32:14.199478205 +0000 UTC m=+998.367673926" watchObservedRunningTime="2025-11-25 07:32:14.204480889 +0000 UTC m=+998.372676610" Nov 25 07:32:14 crc kubenswrapper[5043]: I1125 07:32:14.226544 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3077d275-063c-4a4d-97bf-b1b006e32f6f","Type":"ContainerStarted","Data":"a86ffb90d3a332c305bdedb4ba9ba5bcc994c2eb540f98840497ba5a7d6fbd28"} Nov 25 07:32:14 crc kubenswrapper[5043]: I1125 07:32:14.227724 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 25 07:32:14 crc kubenswrapper[5043]: I1125 07:32:14.271306 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.912893185 
podStartE2EDuration="24.27128984s" podCreationTimestamp="2025-11-25 07:31:50 +0000 UTC" firstStartedPulling="2025-11-25 07:32:02.01382336 +0000 UTC m=+986.182019081" lastFinishedPulling="2025-11-25 07:32:13.372220005 +0000 UTC m=+997.540415736" observedRunningTime="2025-11-25 07:32:14.261336094 +0000 UTC m=+998.429531825" watchObservedRunningTime="2025-11-25 07:32:14.27128984 +0000 UTC m=+998.439485561" Nov 25 07:32:14 crc kubenswrapper[5043]: I1125 07:32:14.317581 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=16.469570803 podStartE2EDuration="26.31755851s" podCreationTimestamp="2025-11-25 07:31:48 +0000 UTC" firstStartedPulling="2025-11-25 07:32:02.013539562 +0000 UTC m=+986.181735283" lastFinishedPulling="2025-11-25 07:32:11.861527269 +0000 UTC m=+996.029722990" observedRunningTime="2025-11-25 07:32:14.312388431 +0000 UTC m=+998.480584152" watchObservedRunningTime="2025-11-25 07:32:14.31755851 +0000 UTC m=+998.485754231" Nov 25 07:32:15 crc kubenswrapper[5043]: I1125 07:32:15.236953 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-s57wr" event={"ID":"8cb5a8c6-ad9b-4b36-8766-e67dd27797f7","Type":"ContainerStarted","Data":"f1f63b711947cbe7a5d63d506a36e3f615634e7e9488fac997ec0bf12001f712"} Nov 25 07:32:15 crc kubenswrapper[5043]: I1125 07:32:15.239303 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6d9948dc-vggdj" event={"ID":"7efc89bf-d04b-4b9e-a86e-3eab0f122fa9","Type":"ContainerStarted","Data":"dd03b2b916b285e1885d226047dbd0f17d3fb63f0527c8743d13534a75710880"} Nov 25 07:32:15 crc kubenswrapper[5043]: I1125 07:32:15.983960 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c6d9948dc-vggdj" podStartSLOduration=4.270297523 podStartE2EDuration="32.983939599s" podCreationTimestamp="2025-11-25 07:31:43 +0000 UTC" firstStartedPulling="2025-11-25 07:31:44.737226107 +0000 
UTC m=+968.905421828" lastFinishedPulling="2025-11-25 07:32:13.450868173 +0000 UTC m=+997.619063904" observedRunningTime="2025-11-25 07:32:15.25747899 +0000 UTC m=+999.425674711" watchObservedRunningTime="2025-11-25 07:32:15.983939599 +0000 UTC m=+1000.152135320" Nov 25 07:32:17 crc kubenswrapper[5043]: I1125 07:32:17.258974 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"568f6d22-7338-4a78-83ac-79125bd64fb9","Type":"ContainerStarted","Data":"d265e30d223c439307f034ec9817bf53e20c99ed13060695224bc9c47ca6c1fa"} Nov 25 07:32:17 crc kubenswrapper[5043]: I1125 07:32:17.261960 5043 generic.go:334] "Generic (PLEG): container finished" podID="8a9a5bed-e2a6-4bda-a395-1f82840192a4" containerID="22b0395d941c7e19989d4e20aa47a23749b6bfc936c5bc392448013e79da8970" exitCode=0 Nov 25 07:32:17 crc kubenswrapper[5043]: I1125 07:32:17.262077 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-45dtg" event={"ID":"8a9a5bed-e2a6-4bda-a395-1f82840192a4","Type":"ContainerDied","Data":"22b0395d941c7e19989d4e20aa47a23749b6bfc936c5bc392448013e79da8970"} Nov 25 07:32:17 crc kubenswrapper[5043]: I1125 07:32:17.267453 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-s57wr" event={"ID":"8cb5a8c6-ad9b-4b36-8766-e67dd27797f7","Type":"ContainerStarted","Data":"56fc1ed244fab1806000b727b4404393b70c646e193b3925ec400abbea97d331"} Nov 25 07:32:17 crc kubenswrapper[5043]: I1125 07:32:17.267954 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-s57wr" Nov 25 07:32:17 crc kubenswrapper[5043]: I1125 07:32:17.268257 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-s57wr" Nov 25 07:32:17 crc kubenswrapper[5043]: I1125 07:32:17.273139 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"610ebd16-bde0-4b4b-acf4-6d15e0324fd6","Type":"ContainerStarted","Data":"479d638c11f38f9beda333406facdbadd966189c3087c1bb8e20ee8e13b9e6df"} Nov 25 07:32:17 crc kubenswrapper[5043]: I1125 07:32:17.276821 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:32:17 crc kubenswrapper[5043]: I1125 07:32:17.276889 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:32:17 crc kubenswrapper[5043]: I1125 07:32:17.276939 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 07:32:17 crc kubenswrapper[5043]: I1125 07:32:17.277694 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f48da9589e1ae5ed9bf24bc242ace441c8f2ff30315a460e91bdc63d89d037f"} pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 07:32:17 crc kubenswrapper[5043]: I1125 07:32:17.277806 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://6f48da9589e1ae5ed9bf24bc242ace441c8f2ff30315a460e91bdc63d89d037f" gracePeriod=600 Nov 25 07:32:17 crc kubenswrapper[5043]: I1125 07:32:17.294168 
5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.010893852 podStartE2EDuration="24.294141713s" podCreationTimestamp="2025-11-25 07:31:53 +0000 UTC" firstStartedPulling="2025-11-25 07:32:03.315581807 +0000 UTC m=+987.483777528" lastFinishedPulling="2025-11-25 07:32:16.598829668 +0000 UTC m=+1000.767025389" observedRunningTime="2025-11-25 07:32:17.292026576 +0000 UTC m=+1001.460222327" watchObservedRunningTime="2025-11-25 07:32:17.294141713 +0000 UTC m=+1001.462337464" Nov 25 07:32:17 crc kubenswrapper[5043]: I1125 07:32:17.334783 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=7.11124126 podStartE2EDuration="21.334760611s" podCreationTimestamp="2025-11-25 07:31:56 +0000 UTC" firstStartedPulling="2025-11-25 07:32:02.363523812 +0000 UTC m=+986.531719533" lastFinishedPulling="2025-11-25 07:32:16.587043163 +0000 UTC m=+1000.755238884" observedRunningTime="2025-11-25 07:32:17.324140706 +0000 UTC m=+1001.492336437" watchObservedRunningTime="2025-11-25 07:32:17.334760611 +0000 UTC m=+1001.502956332" Nov 25 07:32:17 crc kubenswrapper[5043]: I1125 07:32:17.381287 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-s57wr" podStartSLOduration=14.690417953 podStartE2EDuration="23.381266228s" podCreationTimestamp="2025-11-25 07:31:54 +0000 UTC" firstStartedPulling="2025-11-25 07:32:03.947657066 +0000 UTC m=+988.115852807" lastFinishedPulling="2025-11-25 07:32:12.638505361 +0000 UTC m=+996.806701082" observedRunningTime="2025-11-25 07:32:17.37503454 +0000 UTC m=+1001.543230261" watchObservedRunningTime="2025-11-25 07:32:17.381266228 +0000 UTC m=+1001.549461949" Nov 25 07:32:18 crc kubenswrapper[5043]: I1125 07:32:18.295414 5043 generic.go:334] "Generic (PLEG): container finished" podID="a22a0679-f2ea-46b8-88f5-d010717699d1" 
containerID="504532805f60cfeafda61fef8c47fafbc73a0d7ada613d52cc1708c6691d0195" exitCode=0 Nov 25 07:32:18 crc kubenswrapper[5043]: I1125 07:32:18.295468 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a22a0679-f2ea-46b8-88f5-d010717699d1","Type":"ContainerDied","Data":"504532805f60cfeafda61fef8c47fafbc73a0d7ada613d52cc1708c6691d0195"} Nov 25 07:32:18 crc kubenswrapper[5043]: I1125 07:32:18.300702 5043 generic.go:334] "Generic (PLEG): container finished" podID="961b9ca6-9248-485e-9361-1e9bc78e9058" containerID="6a86ecc6c47d8d7bdd71304e1a9285ffd4d13bd2402f8f9e7c23d893fbbd6e68" exitCode=0 Nov 25 07:32:18 crc kubenswrapper[5043]: I1125 07:32:18.300830 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"961b9ca6-9248-485e-9361-1e9bc78e9058","Type":"ContainerDied","Data":"6a86ecc6c47d8d7bdd71304e1a9285ffd4d13bd2402f8f9e7c23d893fbbd6e68"} Nov 25 07:32:18 crc kubenswrapper[5043]: I1125 07:32:18.305979 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-45dtg" event={"ID":"8a9a5bed-e2a6-4bda-a395-1f82840192a4","Type":"ContainerStarted","Data":"8bd46345a5c72bb71e35e890beb1e8cebdacef7c469cb1a478616755b5f07762"} Nov 25 07:32:18 crc kubenswrapper[5043]: I1125 07:32:18.306312 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6486446b9f-45dtg" Nov 25 07:32:18 crc kubenswrapper[5043]: I1125 07:32:18.318041 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="6f48da9589e1ae5ed9bf24bc242ace441c8f2ff30315a460e91bdc63d89d037f" exitCode=0 Nov 25 07:32:18 crc kubenswrapper[5043]: I1125 07:32:18.318660 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" 
event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"6f48da9589e1ae5ed9bf24bc242ace441c8f2ff30315a460e91bdc63d89d037f"} Nov 25 07:32:18 crc kubenswrapper[5043]: I1125 07:32:18.319001 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"296a5a98c9bf0bfd6085b02df2f0073364b7097e82673a35c0ff9f12b1b73d01"} Nov 25 07:32:18 crc kubenswrapper[5043]: I1125 07:32:18.319207 5043 scope.go:117] "RemoveContainer" containerID="775db4b9aa6c61b7085bc9862b445a04e41b9906b056014fba7881c8d0080c48" Nov 25 07:32:18 crc kubenswrapper[5043]: I1125 07:32:18.342696 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 25 07:32:18 crc kubenswrapper[5043]: I1125 07:32:18.389957 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6486446b9f-45dtg" podStartSLOduration=-9223372002.464836 podStartE2EDuration="34.38993993s" podCreationTimestamp="2025-11-25 07:31:44 +0000 UTC" firstStartedPulling="2025-11-25 07:31:45.059908985 +0000 UTC m=+969.228104706" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:32:18.386386416 +0000 UTC m=+1002.554582147" watchObservedRunningTime="2025-11-25 07:32:18.38993993 +0000 UTC m=+1002.558135651" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.124860 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.182274 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c6d9948dc-vggdj" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.183938 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c6d9948dc-vggdj" Nov 25 07:32:19 crc kubenswrapper[5043]: 
I1125 07:32:19.256516 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.323579 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.332441 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"a22a0679-f2ea-46b8-88f5-d010717699d1","Type":"ContainerStarted","Data":"a79fe6d8a50464f7cd1fbb02f87fb3080510a9650d27b93b6558ca0e53b721e6"} Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.338201 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"961b9ca6-9248-485e-9361-1e9bc78e9058","Type":"ContainerStarted","Data":"4886606bf413065c6138f0ecbc8bf020049c810962566604a246d8341fd2aed6"} Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.338733 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.343636 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.388283 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.392502 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.400388 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=21.40416671 podStartE2EDuration="32.400369609s" podCreationTimestamp="2025-11-25 07:31:47 +0000 UTC" firstStartedPulling="2025-11-25 07:32:02.179298614 +0000 UTC m=+986.347494325" 
lastFinishedPulling="2025-11-25 07:32:13.175501493 +0000 UTC m=+997.343697224" observedRunningTime="2025-11-25 07:32:19.375471752 +0000 UTC m=+1003.543667473" watchObservedRunningTime="2025-11-25 07:32:19.400369609 +0000 UTC m=+1003.568565330" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.402355 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=23.809499924 podStartE2EDuration="34.402348062s" podCreationTimestamp="2025-11-25 07:31:45 +0000 UTC" firstStartedPulling="2025-11-25 07:32:02.045655573 +0000 UTC m=+986.213851304" lastFinishedPulling="2025-11-25 07:32:12.638503721 +0000 UTC m=+996.806699442" observedRunningTime="2025-11-25 07:32:19.395149319 +0000 UTC m=+1003.563345060" watchObservedRunningTime="2025-11-25 07:32:19.402348062 +0000 UTC m=+1003.570543783" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.695273 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-45dtg"] Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.718192 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-p45r4"] Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.723899 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-p45r4" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.728427 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.737627 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-p45r4"] Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.752979 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65c78595c5-m4zfx"] Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.756750 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65c78595c5-m4zfx" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.760183 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.783198 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65c78595c5-m4zfx"] Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.797595 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0385bfa9-1877-41c1-b279-a371fcf6d511-ovsdbserver-nb\") pod \"dnsmasq-dns-65c78595c5-m4zfx\" (UID: \"0385bfa9-1877-41c1-b279-a371fcf6d511\") " pod="openstack/dnsmasq-dns-65c78595c5-m4zfx" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.797663 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0385bfa9-1877-41c1-b279-a371fcf6d511-config\") pod \"dnsmasq-dns-65c78595c5-m4zfx\" (UID: \"0385bfa9-1877-41c1-b279-a371fcf6d511\") " pod="openstack/dnsmasq-dns-65c78595c5-m4zfx" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.797705 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e5aec9-7403-47d2-ad0d-40765246ed38-combined-ca-bundle\") pod \"ovn-controller-metrics-p45r4\" (UID: \"b2e5aec9-7403-47d2-ad0d-40765246ed38\") " pod="openstack/ovn-controller-metrics-p45r4" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.797828 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tj2w\" (UniqueName: \"kubernetes.io/projected/0385bfa9-1877-41c1-b279-a371fcf6d511-kube-api-access-7tj2w\") pod \"dnsmasq-dns-65c78595c5-m4zfx\" (UID: \"0385bfa9-1877-41c1-b279-a371fcf6d511\") " 
pod="openstack/dnsmasq-dns-65c78595c5-m4zfx" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.797864 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b2e5aec9-7403-47d2-ad0d-40765246ed38-ovn-rundir\") pod \"ovn-controller-metrics-p45r4\" (UID: \"b2e5aec9-7403-47d2-ad0d-40765246ed38\") " pod="openstack/ovn-controller-metrics-p45r4" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.797914 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv24r\" (UniqueName: \"kubernetes.io/projected/b2e5aec9-7403-47d2-ad0d-40765246ed38-kube-api-access-zv24r\") pod \"ovn-controller-metrics-p45r4\" (UID: \"b2e5aec9-7403-47d2-ad0d-40765246ed38\") " pod="openstack/ovn-controller-metrics-p45r4" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.797979 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2e5aec9-7403-47d2-ad0d-40765246ed38-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-p45r4\" (UID: \"b2e5aec9-7403-47d2-ad0d-40765246ed38\") " pod="openstack/ovn-controller-metrics-p45r4" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.798033 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b2e5aec9-7403-47d2-ad0d-40765246ed38-ovs-rundir\") pod \"ovn-controller-metrics-p45r4\" (UID: \"b2e5aec9-7403-47d2-ad0d-40765246ed38\") " pod="openstack/ovn-controller-metrics-p45r4" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.798066 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0385bfa9-1877-41c1-b279-a371fcf6d511-dns-svc\") pod \"dnsmasq-dns-65c78595c5-m4zfx\" (UID: 
\"0385bfa9-1877-41c1-b279-a371fcf6d511\") " pod="openstack/dnsmasq-dns-65c78595c5-m4zfx" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.798092 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2e5aec9-7403-47d2-ad0d-40765246ed38-config\") pod \"ovn-controller-metrics-p45r4\" (UID: \"b2e5aec9-7403-47d2-ad0d-40765246ed38\") " pod="openstack/ovn-controller-metrics-p45r4" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.900064 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2e5aec9-7403-47d2-ad0d-40765246ed38-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-p45r4\" (UID: \"b2e5aec9-7403-47d2-ad0d-40765246ed38\") " pod="openstack/ovn-controller-metrics-p45r4" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.900148 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b2e5aec9-7403-47d2-ad0d-40765246ed38-ovs-rundir\") pod \"ovn-controller-metrics-p45r4\" (UID: \"b2e5aec9-7403-47d2-ad0d-40765246ed38\") " pod="openstack/ovn-controller-metrics-p45r4" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.900235 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0385bfa9-1877-41c1-b279-a371fcf6d511-dns-svc\") pod \"dnsmasq-dns-65c78595c5-m4zfx\" (UID: \"0385bfa9-1877-41c1-b279-a371fcf6d511\") " pod="openstack/dnsmasq-dns-65c78595c5-m4zfx" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.900281 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2e5aec9-7403-47d2-ad0d-40765246ed38-config\") pod \"ovn-controller-metrics-p45r4\" (UID: \"b2e5aec9-7403-47d2-ad0d-40765246ed38\") " 
pod="openstack/ovn-controller-metrics-p45r4" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.900337 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0385bfa9-1877-41c1-b279-a371fcf6d511-ovsdbserver-nb\") pod \"dnsmasq-dns-65c78595c5-m4zfx\" (UID: \"0385bfa9-1877-41c1-b279-a371fcf6d511\") " pod="openstack/dnsmasq-dns-65c78595c5-m4zfx" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.900365 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0385bfa9-1877-41c1-b279-a371fcf6d511-config\") pod \"dnsmasq-dns-65c78595c5-m4zfx\" (UID: \"0385bfa9-1877-41c1-b279-a371fcf6d511\") " pod="openstack/dnsmasq-dns-65c78595c5-m4zfx" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.900419 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e5aec9-7403-47d2-ad0d-40765246ed38-combined-ca-bundle\") pod \"ovn-controller-metrics-p45r4\" (UID: \"b2e5aec9-7403-47d2-ad0d-40765246ed38\") " pod="openstack/ovn-controller-metrics-p45r4" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.900517 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tj2w\" (UniqueName: \"kubernetes.io/projected/0385bfa9-1877-41c1-b279-a371fcf6d511-kube-api-access-7tj2w\") pod \"dnsmasq-dns-65c78595c5-m4zfx\" (UID: \"0385bfa9-1877-41c1-b279-a371fcf6d511\") " pod="openstack/dnsmasq-dns-65c78595c5-m4zfx" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.900546 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b2e5aec9-7403-47d2-ad0d-40765246ed38-ovn-rundir\") pod \"ovn-controller-metrics-p45r4\" (UID: \"b2e5aec9-7403-47d2-ad0d-40765246ed38\") " pod="openstack/ovn-controller-metrics-p45r4" Nov 25 
07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.900639 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv24r\" (UniqueName: \"kubernetes.io/projected/b2e5aec9-7403-47d2-ad0d-40765246ed38-kube-api-access-zv24r\") pod \"ovn-controller-metrics-p45r4\" (UID: \"b2e5aec9-7403-47d2-ad0d-40765246ed38\") " pod="openstack/ovn-controller-metrics-p45r4" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.902530 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b2e5aec9-7403-47d2-ad0d-40765246ed38-ovn-rundir\") pod \"ovn-controller-metrics-p45r4\" (UID: \"b2e5aec9-7403-47d2-ad0d-40765246ed38\") " pod="openstack/ovn-controller-metrics-p45r4" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.903569 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b2e5aec9-7403-47d2-ad0d-40765246ed38-ovs-rundir\") pod \"ovn-controller-metrics-p45r4\" (UID: \"b2e5aec9-7403-47d2-ad0d-40765246ed38\") " pod="openstack/ovn-controller-metrics-p45r4" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.903593 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0385bfa9-1877-41c1-b279-a371fcf6d511-config\") pod \"dnsmasq-dns-65c78595c5-m4zfx\" (UID: \"0385bfa9-1877-41c1-b279-a371fcf6d511\") " pod="openstack/dnsmasq-dns-65c78595c5-m4zfx" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.905290 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2e5aec9-7403-47d2-ad0d-40765246ed38-config\") pod \"ovn-controller-metrics-p45r4\" (UID: \"b2e5aec9-7403-47d2-ad0d-40765246ed38\") " pod="openstack/ovn-controller-metrics-p45r4" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.906043 5043 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0385bfa9-1877-41c1-b279-a371fcf6d511-ovsdbserver-nb\") pod \"dnsmasq-dns-65c78595c5-m4zfx\" (UID: \"0385bfa9-1877-41c1-b279-a371fcf6d511\") " pod="openstack/dnsmasq-dns-65c78595c5-m4zfx" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.906301 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0385bfa9-1877-41c1-b279-a371fcf6d511-dns-svc\") pod \"dnsmasq-dns-65c78595c5-m4zfx\" (UID: \"0385bfa9-1877-41c1-b279-a371fcf6d511\") " pod="openstack/dnsmasq-dns-65c78595c5-m4zfx" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.914840 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2e5aec9-7403-47d2-ad0d-40765246ed38-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-p45r4\" (UID: \"b2e5aec9-7403-47d2-ad0d-40765246ed38\") " pod="openstack/ovn-controller-metrics-p45r4" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.915295 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e5aec9-7403-47d2-ad0d-40765246ed38-combined-ca-bundle\") pod \"ovn-controller-metrics-p45r4\" (UID: \"b2e5aec9-7403-47d2-ad0d-40765246ed38\") " pod="openstack/ovn-controller-metrics-p45r4" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.920227 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tj2w\" (UniqueName: \"kubernetes.io/projected/0385bfa9-1877-41c1-b279-a371fcf6d511-kube-api-access-7tj2w\") pod \"dnsmasq-dns-65c78595c5-m4zfx\" (UID: \"0385bfa9-1877-41c1-b279-a371fcf6d511\") " pod="openstack/dnsmasq-dns-65c78595c5-m4zfx" Nov 25 07:32:19 crc kubenswrapper[5043]: I1125 07:32:19.921623 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv24r\" (UniqueName: 
\"kubernetes.io/projected/b2e5aec9-7403-47d2-ad0d-40765246ed38-kube-api-access-zv24r\") pod \"ovn-controller-metrics-p45r4\" (UID: \"b2e5aec9-7403-47d2-ad0d-40765246ed38\") " pod="openstack/ovn-controller-metrics-p45r4" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.039748 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65c78595c5-m4zfx"] Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.040542 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65c78595c5-m4zfx" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.057918 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-p45r4" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.087373 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c7b6b5695-xndwd"] Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.088546 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.092676 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.099621 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6b5695-xndwd"] Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.217996 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42d37f88-26c8-40e9-9995-c91c110d3d91-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6b5695-xndwd\" (UID: \"42d37f88-26c8-40e9-9995-c91c110d3d91\") " pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.218044 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfwvd\" (UniqueName: \"kubernetes.io/projected/42d37f88-26c8-40e9-9995-c91c110d3d91-kube-api-access-kfwvd\") pod \"dnsmasq-dns-5c7b6b5695-xndwd\" (UID: \"42d37f88-26c8-40e9-9995-c91c110d3d91\") " pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.218092 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42d37f88-26c8-40e9-9995-c91c110d3d91-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6b5695-xndwd\" (UID: \"42d37f88-26c8-40e9-9995-c91c110d3d91\") " pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.218128 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d37f88-26c8-40e9-9995-c91c110d3d91-config\") pod \"dnsmasq-dns-5c7b6b5695-xndwd\" (UID: \"42d37f88-26c8-40e9-9995-c91c110d3d91\") " 
pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.218165 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42d37f88-26c8-40e9-9995-c91c110d3d91-dns-svc\") pod \"dnsmasq-dns-5c7b6b5695-xndwd\" (UID: \"42d37f88-26c8-40e9-9995-c91c110d3d91\") " pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.318956 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42d37f88-26c8-40e9-9995-c91c110d3d91-dns-svc\") pod \"dnsmasq-dns-5c7b6b5695-xndwd\" (UID: \"42d37f88-26c8-40e9-9995-c91c110d3d91\") " pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.319246 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42d37f88-26c8-40e9-9995-c91c110d3d91-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6b5695-xndwd\" (UID: \"42d37f88-26c8-40e9-9995-c91c110d3d91\") " pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.319303 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfwvd\" (UniqueName: \"kubernetes.io/projected/42d37f88-26c8-40e9-9995-c91c110d3d91-kube-api-access-kfwvd\") pod \"dnsmasq-dns-5c7b6b5695-xndwd\" (UID: \"42d37f88-26c8-40e9-9995-c91c110d3d91\") " pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.319370 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42d37f88-26c8-40e9-9995-c91c110d3d91-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6b5695-xndwd\" (UID: \"42d37f88-26c8-40e9-9995-c91c110d3d91\") " pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" Nov 
25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.319405 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d37f88-26c8-40e9-9995-c91c110d3d91-config\") pod \"dnsmasq-dns-5c7b6b5695-xndwd\" (UID: \"42d37f88-26c8-40e9-9995-c91c110d3d91\") " pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.320042 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42d37f88-26c8-40e9-9995-c91c110d3d91-dns-svc\") pod \"dnsmasq-dns-5c7b6b5695-xndwd\" (UID: \"42d37f88-26c8-40e9-9995-c91c110d3d91\") " pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.320154 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42d37f88-26c8-40e9-9995-c91c110d3d91-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6b5695-xndwd\" (UID: \"42d37f88-26c8-40e9-9995-c91c110d3d91\") " pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.320355 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d37f88-26c8-40e9-9995-c91c110d3d91-config\") pod \"dnsmasq-dns-5c7b6b5695-xndwd\" (UID: \"42d37f88-26c8-40e9-9995-c91c110d3d91\") " pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.320808 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42d37f88-26c8-40e9-9995-c91c110d3d91-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6b5695-xndwd\" (UID: \"42d37f88-26c8-40e9-9995-c91c110d3d91\") " pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.337177 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kfwvd\" (UniqueName: \"kubernetes.io/projected/42d37f88-26c8-40e9-9995-c91c110d3d91-kube-api-access-kfwvd\") pod \"dnsmasq-dns-5c7b6b5695-xndwd\" (UID: \"42d37f88-26c8-40e9-9995-c91c110d3d91\") " pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.355978 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6486446b9f-45dtg" podUID="8a9a5bed-e2a6-4bda-a395-1f82840192a4" containerName="dnsmasq-dns" containerID="cri-o://8bd46345a5c72bb71e35e890beb1e8cebdacef7c469cb1a478616755b5f07762" gracePeriod=10 Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.406008 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.495341 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.539677 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65c78595c5-m4zfx"] Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.551336 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.553068 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.553176 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 25 07:32:20 crc kubenswrapper[5043]: W1125 07:32:20.556900 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0385bfa9_1877_41c1_b279_a371fcf6d511.slice/crio-5f8e55ffa1992327d2cb329278a5937b1d9b3b46699e2231e4cabc71449f0635 WatchSource:0}: Error finding container 5f8e55ffa1992327d2cb329278a5937b1d9b3b46699e2231e4cabc71449f0635: Status 404 returned error can't find the container with id 5f8e55ffa1992327d2cb329278a5937b1d9b3b46699e2231e4cabc71449f0635 Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.557045 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-bv84z" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.557045 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.557227 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.558190 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.613421 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-p45r4"] Nov 25 07:32:20 crc kubenswrapper[5043]: W1125 07:32:20.616354 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2e5aec9_7403_47d2_ad0d_40765246ed38.slice/crio-92eca866572b229d8adb06fccf96f9c2321609bf6186b84482a50ea540745d0a WatchSource:0}: Error finding container 92eca866572b229d8adb06fccf96f9c2321609bf6186b84482a50ea540745d0a: Status 404 returned error can't find the container with id 92eca866572b229d8adb06fccf96f9c2321609bf6186b84482a50ea540745d0a Nov 25 07:32:20 crc 
kubenswrapper[5043]: I1125 07:32:20.625484 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a6dd4c5-d75f-4622-ba5e-1da7bfebca23-scripts\") pod \"ovn-northd-0\" (UID: \"8a6dd4c5-d75f-4622-ba5e-1da7bfebca23\") " pod="openstack/ovn-northd-0" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.625587 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a6dd4c5-d75f-4622-ba5e-1da7bfebca23-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8a6dd4c5-d75f-4622-ba5e-1da7bfebca23\") " pod="openstack/ovn-northd-0" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.625704 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vvrd\" (UniqueName: \"kubernetes.io/projected/8a6dd4c5-d75f-4622-ba5e-1da7bfebca23-kube-api-access-4vvrd\") pod \"ovn-northd-0\" (UID: \"8a6dd4c5-d75f-4622-ba5e-1da7bfebca23\") " pod="openstack/ovn-northd-0" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.625725 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8a6dd4c5-d75f-4622-ba5e-1da7bfebca23-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8a6dd4c5-d75f-4622-ba5e-1da7bfebca23\") " pod="openstack/ovn-northd-0" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.625747 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a6dd4c5-d75f-4622-ba5e-1da7bfebca23-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8a6dd4c5-d75f-4622-ba5e-1da7bfebca23\") " pod="openstack/ovn-northd-0" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.625790 5043 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a6dd4c5-d75f-4622-ba5e-1da7bfebca23-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8a6dd4c5-d75f-4622-ba5e-1da7bfebca23\") " pod="openstack/ovn-northd-0" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.625816 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a6dd4c5-d75f-4622-ba5e-1da7bfebca23-config\") pod \"ovn-northd-0\" (UID: \"8a6dd4c5-d75f-4622-ba5e-1da7bfebca23\") " pod="openstack/ovn-northd-0" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.730194 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vvrd\" (UniqueName: \"kubernetes.io/projected/8a6dd4c5-d75f-4622-ba5e-1da7bfebca23-kube-api-access-4vvrd\") pod \"ovn-northd-0\" (UID: \"8a6dd4c5-d75f-4622-ba5e-1da7bfebca23\") " pod="openstack/ovn-northd-0" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.730362 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8a6dd4c5-d75f-4622-ba5e-1da7bfebca23-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8a6dd4c5-d75f-4622-ba5e-1da7bfebca23\") " pod="openstack/ovn-northd-0" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.730387 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a6dd4c5-d75f-4622-ba5e-1da7bfebca23-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8a6dd4c5-d75f-4622-ba5e-1da7bfebca23\") " pod="openstack/ovn-northd-0" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.730425 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a6dd4c5-d75f-4622-ba5e-1da7bfebca23-ovn-northd-tls-certs\") 
pod \"ovn-northd-0\" (UID: \"8a6dd4c5-d75f-4622-ba5e-1da7bfebca23\") " pod="openstack/ovn-northd-0" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.730452 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a6dd4c5-d75f-4622-ba5e-1da7bfebca23-config\") pod \"ovn-northd-0\" (UID: \"8a6dd4c5-d75f-4622-ba5e-1da7bfebca23\") " pod="openstack/ovn-northd-0" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.730485 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a6dd4c5-d75f-4622-ba5e-1da7bfebca23-scripts\") pod \"ovn-northd-0\" (UID: \"8a6dd4c5-d75f-4622-ba5e-1da7bfebca23\") " pod="openstack/ovn-northd-0" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.730504 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a6dd4c5-d75f-4622-ba5e-1da7bfebca23-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8a6dd4c5-d75f-4622-ba5e-1da7bfebca23\") " pod="openstack/ovn-northd-0" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.732274 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a6dd4c5-d75f-4622-ba5e-1da7bfebca23-config\") pod \"ovn-northd-0\" (UID: \"8a6dd4c5-d75f-4622-ba5e-1da7bfebca23\") " pod="openstack/ovn-northd-0" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.732826 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a6dd4c5-d75f-4622-ba5e-1da7bfebca23-scripts\") pod \"ovn-northd-0\" (UID: \"8a6dd4c5-d75f-4622-ba5e-1da7bfebca23\") " pod="openstack/ovn-northd-0" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.733339 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/8a6dd4c5-d75f-4622-ba5e-1da7bfebca23-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8a6dd4c5-d75f-4622-ba5e-1da7bfebca23\") " pod="openstack/ovn-northd-0" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.740003 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a6dd4c5-d75f-4622-ba5e-1da7bfebca23-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8a6dd4c5-d75f-4622-ba5e-1da7bfebca23\") " pod="openstack/ovn-northd-0" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.746503 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a6dd4c5-d75f-4622-ba5e-1da7bfebca23-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8a6dd4c5-d75f-4622-ba5e-1da7bfebca23\") " pod="openstack/ovn-northd-0" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.750897 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vvrd\" (UniqueName: \"kubernetes.io/projected/8a6dd4c5-d75f-4622-ba5e-1da7bfebca23-kube-api-access-4vvrd\") pod \"ovn-northd-0\" (UID: \"8a6dd4c5-d75f-4622-ba5e-1da7bfebca23\") " pod="openstack/ovn-northd-0" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.751854 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a6dd4c5-d75f-4622-ba5e-1da7bfebca23-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8a6dd4c5-d75f-4622-ba5e-1da7bfebca23\") " pod="openstack/ovn-northd-0" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.845375 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-45dtg" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.880098 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.933864 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a9a5bed-e2a6-4bda-a395-1f82840192a4-config\") pod \"8a9a5bed-e2a6-4bda-a395-1f82840192a4\" (UID: \"8a9a5bed-e2a6-4bda-a395-1f82840192a4\") " Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.933941 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6lrp\" (UniqueName: \"kubernetes.io/projected/8a9a5bed-e2a6-4bda-a395-1f82840192a4-kube-api-access-g6lrp\") pod \"8a9a5bed-e2a6-4bda-a395-1f82840192a4\" (UID: \"8a9a5bed-e2a6-4bda-a395-1f82840192a4\") " Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.934086 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a9a5bed-e2a6-4bda-a395-1f82840192a4-dns-svc\") pod \"8a9a5bed-e2a6-4bda-a395-1f82840192a4\" (UID: \"8a9a5bed-e2a6-4bda-a395-1f82840192a4\") " Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.938567 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a9a5bed-e2a6-4bda-a395-1f82840192a4-kube-api-access-g6lrp" (OuterVolumeSpecName: "kube-api-access-g6lrp") pod "8a9a5bed-e2a6-4bda-a395-1f82840192a4" (UID: "8a9a5bed-e2a6-4bda-a395-1f82840192a4"). InnerVolumeSpecName "kube-api-access-g6lrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.969907 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a9a5bed-e2a6-4bda-a395-1f82840192a4-config" (OuterVolumeSpecName: "config") pod "8a9a5bed-e2a6-4bda-a395-1f82840192a4" (UID: "8a9a5bed-e2a6-4bda-a395-1f82840192a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:32:20 crc kubenswrapper[5043]: I1125 07:32:20.969926 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a9a5bed-e2a6-4bda-a395-1f82840192a4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8a9a5bed-e2a6-4bda-a395-1f82840192a4" (UID: "8a9a5bed-e2a6-4bda-a395-1f82840192a4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.026973 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.035952 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6lrp\" (UniqueName: \"kubernetes.io/projected/8a9a5bed-e2a6-4bda-a395-1f82840192a4-kube-api-access-g6lrp\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.037808 5043 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a9a5bed-e2a6-4bda-a395-1f82840192a4-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.037826 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a9a5bed-e2a6-4bda-a395-1f82840192a4-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.053070 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6b5695-xndwd"] Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.262500 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.365789 5043 generic.go:334] "Generic (PLEG): container finished" podID="0385bfa9-1877-41c1-b279-a371fcf6d511" containerID="9b65c35120476dd42cb92deb95afa5fdd58579c70eeec4c414af0a14c09dac05" 
exitCode=0 Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.365880 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65c78595c5-m4zfx" event={"ID":"0385bfa9-1877-41c1-b279-a371fcf6d511","Type":"ContainerDied","Data":"9b65c35120476dd42cb92deb95afa5fdd58579c70eeec4c414af0a14c09dac05"} Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.365913 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65c78595c5-m4zfx" event={"ID":"0385bfa9-1877-41c1-b279-a371fcf6d511","Type":"ContainerStarted","Data":"5f8e55ffa1992327d2cb329278a5937b1d9b3b46699e2231e4cabc71449f0635"} Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.368077 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8a6dd4c5-d75f-4622-ba5e-1da7bfebca23","Type":"ContainerStarted","Data":"ad817c98694027243792a3ab973251132f17e4b53f7144b616ead6d6faae3a95"} Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.369996 5043 generic.go:334] "Generic (PLEG): container finished" podID="42d37f88-26c8-40e9-9995-c91c110d3d91" containerID="a11022b25956f7c7f1dfbea86dd72ecd2c6609cf8347387968a701f8cabfa74f" exitCode=0 Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.370037 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" event={"ID":"42d37f88-26c8-40e9-9995-c91c110d3d91","Type":"ContainerDied","Data":"a11022b25956f7c7f1dfbea86dd72ecd2c6609cf8347387968a701f8cabfa74f"} Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.370071 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" event={"ID":"42d37f88-26c8-40e9-9995-c91c110d3d91","Type":"ContainerStarted","Data":"25503533ea393419353669c6e79df926654fbd81e319aea4f5663ecd96684d93"} Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.374493 5043 generic.go:334] "Generic (PLEG): container finished" podID="8a9a5bed-e2a6-4bda-a395-1f82840192a4" 
containerID="8bd46345a5c72bb71e35e890beb1e8cebdacef7c469cb1a478616755b5f07762" exitCode=0 Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.374529 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-45dtg" Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.374530 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-45dtg" event={"ID":"8a9a5bed-e2a6-4bda-a395-1f82840192a4","Type":"ContainerDied","Data":"8bd46345a5c72bb71e35e890beb1e8cebdacef7c469cb1a478616755b5f07762"} Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.374714 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-45dtg" event={"ID":"8a9a5bed-e2a6-4bda-a395-1f82840192a4","Type":"ContainerDied","Data":"ff0500235f3cf6ee3030c6d9c2908e8e05e918d87fa17caf883dfaa647d10919"} Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.374765 5043 scope.go:117] "RemoveContainer" containerID="8bd46345a5c72bb71e35e890beb1e8cebdacef7c469cb1a478616755b5f07762" Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.376164 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-p45r4" event={"ID":"b2e5aec9-7403-47d2-ad0d-40765246ed38","Type":"ContainerStarted","Data":"ba62d65291b89be313ebb01a1d1f26ebf58e956b7a6918c9331313fdf3ddfc88"} Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.376194 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-p45r4" event={"ID":"b2e5aec9-7403-47d2-ad0d-40765246ed38","Type":"ContainerStarted","Data":"92eca866572b229d8adb06fccf96f9c2321609bf6186b84482a50ea540745d0a"} Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.403858 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-p45r4" podStartSLOduration=2.403822712 podStartE2EDuration="2.403822712s" podCreationTimestamp="2025-11-25 
07:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:32:21.401745225 +0000 UTC m=+1005.569940946" watchObservedRunningTime="2025-11-25 07:32:21.403822712 +0000 UTC m=+1005.572018463" Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.445885 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-45dtg"] Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.447231 5043 scope.go:117] "RemoveContainer" containerID="22b0395d941c7e19989d4e20aa47a23749b6bfc936c5bc392448013e79da8970" Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.453443 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-45dtg"] Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.487393 5043 scope.go:117] "RemoveContainer" containerID="8bd46345a5c72bb71e35e890beb1e8cebdacef7c469cb1a478616755b5f07762" Nov 25 07:32:21 crc kubenswrapper[5043]: E1125 07:32:21.488093 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bd46345a5c72bb71e35e890beb1e8cebdacef7c469cb1a478616755b5f07762\": container with ID starting with 8bd46345a5c72bb71e35e890beb1e8cebdacef7c469cb1a478616755b5f07762 not found: ID does not exist" containerID="8bd46345a5c72bb71e35e890beb1e8cebdacef7c469cb1a478616755b5f07762" Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.488138 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bd46345a5c72bb71e35e890beb1e8cebdacef7c469cb1a478616755b5f07762"} err="failed to get container status \"8bd46345a5c72bb71e35e890beb1e8cebdacef7c469cb1a478616755b5f07762\": rpc error: code = NotFound desc = could not find container \"8bd46345a5c72bb71e35e890beb1e8cebdacef7c469cb1a478616755b5f07762\": container with ID starting with 
8bd46345a5c72bb71e35e890beb1e8cebdacef7c469cb1a478616755b5f07762 not found: ID does not exist" Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.488172 5043 scope.go:117] "RemoveContainer" containerID="22b0395d941c7e19989d4e20aa47a23749b6bfc936c5bc392448013e79da8970" Nov 25 07:32:21 crc kubenswrapper[5043]: E1125 07:32:21.489315 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22b0395d941c7e19989d4e20aa47a23749b6bfc936c5bc392448013e79da8970\": container with ID starting with 22b0395d941c7e19989d4e20aa47a23749b6bfc936c5bc392448013e79da8970 not found: ID does not exist" containerID="22b0395d941c7e19989d4e20aa47a23749b6bfc936c5bc392448013e79da8970" Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.489341 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22b0395d941c7e19989d4e20aa47a23749b6bfc936c5bc392448013e79da8970"} err="failed to get container status \"22b0395d941c7e19989d4e20aa47a23749b6bfc936c5bc392448013e79da8970\": rpc error: code = NotFound desc = could not find container \"22b0395d941c7e19989d4e20aa47a23749b6bfc936c5bc392448013e79da8970\": container with ID starting with 22b0395d941c7e19989d4e20aa47a23749b6bfc936c5bc392448013e79da8970 not found: ID does not exist" Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.637946 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65c78595c5-m4zfx" Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.752223 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0385bfa9-1877-41c1-b279-a371fcf6d511-dns-svc\") pod \"0385bfa9-1877-41c1-b279-a371fcf6d511\" (UID: \"0385bfa9-1877-41c1-b279-a371fcf6d511\") " Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.752322 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0385bfa9-1877-41c1-b279-a371fcf6d511-ovsdbserver-nb\") pod \"0385bfa9-1877-41c1-b279-a371fcf6d511\" (UID: \"0385bfa9-1877-41c1-b279-a371fcf6d511\") " Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.752856 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tj2w\" (UniqueName: \"kubernetes.io/projected/0385bfa9-1877-41c1-b279-a371fcf6d511-kube-api-access-7tj2w\") pod \"0385bfa9-1877-41c1-b279-a371fcf6d511\" (UID: \"0385bfa9-1877-41c1-b279-a371fcf6d511\") " Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.752929 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0385bfa9-1877-41c1-b279-a371fcf6d511-config\") pod \"0385bfa9-1877-41c1-b279-a371fcf6d511\" (UID: \"0385bfa9-1877-41c1-b279-a371fcf6d511\") " Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.757531 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0385bfa9-1877-41c1-b279-a371fcf6d511-kube-api-access-7tj2w" (OuterVolumeSpecName: "kube-api-access-7tj2w") pod "0385bfa9-1877-41c1-b279-a371fcf6d511" (UID: "0385bfa9-1877-41c1-b279-a371fcf6d511"). InnerVolumeSpecName "kube-api-access-7tj2w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.773243 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0385bfa9-1877-41c1-b279-a371fcf6d511-config" (OuterVolumeSpecName: "config") pod "0385bfa9-1877-41c1-b279-a371fcf6d511" (UID: "0385bfa9-1877-41c1-b279-a371fcf6d511"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.775525 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0385bfa9-1877-41c1-b279-a371fcf6d511-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0385bfa9-1877-41c1-b279-a371fcf6d511" (UID: "0385bfa9-1877-41c1-b279-a371fcf6d511"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.776083 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0385bfa9-1877-41c1-b279-a371fcf6d511-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0385bfa9-1877-41c1-b279-a371fcf6d511" (UID: "0385bfa9-1877-41c1-b279-a371fcf6d511"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.854236 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0385bfa9-1877-41c1-b279-a371fcf6d511-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.854270 5043 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0385bfa9-1877-41c1-b279-a371fcf6d511-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.854280 5043 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0385bfa9-1877-41c1-b279-a371fcf6d511-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:21 crc kubenswrapper[5043]: I1125 07:32:21.854291 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tj2w\" (UniqueName: \"kubernetes.io/projected/0385bfa9-1877-41c1-b279-a371fcf6d511-kube-api-access-7tj2w\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:22 crc kubenswrapper[5043]: I1125 07:32:22.383565 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65c78595c5-m4zfx" Nov 25 07:32:22 crc kubenswrapper[5043]: I1125 07:32:22.383557 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65c78595c5-m4zfx" event={"ID":"0385bfa9-1877-41c1-b279-a371fcf6d511","Type":"ContainerDied","Data":"5f8e55ffa1992327d2cb329278a5937b1d9b3b46699e2231e4cabc71449f0635"} Nov 25 07:32:22 crc kubenswrapper[5043]: I1125 07:32:22.383992 5043 scope.go:117] "RemoveContainer" containerID="9b65c35120476dd42cb92deb95afa5fdd58579c70eeec4c414af0a14c09dac05" Nov 25 07:32:22 crc kubenswrapper[5043]: I1125 07:32:22.387398 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" event={"ID":"42d37f88-26c8-40e9-9995-c91c110d3d91","Type":"ContainerStarted","Data":"4b367fbc326ee5be9f2284932fc40a508303681398b3bdf53f630d560834f35a"} Nov 25 07:32:22 crc kubenswrapper[5043]: I1125 07:32:22.387531 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" Nov 25 07:32:22 crc kubenswrapper[5043]: E1125 07:32:22.401922 5043 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.162:39330->38.102.83.162:35317: write tcp 38.102.83.162:39330->38.102.83.162:35317: write: broken pipe Nov 25 07:32:22 crc kubenswrapper[5043]: I1125 07:32:22.409302 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" podStartSLOduration=2.4092822480000002 podStartE2EDuration="2.409282248s" podCreationTimestamp="2025-11-25 07:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:32:22.406812341 +0000 UTC m=+1006.575008062" watchObservedRunningTime="2025-11-25 07:32:22.409282248 +0000 UTC m=+1006.577477969" Nov 25 07:32:22 crc kubenswrapper[5043]: I1125 07:32:22.460118 5043 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-65c78595c5-m4zfx"]
Nov 25 07:32:22 crc kubenswrapper[5043]: I1125 07:32:22.466731 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65c78595c5-m4zfx"]
Nov 25 07:32:22 crc kubenswrapper[5043]: I1125 07:32:22.971329 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0385bfa9-1877-41c1-b279-a371fcf6d511" path="/var/lib/kubelet/pods/0385bfa9-1877-41c1-b279-a371fcf6d511/volumes"
Nov 25 07:32:22 crc kubenswrapper[5043]: I1125 07:32:22.972423 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a9a5bed-e2a6-4bda-a395-1f82840192a4" path="/var/lib/kubelet/pods/8a9a5bed-e2a6-4bda-a395-1f82840192a4/volumes"
Nov 25 07:32:23 crc kubenswrapper[5043]: I1125 07:32:23.398088 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8a6dd4c5-d75f-4622-ba5e-1da7bfebca23","Type":"ContainerStarted","Data":"d45179d3d3496137028196b60aeb731e343568e6145530a8392238300cd1a3c8"}
Nov 25 07:32:23 crc kubenswrapper[5043]: I1125 07:32:23.398125 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8a6dd4c5-d75f-4622-ba5e-1da7bfebca23","Type":"ContainerStarted","Data":"af81c019f14e15c4cab5e8172a7035843b81cae43b6f5b4d260eebbbf3f010e4"}
Nov 25 07:32:23 crc kubenswrapper[5043]: I1125 07:32:23.399974 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Nov 25 07:32:23 crc kubenswrapper[5043]: I1125 07:32:23.430299 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.846775562 podStartE2EDuration="3.43028033s" podCreationTimestamp="2025-11-25 07:32:20 +0000 UTC" firstStartedPulling="2025-11-25 07:32:21.273735455 +0000 UTC m=+1005.441931176" lastFinishedPulling="2025-11-25 07:32:22.857240223 +0000 UTC m=+1007.025435944" observedRunningTime="2025-11-25 07:32:23.422012549 +0000 UTC m=+1007.590208290" watchObservedRunningTime="2025-11-25 07:32:23.43028033 +0000 UTC m=+1007.598476051"
Nov 25 07:32:27 crc kubenswrapper[5043]: I1125 07:32:27.270858 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Nov 25 07:32:27 crc kubenswrapper[5043]: I1125 07:32:27.271688 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Nov 25 07:32:27 crc kubenswrapper[5043]: I1125 07:32:27.387575 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Nov 25 07:32:27 crc kubenswrapper[5043]: I1125 07:32:27.546797 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Nov 25 07:32:28 crc kubenswrapper[5043]: I1125 07:32:28.668503 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Nov 25 07:32:28 crc kubenswrapper[5043]: I1125 07:32:28.669147 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Nov 25 07:32:28 crc kubenswrapper[5043]: I1125 07:32:28.766010 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Nov 25 07:32:28 crc kubenswrapper[5043]: I1125 07:32:28.889340 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-7hjzt"]
Nov 25 07:32:28 crc kubenswrapper[5043]: E1125 07:32:28.889805 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a9a5bed-e2a6-4bda-a395-1f82840192a4" containerName="init"
Nov 25 07:32:28 crc kubenswrapper[5043]: I1125 07:32:28.889828 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a9a5bed-e2a6-4bda-a395-1f82840192a4" containerName="init"
Nov 25 07:32:28 crc kubenswrapper[5043]: E1125 07:32:28.889857 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0385bfa9-1877-41c1-b279-a371fcf6d511" containerName="init"
Nov 25 07:32:28 crc kubenswrapper[5043]: I1125 07:32:28.889866 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="0385bfa9-1877-41c1-b279-a371fcf6d511" containerName="init"
Nov 25 07:32:28 crc kubenswrapper[5043]: E1125 07:32:28.889899 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a9a5bed-e2a6-4bda-a395-1f82840192a4" containerName="dnsmasq-dns"
Nov 25 07:32:28 crc kubenswrapper[5043]: I1125 07:32:28.889908 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a9a5bed-e2a6-4bda-a395-1f82840192a4" containerName="dnsmasq-dns"
Nov 25 07:32:28 crc kubenswrapper[5043]: I1125 07:32:28.890121 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="0385bfa9-1877-41c1-b279-a371fcf6d511" containerName="init"
Nov 25 07:32:28 crc kubenswrapper[5043]: I1125 07:32:28.890140 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a9a5bed-e2a6-4bda-a395-1f82840192a4" containerName="dnsmasq-dns"
Nov 25 07:32:28 crc kubenswrapper[5043]: I1125 07:32:28.890774 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7hjzt"
Nov 25 07:32:28 crc kubenswrapper[5043]: I1125 07:32:28.898171 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-1e41-account-create-kbqzd"]
Nov 25 07:32:28 crc kubenswrapper[5043]: I1125 07:32:28.900000 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1e41-account-create-kbqzd"
Nov 25 07:32:28 crc kubenswrapper[5043]: I1125 07:32:28.902749 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Nov 25 07:32:28 crc kubenswrapper[5043]: I1125 07:32:28.912996 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1e41-account-create-kbqzd"]
Nov 25 07:32:28 crc kubenswrapper[5043]: I1125 07:32:28.922190 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7hjzt"]
Nov 25 07:32:28 crc kubenswrapper[5043]: I1125 07:32:28.968171 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d287967c-61b6-4dc4-bb3f-91e576e6d0c7-operator-scripts\") pod \"keystone-1e41-account-create-kbqzd\" (UID: \"d287967c-61b6-4dc4-bb3f-91e576e6d0c7\") " pod="openstack/keystone-1e41-account-create-kbqzd"
Nov 25 07:32:28 crc kubenswrapper[5043]: I1125 07:32:28.968222 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/370681cd-6bb3-47a4-939e-c705ee3814bd-operator-scripts\") pod \"keystone-db-create-7hjzt\" (UID: \"370681cd-6bb3-47a4-939e-c705ee3814bd\") " pod="openstack/keystone-db-create-7hjzt"
Nov 25 07:32:28 crc kubenswrapper[5043]: I1125 07:32:28.968583 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6rjj\" (UniqueName: \"kubernetes.io/projected/370681cd-6bb3-47a4-939e-c705ee3814bd-kube-api-access-h6rjj\") pod \"keystone-db-create-7hjzt\" (UID: \"370681cd-6bb3-47a4-939e-c705ee3814bd\") " pod="openstack/keystone-db-create-7hjzt"
Nov 25 07:32:28 crc kubenswrapper[5043]: I1125 07:32:28.968670 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm7n4\" (UniqueName: \"kubernetes.io/projected/d287967c-61b6-4dc4-bb3f-91e576e6d0c7-kube-api-access-dm7n4\") pod \"keystone-1e41-account-create-kbqzd\" (UID: \"d287967c-61b6-4dc4-bb3f-91e576e6d0c7\") " pod="openstack/keystone-1e41-account-create-kbqzd"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.071563 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d287967c-61b6-4dc4-bb3f-91e576e6d0c7-operator-scripts\") pod \"keystone-1e41-account-create-kbqzd\" (UID: \"d287967c-61b6-4dc4-bb3f-91e576e6d0c7\") " pod="openstack/keystone-1e41-account-create-kbqzd"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.071645 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/370681cd-6bb3-47a4-939e-c705ee3814bd-operator-scripts\") pod \"keystone-db-create-7hjzt\" (UID: \"370681cd-6bb3-47a4-939e-c705ee3814bd\") " pod="openstack/keystone-db-create-7hjzt"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.071732 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6rjj\" (UniqueName: \"kubernetes.io/projected/370681cd-6bb3-47a4-939e-c705ee3814bd-kube-api-access-h6rjj\") pod \"keystone-db-create-7hjzt\" (UID: \"370681cd-6bb3-47a4-939e-c705ee3814bd\") " pod="openstack/keystone-db-create-7hjzt"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.071756 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm7n4\" (UniqueName: \"kubernetes.io/projected/d287967c-61b6-4dc4-bb3f-91e576e6d0c7-kube-api-access-dm7n4\") pod \"keystone-1e41-account-create-kbqzd\" (UID: \"d287967c-61b6-4dc4-bb3f-91e576e6d0c7\") " pod="openstack/keystone-1e41-account-create-kbqzd"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.072889 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/370681cd-6bb3-47a4-939e-c705ee3814bd-operator-scripts\") pod \"keystone-db-create-7hjzt\" (UID: \"370681cd-6bb3-47a4-939e-c705ee3814bd\") " pod="openstack/keystone-db-create-7hjzt"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.073032 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d287967c-61b6-4dc4-bb3f-91e576e6d0c7-operator-scripts\") pod \"keystone-1e41-account-create-kbqzd\" (UID: \"d287967c-61b6-4dc4-bb3f-91e576e6d0c7\") " pod="openstack/keystone-1e41-account-create-kbqzd"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.073394 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-sstg4"]
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.075098 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-sstg4"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.082007 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-sstg4"]
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.114519 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6rjj\" (UniqueName: \"kubernetes.io/projected/370681cd-6bb3-47a4-939e-c705ee3814bd-kube-api-access-h6rjj\") pod \"keystone-db-create-7hjzt\" (UID: \"370681cd-6bb3-47a4-939e-c705ee3814bd\") " pod="openstack/keystone-db-create-7hjzt"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.117261 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm7n4\" (UniqueName: \"kubernetes.io/projected/d287967c-61b6-4dc4-bb3f-91e576e6d0c7-kube-api-access-dm7n4\") pod \"keystone-1e41-account-create-kbqzd\" (UID: \"d287967c-61b6-4dc4-bb3f-91e576e6d0c7\") " pod="openstack/keystone-1e41-account-create-kbqzd"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.121108 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d9cc-account-create-2wd58"]
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.122127 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d9cc-account-create-2wd58"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.124197 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.134954 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d9cc-account-create-2wd58"]
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.172804 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf28ec77-4f7c-43a6-9bab-6ff49979b68d-operator-scripts\") pod \"placement-db-create-sstg4\" (UID: \"bf28ec77-4f7c-43a6-9bab-6ff49979b68d\") " pod="openstack/placement-db-create-sstg4"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.172936 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg6lb\" (UniqueName: \"kubernetes.io/projected/bf28ec77-4f7c-43a6-9bab-6ff49979b68d-kube-api-access-qg6lb\") pod \"placement-db-create-sstg4\" (UID: \"bf28ec77-4f7c-43a6-9bab-6ff49979b68d\") " pod="openstack/placement-db-create-sstg4"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.233389 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7hjzt"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.246619 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1e41-account-create-kbqzd"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.274204 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9-operator-scripts\") pod \"placement-d9cc-account-create-2wd58\" (UID: \"b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9\") " pod="openstack/placement-d9cc-account-create-2wd58"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.274530 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgthw\" (UniqueName: \"kubernetes.io/projected/b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9-kube-api-access-hgthw\") pod \"placement-d9cc-account-create-2wd58\" (UID: \"b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9\") " pod="openstack/placement-d9cc-account-create-2wd58"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.274925 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf28ec77-4f7c-43a6-9bab-6ff49979b68d-operator-scripts\") pod \"placement-db-create-sstg4\" (UID: \"bf28ec77-4f7c-43a6-9bab-6ff49979b68d\") " pod="openstack/placement-db-create-sstg4"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.275113 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg6lb\" (UniqueName: \"kubernetes.io/projected/bf28ec77-4f7c-43a6-9bab-6ff49979b68d-kube-api-access-qg6lb\") pod \"placement-db-create-sstg4\" (UID: \"bf28ec77-4f7c-43a6-9bab-6ff49979b68d\") " pod="openstack/placement-db-create-sstg4"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.275956 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf28ec77-4f7c-43a6-9bab-6ff49979b68d-operator-scripts\") pod \"placement-db-create-sstg4\" (UID: \"bf28ec77-4f7c-43a6-9bab-6ff49979b68d\") " pod="openstack/placement-db-create-sstg4"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.304765 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg6lb\" (UniqueName: \"kubernetes.io/projected/bf28ec77-4f7c-43a6-9bab-6ff49979b68d-kube-api-access-qg6lb\") pod \"placement-db-create-sstg4\" (UID: \"bf28ec77-4f7c-43a6-9bab-6ff49979b68d\") " pod="openstack/placement-db-create-sstg4"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.338767 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-qghlr"]
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.339898 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qghlr"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.362163 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-qghlr"]
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.377344 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9-operator-scripts\") pod \"placement-d9cc-account-create-2wd58\" (UID: \"b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9\") " pod="openstack/placement-d9cc-account-create-2wd58"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.377416 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgthw\" (UniqueName: \"kubernetes.io/projected/b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9-kube-api-access-hgthw\") pod \"placement-d9cc-account-create-2wd58\" (UID: \"b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9\") " pod="openstack/placement-d9cc-account-create-2wd58"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.378324 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9-operator-scripts\") pod \"placement-d9cc-account-create-2wd58\" (UID: \"b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9\") " pod="openstack/placement-d9cc-account-create-2wd58"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.398421 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgthw\" (UniqueName: \"kubernetes.io/projected/b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9-kube-api-access-hgthw\") pod \"placement-d9cc-account-create-2wd58\" (UID: \"b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9\") " pod="openstack/placement-d9cc-account-create-2wd58"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.402872 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-sstg4"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.437559 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-2c5e-account-create-qsdcj"]
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.438539 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2c5e-account-create-qsdcj"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.442884 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.444069 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2c5e-account-create-qsdcj"]
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.473336 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d9cc-account-create-2wd58"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.478894 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff376c2e-dfad-443a-8ad3-b5a1cd40cd12-operator-scripts\") pod \"glance-db-create-qghlr\" (UID: \"ff376c2e-dfad-443a-8ad3-b5a1cd40cd12\") " pod="openstack/glance-db-create-qghlr"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.479067 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssg7l\" (UniqueName: \"kubernetes.io/projected/ff376c2e-dfad-443a-8ad3-b5a1cd40cd12-kube-api-access-ssg7l\") pod \"glance-db-create-qghlr\" (UID: \"ff376c2e-dfad-443a-8ad3-b5a1cd40cd12\") " pod="openstack/glance-db-create-qghlr"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.563976 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.580166 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssg7l\" (UniqueName: \"kubernetes.io/projected/ff376c2e-dfad-443a-8ad3-b5a1cd40cd12-kube-api-access-ssg7l\") pod \"glance-db-create-qghlr\" (UID: \"ff376c2e-dfad-443a-8ad3-b5a1cd40cd12\") " pod="openstack/glance-db-create-qghlr"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.580217 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzpvf\" (UniqueName: \"kubernetes.io/projected/e7b5fe88-9221-486b-8686-bee5da9fcbf9-kube-api-access-wzpvf\") pod \"glance-2c5e-account-create-qsdcj\" (UID: \"e7b5fe88-9221-486b-8686-bee5da9fcbf9\") " pod="openstack/glance-2c5e-account-create-qsdcj"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.580269 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff376c2e-dfad-443a-8ad3-b5a1cd40cd12-operator-scripts\") pod \"glance-db-create-qghlr\" (UID: \"ff376c2e-dfad-443a-8ad3-b5a1cd40cd12\") " pod="openstack/glance-db-create-qghlr"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.580346 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7b5fe88-9221-486b-8686-bee5da9fcbf9-operator-scripts\") pod \"glance-2c5e-account-create-qsdcj\" (UID: \"e7b5fe88-9221-486b-8686-bee5da9fcbf9\") " pod="openstack/glance-2c5e-account-create-qsdcj"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.582268 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff376c2e-dfad-443a-8ad3-b5a1cd40cd12-operator-scripts\") pod \"glance-db-create-qghlr\" (UID: \"ff376c2e-dfad-443a-8ad3-b5a1cd40cd12\") " pod="openstack/glance-db-create-qghlr"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.607858 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssg7l\" (UniqueName: \"kubernetes.io/projected/ff376c2e-dfad-443a-8ad3-b5a1cd40cd12-kube-api-access-ssg7l\") pod \"glance-db-create-qghlr\" (UID: \"ff376c2e-dfad-443a-8ad3-b5a1cd40cd12\") " pod="openstack/glance-db-create-qghlr"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.610216 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1e41-account-create-kbqzd"]
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.675731 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qghlr"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.681840 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7b5fe88-9221-486b-8686-bee5da9fcbf9-operator-scripts\") pod \"glance-2c5e-account-create-qsdcj\" (UID: \"e7b5fe88-9221-486b-8686-bee5da9fcbf9\") " pod="openstack/glance-2c5e-account-create-qsdcj"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.681944 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzpvf\" (UniqueName: \"kubernetes.io/projected/e7b5fe88-9221-486b-8686-bee5da9fcbf9-kube-api-access-wzpvf\") pod \"glance-2c5e-account-create-qsdcj\" (UID: \"e7b5fe88-9221-486b-8686-bee5da9fcbf9\") " pod="openstack/glance-2c5e-account-create-qsdcj"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.682842 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7b5fe88-9221-486b-8686-bee5da9fcbf9-operator-scripts\") pod \"glance-2c5e-account-create-qsdcj\" (UID: \"e7b5fe88-9221-486b-8686-bee5da9fcbf9\") " pod="openstack/glance-2c5e-account-create-qsdcj"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.710077 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7hjzt"]
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.714629 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzpvf\" (UniqueName: \"kubernetes.io/projected/e7b5fe88-9221-486b-8686-bee5da9fcbf9-kube-api-access-wzpvf\") pod \"glance-2c5e-account-create-qsdcj\" (UID: \"e7b5fe88-9221-486b-8686-bee5da9fcbf9\") " pod="openstack/glance-2c5e-account-create-qsdcj"
Nov 25 07:32:29 crc kubenswrapper[5043]: W1125 07:32:29.717912 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod370681cd_6bb3_47a4_939e_c705ee3814bd.slice/crio-1b0eac5bb746afc99625a6b0676d69af7e234fcc8f30a49a392d10a6daecd4b9 WatchSource:0}: Error finding container 1b0eac5bb746afc99625a6b0676d69af7e234fcc8f30a49a392d10a6daecd4b9: Status 404 returned error can't find the container with id 1b0eac5bb746afc99625a6b0676d69af7e234fcc8f30a49a392d10a6daecd4b9
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.758417 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2c5e-account-create-qsdcj"
Nov 25 07:32:29 crc kubenswrapper[5043]: I1125 07:32:29.927455 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-sstg4"]
Nov 25 07:32:29 crc kubenswrapper[5043]: W1125 07:32:29.945272 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf28ec77_4f7c_43a6_9bab_6ff49979b68d.slice/crio-5bd73d458a06c905713e0e42f1d8708d7d6054b26b91b4fd5b0ebcd727895449 WatchSource:0}: Error finding container 5bd73d458a06c905713e0e42f1d8708d7d6054b26b91b4fd5b0ebcd727895449: Status 404 returned error can't find the container with id 5bd73d458a06c905713e0e42f1d8708d7d6054b26b91b4fd5b0ebcd727895449
Nov 25 07:32:30 crc kubenswrapper[5043]: I1125 07:32:30.022649 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d9cc-account-create-2wd58"]
Nov 25 07:32:30 crc kubenswrapper[5043]: W1125 07:32:30.030861 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2ffa878_ab91_4ab1_bcfd_834dc8ebe5c9.slice/crio-9b1816fdb75a3b8c67d8a2718eb83bced4ef1a12585f01729d6b0cf968a35bd3 WatchSource:0}: Error finding container 9b1816fdb75a3b8c67d8a2718eb83bced4ef1a12585f01729d6b0cf968a35bd3: Status 404 returned error can't find the container with id 9b1816fdb75a3b8c67d8a2718eb83bced4ef1a12585f01729d6b0cf968a35bd3
Nov 25 07:32:30 crc kubenswrapper[5043]: I1125 07:32:30.112389 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-qghlr"]
Nov 25 07:32:30 crc kubenswrapper[5043]: W1125 07:32:30.161009 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff376c2e_dfad_443a_8ad3_b5a1cd40cd12.slice/crio-6846b4aef5826911c0be71998bd5c303e4fa9570756a383d2c563240fd0c7691 WatchSource:0}: Error finding container 6846b4aef5826911c0be71998bd5c303e4fa9570756a383d2c563240fd0c7691: Status 404 returned error can't find the container with id 6846b4aef5826911c0be71998bd5c303e4fa9570756a383d2c563240fd0c7691
Nov 25 07:32:30 crc kubenswrapper[5043]: I1125 07:32:30.214894 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2c5e-account-create-qsdcj"]
Nov 25 07:32:30 crc kubenswrapper[5043]: I1125 07:32:30.481836 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qghlr" event={"ID":"ff376c2e-dfad-443a-8ad3-b5a1cd40cd12","Type":"ContainerStarted","Data":"6846b4aef5826911c0be71998bd5c303e4fa9570756a383d2c563240fd0c7691"}
Nov 25 07:32:30 crc kubenswrapper[5043]: I1125 07:32:30.484423 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d9cc-account-create-2wd58" event={"ID":"b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9","Type":"ContainerStarted","Data":"9b1816fdb75a3b8c67d8a2718eb83bced4ef1a12585f01729d6b0cf968a35bd3"}
Nov 25 07:32:30 crc kubenswrapper[5043]: I1125 07:32:30.487674 5043 generic.go:334] "Generic (PLEG): container finished" podID="d287967c-61b6-4dc4-bb3f-91e576e6d0c7" containerID="89969ff8c95dbefacf3c62f031a014f7bd98d73f114c23521e79d41c4338d474" exitCode=0
Nov 25 07:32:30 crc kubenswrapper[5043]: I1125 07:32:30.488012 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1e41-account-create-kbqzd" event={"ID":"d287967c-61b6-4dc4-bb3f-91e576e6d0c7","Type":"ContainerDied","Data":"89969ff8c95dbefacf3c62f031a014f7bd98d73f114c23521e79d41c4338d474"}
Nov 25 07:32:30 crc kubenswrapper[5043]: I1125 07:32:30.488041 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1e41-account-create-kbqzd" event={"ID":"d287967c-61b6-4dc4-bb3f-91e576e6d0c7","Type":"ContainerStarted","Data":"1ba889330c4d160aea96dafecc16b6bfd69861fd069e850b30685decc8d23a9f"}
Nov 25 07:32:30 crc kubenswrapper[5043]: I1125 07:32:30.489932 5043 generic.go:334] "Generic (PLEG): container finished" podID="370681cd-6bb3-47a4-939e-c705ee3814bd" containerID="b850dac5c228560d4c5ba5d1ef5dd96840fff2caebd9886e50d1408a7c0a9fc6" exitCode=0
Nov 25 07:32:30 crc kubenswrapper[5043]: I1125 07:32:30.490011 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7hjzt" event={"ID":"370681cd-6bb3-47a4-939e-c705ee3814bd","Type":"ContainerDied","Data":"b850dac5c228560d4c5ba5d1ef5dd96840fff2caebd9886e50d1408a7c0a9fc6"}
Nov 25 07:32:30 crc kubenswrapper[5043]: I1125 07:32:30.490026 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7hjzt" event={"ID":"370681cd-6bb3-47a4-939e-c705ee3814bd","Type":"ContainerStarted","Data":"1b0eac5bb746afc99625a6b0676d69af7e234fcc8f30a49a392d10a6daecd4b9"}
Nov 25 07:32:30 crc kubenswrapper[5043]: I1125 07:32:30.496800 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd"
Nov 25 07:32:30 crc kubenswrapper[5043]: I1125 07:32:30.499452 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2c5e-account-create-qsdcj" event={"ID":"e7b5fe88-9221-486b-8686-bee5da9fcbf9","Type":"ContainerStarted","Data":"a013bf312a45f6242b354c8c41f70158472439e111d37661577bb0e5539aa0e0"}
Nov 25 07:32:30 crc kubenswrapper[5043]: I1125 07:32:30.500980 5043 generic.go:334] "Generic (PLEG): container finished" podID="bf28ec77-4f7c-43a6-9bab-6ff49979b68d" containerID="22c7c3385d9878c7706d6d212d89fc74be251621e3195aea0dd427eafc1cbb2f" exitCode=0
Nov 25 07:32:30 crc kubenswrapper[5043]: I1125 07:32:30.501238 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sstg4" event={"ID":"bf28ec77-4f7c-43a6-9bab-6ff49979b68d","Type":"ContainerDied","Data":"22c7c3385d9878c7706d6d212d89fc74be251621e3195aea0dd427eafc1cbb2f"}
Nov 25 07:32:30 crc kubenswrapper[5043]: I1125 07:32:30.501279 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sstg4" event={"ID":"bf28ec77-4f7c-43a6-9bab-6ff49979b68d","Type":"ContainerStarted","Data":"5bd73d458a06c905713e0e42f1d8708d7d6054b26b91b4fd5b0ebcd727895449"}
Nov 25 07:32:30 crc kubenswrapper[5043]: I1125 07:32:30.571917 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-2c5e-account-create-qsdcj" podStartSLOduration=1.571605018 podStartE2EDuration="1.571605018s" podCreationTimestamp="2025-11-25 07:32:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:32:30.549031592 +0000 UTC m=+1014.717227333" watchObservedRunningTime="2025-11-25 07:32:30.571605018 +0000 UTC m=+1014.739800729"
Nov 25 07:32:30 crc kubenswrapper[5043]: I1125 07:32:30.626428 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-vggdj"]
Nov 25 07:32:30 crc kubenswrapper[5043]: I1125 07:32:30.626649 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c6d9948dc-vggdj" podUID="7efc89bf-d04b-4b9e-a86e-3eab0f122fa9" containerName="dnsmasq-dns" containerID="cri-o://dd03b2b916b285e1885d226047dbd0f17d3fb63f0527c8743d13534a75710880" gracePeriod=10
Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.011955 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c6d9948dc-vggdj"
Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.115552 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7efc89bf-d04b-4b9e-a86e-3eab0f122fa9-dns-svc\") pod \"7efc89bf-d04b-4b9e-a86e-3eab0f122fa9\" (UID: \"7efc89bf-d04b-4b9e-a86e-3eab0f122fa9\") "
Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.116058 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtj7f\" (UniqueName: \"kubernetes.io/projected/7efc89bf-d04b-4b9e-a86e-3eab0f122fa9-kube-api-access-rtj7f\") pod \"7efc89bf-d04b-4b9e-a86e-3eab0f122fa9\" (UID: \"7efc89bf-d04b-4b9e-a86e-3eab0f122fa9\") "
Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.116136 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7efc89bf-d04b-4b9e-a86e-3eab0f122fa9-config\") pod \"7efc89bf-d04b-4b9e-a86e-3eab0f122fa9\" (UID: \"7efc89bf-d04b-4b9e-a86e-3eab0f122fa9\") "
Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.122319 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7efc89bf-d04b-4b9e-a86e-3eab0f122fa9-kube-api-access-rtj7f" (OuterVolumeSpecName: "kube-api-access-rtj7f") pod "7efc89bf-d04b-4b9e-a86e-3eab0f122fa9" (UID: "7efc89bf-d04b-4b9e-a86e-3eab0f122fa9"). InnerVolumeSpecName "kube-api-access-rtj7f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.153830 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7efc89bf-d04b-4b9e-a86e-3eab0f122fa9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7efc89bf-d04b-4b9e-a86e-3eab0f122fa9" (UID: "7efc89bf-d04b-4b9e-a86e-3eab0f122fa9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.154497 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7efc89bf-d04b-4b9e-a86e-3eab0f122fa9-config" (OuterVolumeSpecName: "config") pod "7efc89bf-d04b-4b9e-a86e-3eab0f122fa9" (UID: "7efc89bf-d04b-4b9e-a86e-3eab0f122fa9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.227200 5043 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7efc89bf-d04b-4b9e-a86e-3eab0f122fa9-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.227251 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtj7f\" (UniqueName: \"kubernetes.io/projected/7efc89bf-d04b-4b9e-a86e-3eab0f122fa9-kube-api-access-rtj7f\") on node \"crc\" DevicePath \"\""
Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.227271 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7efc89bf-d04b-4b9e-a86e-3eab0f122fa9-config\") on node \"crc\" DevicePath \"\""
Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.510001 5043 generic.go:334] "Generic (PLEG): container finished" podID="ff376c2e-dfad-443a-8ad3-b5a1cd40cd12" containerID="5ea72c00f03900258dab3e55d88807719ee15883e1f86a0c006c1d843fe502f4" exitCode=0
Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.510061 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qghlr" event={"ID":"ff376c2e-dfad-443a-8ad3-b5a1cd40cd12","Type":"ContainerDied","Data":"5ea72c00f03900258dab3e55d88807719ee15883e1f86a0c006c1d843fe502f4"}
Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.512365 5043 generic.go:334] "Generic (PLEG): container finished" podID="b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9" containerID="a24e9d5e9d1de1a57b02c78028508d23930ba166376297d367ad797a8a8069f5" exitCode=0
Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.512470 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d9cc-account-create-2wd58" event={"ID":"b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9","Type":"ContainerDied","Data":"a24e9d5e9d1de1a57b02c78028508d23930ba166376297d367ad797a8a8069f5"}
Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.514999 5043 generic.go:334] "Generic (PLEG): container finished" podID="7efc89bf-d04b-4b9e-a86e-3eab0f122fa9" containerID="dd03b2b916b285e1885d226047dbd0f17d3fb63f0527c8743d13534a75710880" exitCode=0
Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.515045 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6d9948dc-vggdj" event={"ID":"7efc89bf-d04b-4b9e-a86e-3eab0f122fa9","Type":"ContainerDied","Data":"dd03b2b916b285e1885d226047dbd0f17d3fb63f0527c8743d13534a75710880"}
Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.515182 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c6d9948dc-vggdj"
Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.515439 5043 scope.go:117] "RemoveContainer" containerID="dd03b2b916b285e1885d226047dbd0f17d3fb63f0527c8743d13534a75710880"
Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.515419 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6d9948dc-vggdj" event={"ID":"7efc89bf-d04b-4b9e-a86e-3eab0f122fa9","Type":"ContainerDied","Data":"bbc53ae2a47512565f66510040db2a02b6f0afa8250612778ef0820cdeb36660"}
Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.517200 5043 generic.go:334] "Generic (PLEG): container finished" podID="e7b5fe88-9221-486b-8686-bee5da9fcbf9" containerID="67579140ac2601af62780d644497d4586aff0d8b69751f84a7a0ba627326f598" exitCode=0
Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.517393 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2c5e-account-create-qsdcj" event={"ID":"e7b5fe88-9221-486b-8686-bee5da9fcbf9","Type":"ContainerDied","Data":"67579140ac2601af62780d644497d4586aff0d8b69751f84a7a0ba627326f598"}
Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.552158 5043 scope.go:117] "RemoveContainer" containerID="2c9b40a6869093db3b3a5030823f467e5e94f393a707a9c8a47e9eaed3048dba"
Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.576317 5043 scope.go:117] "RemoveContainer" containerID="dd03b2b916b285e1885d226047dbd0f17d3fb63f0527c8743d13534a75710880"
Nov 25 07:32:31 crc kubenswrapper[5043]: E1125 07:32:31.578042 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd03b2b916b285e1885d226047dbd0f17d3fb63f0527c8743d13534a75710880\": container with ID starting with dd03b2b916b285e1885d226047dbd0f17d3fb63f0527c8743d13534a75710880 not found: ID does not exist" containerID="dd03b2b916b285e1885d226047dbd0f17d3fb63f0527c8743d13534a75710880"
Nov 25 07:32:31 crc kubenswrapper[5043]:
I1125 07:32:31.578090 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd03b2b916b285e1885d226047dbd0f17d3fb63f0527c8743d13534a75710880"} err="failed to get container status \"dd03b2b916b285e1885d226047dbd0f17d3fb63f0527c8743d13534a75710880\": rpc error: code = NotFound desc = could not find container \"dd03b2b916b285e1885d226047dbd0f17d3fb63f0527c8743d13534a75710880\": container with ID starting with dd03b2b916b285e1885d226047dbd0f17d3fb63f0527c8743d13534a75710880 not found: ID does not exist" Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.578123 5043 scope.go:117] "RemoveContainer" containerID="2c9b40a6869093db3b3a5030823f467e5e94f393a707a9c8a47e9eaed3048dba" Nov 25 07:32:31 crc kubenswrapper[5043]: E1125 07:32:31.585113 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c9b40a6869093db3b3a5030823f467e5e94f393a707a9c8a47e9eaed3048dba\": container with ID starting with 2c9b40a6869093db3b3a5030823f467e5e94f393a707a9c8a47e9eaed3048dba not found: ID does not exist" containerID="2c9b40a6869093db3b3a5030823f467e5e94f393a707a9c8a47e9eaed3048dba" Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.585163 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c9b40a6869093db3b3a5030823f467e5e94f393a707a9c8a47e9eaed3048dba"} err="failed to get container status \"2c9b40a6869093db3b3a5030823f467e5e94f393a707a9c8a47e9eaed3048dba\": rpc error: code = NotFound desc = could not find container \"2c9b40a6869093db3b3a5030823f467e5e94f393a707a9c8a47e9eaed3048dba\": container with ID starting with 2c9b40a6869093db3b3a5030823f467e5e94f393a707a9c8a47e9eaed3048dba not found: ID does not exist" Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.615299 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-vggdj"] Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 
07:32:31.625542 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-vggdj"] Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.895649 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7hjzt" Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.937852 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-sstg4" Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.946100 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/370681cd-6bb3-47a4-939e-c705ee3814bd-operator-scripts\") pod \"370681cd-6bb3-47a4-939e-c705ee3814bd\" (UID: \"370681cd-6bb3-47a4-939e-c705ee3814bd\") " Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.946212 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6rjj\" (UniqueName: \"kubernetes.io/projected/370681cd-6bb3-47a4-939e-c705ee3814bd-kube-api-access-h6rjj\") pod \"370681cd-6bb3-47a4-939e-c705ee3814bd\" (UID: \"370681cd-6bb3-47a4-939e-c705ee3814bd\") " Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.946585 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/370681cd-6bb3-47a4-939e-c705ee3814bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "370681cd-6bb3-47a4-939e-c705ee3814bd" (UID: "370681cd-6bb3-47a4-939e-c705ee3814bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.949084 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-1e41-account-create-kbqzd" Nov 25 07:32:31 crc kubenswrapper[5043]: I1125 07:32:31.953129 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/370681cd-6bb3-47a4-939e-c705ee3814bd-kube-api-access-h6rjj" (OuterVolumeSpecName: "kube-api-access-h6rjj") pod "370681cd-6bb3-47a4-939e-c705ee3814bd" (UID: "370681cd-6bb3-47a4-939e-c705ee3814bd"). InnerVolumeSpecName "kube-api-access-h6rjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.047136 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg6lb\" (UniqueName: \"kubernetes.io/projected/bf28ec77-4f7c-43a6-9bab-6ff49979b68d-kube-api-access-qg6lb\") pod \"bf28ec77-4f7c-43a6-9bab-6ff49979b68d\" (UID: \"bf28ec77-4f7c-43a6-9bab-6ff49979b68d\") " Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.047274 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d287967c-61b6-4dc4-bb3f-91e576e6d0c7-operator-scripts\") pod \"d287967c-61b6-4dc4-bb3f-91e576e6d0c7\" (UID: \"d287967c-61b6-4dc4-bb3f-91e576e6d0c7\") " Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.047315 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm7n4\" (UniqueName: \"kubernetes.io/projected/d287967c-61b6-4dc4-bb3f-91e576e6d0c7-kube-api-access-dm7n4\") pod \"d287967c-61b6-4dc4-bb3f-91e576e6d0c7\" (UID: \"d287967c-61b6-4dc4-bb3f-91e576e6d0c7\") " Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.047411 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf28ec77-4f7c-43a6-9bab-6ff49979b68d-operator-scripts\") pod \"bf28ec77-4f7c-43a6-9bab-6ff49979b68d\" (UID: \"bf28ec77-4f7c-43a6-9bab-6ff49979b68d\") " Nov 25 
07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.047867 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6rjj\" (UniqueName: \"kubernetes.io/projected/370681cd-6bb3-47a4-939e-c705ee3814bd-kube-api-access-h6rjj\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.047889 5043 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/370681cd-6bb3-47a4-939e-c705ee3814bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.047882 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d287967c-61b6-4dc4-bb3f-91e576e6d0c7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d287967c-61b6-4dc4-bb3f-91e576e6d0c7" (UID: "d287967c-61b6-4dc4-bb3f-91e576e6d0c7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.048320 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf28ec77-4f7c-43a6-9bab-6ff49979b68d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf28ec77-4f7c-43a6-9bab-6ff49979b68d" (UID: "bf28ec77-4f7c-43a6-9bab-6ff49979b68d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.050464 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d287967c-61b6-4dc4-bb3f-91e576e6d0c7-kube-api-access-dm7n4" (OuterVolumeSpecName: "kube-api-access-dm7n4") pod "d287967c-61b6-4dc4-bb3f-91e576e6d0c7" (UID: "d287967c-61b6-4dc4-bb3f-91e576e6d0c7"). InnerVolumeSpecName "kube-api-access-dm7n4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.052858 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf28ec77-4f7c-43a6-9bab-6ff49979b68d-kube-api-access-qg6lb" (OuterVolumeSpecName: "kube-api-access-qg6lb") pod "bf28ec77-4f7c-43a6-9bab-6ff49979b68d" (UID: "bf28ec77-4f7c-43a6-9bab-6ff49979b68d"). InnerVolumeSpecName "kube-api-access-qg6lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.149668 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg6lb\" (UniqueName: \"kubernetes.io/projected/bf28ec77-4f7c-43a6-9bab-6ff49979b68d-kube-api-access-qg6lb\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.149704 5043 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d287967c-61b6-4dc4-bb3f-91e576e6d0c7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.149717 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm7n4\" (UniqueName: \"kubernetes.io/projected/d287967c-61b6-4dc4-bb3f-91e576e6d0c7-kube-api-access-dm7n4\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.149729 5043 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf28ec77-4f7c-43a6-9bab-6ff49979b68d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.531693 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1e41-account-create-kbqzd" event={"ID":"d287967c-61b6-4dc4-bb3f-91e576e6d0c7","Type":"ContainerDied","Data":"1ba889330c4d160aea96dafecc16b6bfd69861fd069e850b30685decc8d23a9f"} Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 
07:32:32.531759 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1e41-account-create-kbqzd" Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.531767 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ba889330c4d160aea96dafecc16b6bfd69861fd069e850b30685decc8d23a9f" Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.534576 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7hjzt" Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.534579 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7hjzt" event={"ID":"370681cd-6bb3-47a4-939e-c705ee3814bd","Type":"ContainerDied","Data":"1b0eac5bb746afc99625a6b0676d69af7e234fcc8f30a49a392d10a6daecd4b9"} Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.535036 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b0eac5bb746afc99625a6b0676d69af7e234fcc8f30a49a392d10a6daecd4b9" Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.539375 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sstg4" event={"ID":"bf28ec77-4f7c-43a6-9bab-6ff49979b68d","Type":"ContainerDied","Data":"5bd73d458a06c905713e0e42f1d8708d7d6054b26b91b4fd5b0ebcd727895449"} Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.539422 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bd73d458a06c905713e0e42f1d8708d7d6054b26b91b4fd5b0ebcd727895449" Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.539433 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-sstg4" Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.937097 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2c5e-account-create-qsdcj" Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.943574 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d9cc-account-create-2wd58" Nov 25 07:32:32 crc kubenswrapper[5043]: I1125 07:32:32.952581 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qghlr" Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.014647 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7efc89bf-d04b-4b9e-a86e-3eab0f122fa9" path="/var/lib/kubelet/pods/7efc89bf-d04b-4b9e-a86e-3eab0f122fa9/volumes" Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.067566 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7b5fe88-9221-486b-8686-bee5da9fcbf9-operator-scripts\") pod \"e7b5fe88-9221-486b-8686-bee5da9fcbf9\" (UID: \"e7b5fe88-9221-486b-8686-bee5da9fcbf9\") " Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.068175 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7b5fe88-9221-486b-8686-bee5da9fcbf9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e7b5fe88-9221-486b-8686-bee5da9fcbf9" (UID: "e7b5fe88-9221-486b-8686-bee5da9fcbf9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.068675 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9-operator-scripts\") pod \"b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9\" (UID: \"b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9\") " Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.068709 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff376c2e-dfad-443a-8ad3-b5a1cd40cd12-operator-scripts\") pod \"ff376c2e-dfad-443a-8ad3-b5a1cd40cd12\" (UID: \"ff376c2e-dfad-443a-8ad3-b5a1cd40cd12\") " Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.068807 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzpvf\" (UniqueName: \"kubernetes.io/projected/e7b5fe88-9221-486b-8686-bee5da9fcbf9-kube-api-access-wzpvf\") pod \"e7b5fe88-9221-486b-8686-bee5da9fcbf9\" (UID: \"e7b5fe88-9221-486b-8686-bee5da9fcbf9\") " Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.069072 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssg7l\" (UniqueName: \"kubernetes.io/projected/ff376c2e-dfad-443a-8ad3-b5a1cd40cd12-kube-api-access-ssg7l\") pod \"ff376c2e-dfad-443a-8ad3-b5a1cd40cd12\" (UID: \"ff376c2e-dfad-443a-8ad3-b5a1cd40cd12\") " Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.069130 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgthw\" (UniqueName: \"kubernetes.io/projected/b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9-kube-api-access-hgthw\") pod \"b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9\" (UID: \"b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9\") " Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.069300 5043 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9" (UID: "b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.069778 5043 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7b5fe88-9221-486b-8686-bee5da9fcbf9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.069796 5043 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.070300 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff376c2e-dfad-443a-8ad3-b5a1cd40cd12-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff376c2e-dfad-443a-8ad3-b5a1cd40cd12" (UID: "ff376c2e-dfad-443a-8ad3-b5a1cd40cd12"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.072087 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7b5fe88-9221-486b-8686-bee5da9fcbf9-kube-api-access-wzpvf" (OuterVolumeSpecName: "kube-api-access-wzpvf") pod "e7b5fe88-9221-486b-8686-bee5da9fcbf9" (UID: "e7b5fe88-9221-486b-8686-bee5da9fcbf9"). InnerVolumeSpecName "kube-api-access-wzpvf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.074326 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff376c2e-dfad-443a-8ad3-b5a1cd40cd12-kube-api-access-ssg7l" (OuterVolumeSpecName: "kube-api-access-ssg7l") pod "ff376c2e-dfad-443a-8ad3-b5a1cd40cd12" (UID: "ff376c2e-dfad-443a-8ad3-b5a1cd40cd12"). InnerVolumeSpecName "kube-api-access-ssg7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.074986 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9-kube-api-access-hgthw" (OuterVolumeSpecName: "kube-api-access-hgthw") pod "b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9" (UID: "b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9"). InnerVolumeSpecName "kube-api-access-hgthw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.171018 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssg7l\" (UniqueName: \"kubernetes.io/projected/ff376c2e-dfad-443a-8ad3-b5a1cd40cd12-kube-api-access-ssg7l\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.171050 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgthw\" (UniqueName: \"kubernetes.io/projected/b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9-kube-api-access-hgthw\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.171060 5043 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff376c2e-dfad-443a-8ad3-b5a1cd40cd12-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.171070 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzpvf\" (UniqueName: 
\"kubernetes.io/projected/e7b5fe88-9221-486b-8686-bee5da9fcbf9-kube-api-access-wzpvf\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.555004 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2c5e-account-create-qsdcj" event={"ID":"e7b5fe88-9221-486b-8686-bee5da9fcbf9","Type":"ContainerDied","Data":"a013bf312a45f6242b354c8c41f70158472439e111d37661577bb0e5539aa0e0"} Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.555098 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a013bf312a45f6242b354c8c41f70158472439e111d37661577bb0e5539aa0e0" Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.555543 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2c5e-account-create-qsdcj" Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.557719 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d9cc-account-create-2wd58" event={"ID":"b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9","Type":"ContainerDied","Data":"9b1816fdb75a3b8c67d8a2718eb83bced4ef1a12585f01729d6b0cf968a35bd3"} Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.557766 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b1816fdb75a3b8c67d8a2718eb83bced4ef1a12585f01729d6b0cf968a35bd3" Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.557865 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d9cc-account-create-2wd58" Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.562114 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qghlr" event={"ID":"ff376c2e-dfad-443a-8ad3-b5a1cd40cd12","Type":"ContainerDied","Data":"6846b4aef5826911c0be71998bd5c303e4fa9570756a383d2c563240fd0c7691"} Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.562182 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6846b4aef5826911c0be71998bd5c303e4fa9570756a383d2c563240fd0c7691" Nov 25 07:32:33 crc kubenswrapper[5043]: I1125 07:32:33.562216 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qghlr" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.642738 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-cn67d"] Nov 25 07:32:34 crc kubenswrapper[5043]: E1125 07:32:34.643246 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="370681cd-6bb3-47a4-939e-c705ee3814bd" containerName="mariadb-database-create" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.643258 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="370681cd-6bb3-47a4-939e-c705ee3814bd" containerName="mariadb-database-create" Nov 25 07:32:34 crc kubenswrapper[5043]: E1125 07:32:34.643267 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7efc89bf-d04b-4b9e-a86e-3eab0f122fa9" containerName="init" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.643273 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="7efc89bf-d04b-4b9e-a86e-3eab0f122fa9" containerName="init" Nov 25 07:32:34 crc kubenswrapper[5043]: E1125 07:32:34.643285 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d287967c-61b6-4dc4-bb3f-91e576e6d0c7" containerName="mariadb-account-create" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 
07:32:34.643290 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="d287967c-61b6-4dc4-bb3f-91e576e6d0c7" containerName="mariadb-account-create" Nov 25 07:32:34 crc kubenswrapper[5043]: E1125 07:32:34.643304 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9" containerName="mariadb-account-create" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.643309 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9" containerName="mariadb-account-create" Nov 25 07:32:34 crc kubenswrapper[5043]: E1125 07:32:34.643320 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7efc89bf-d04b-4b9e-a86e-3eab0f122fa9" containerName="dnsmasq-dns" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.643325 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="7efc89bf-d04b-4b9e-a86e-3eab0f122fa9" containerName="dnsmasq-dns" Nov 25 07:32:34 crc kubenswrapper[5043]: E1125 07:32:34.643337 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf28ec77-4f7c-43a6-9bab-6ff49979b68d" containerName="mariadb-database-create" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.643342 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf28ec77-4f7c-43a6-9bab-6ff49979b68d" containerName="mariadb-database-create" Nov 25 07:32:34 crc kubenswrapper[5043]: E1125 07:32:34.643354 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff376c2e-dfad-443a-8ad3-b5a1cd40cd12" containerName="mariadb-database-create" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.643360 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff376c2e-dfad-443a-8ad3-b5a1cd40cd12" containerName="mariadb-database-create" Nov 25 07:32:34 crc kubenswrapper[5043]: E1125 07:32:34.643379 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b5fe88-9221-486b-8686-bee5da9fcbf9" containerName="mariadb-account-create" Nov 25 
07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.643384 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b5fe88-9221-486b-8686-bee5da9fcbf9" containerName="mariadb-account-create" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.643517 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="7efc89bf-d04b-4b9e-a86e-3eab0f122fa9" containerName="dnsmasq-dns" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.643531 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9" containerName="mariadb-account-create" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.643538 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7b5fe88-9221-486b-8686-bee5da9fcbf9" containerName="mariadb-account-create" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.643550 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="d287967c-61b6-4dc4-bb3f-91e576e6d0c7" containerName="mariadb-account-create" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.643560 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf28ec77-4f7c-43a6-9bab-6ff49979b68d" containerName="mariadb-database-create" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.643569 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff376c2e-dfad-443a-8ad3-b5a1cd40cd12" containerName="mariadb-database-create" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.643578 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="370681cd-6bb3-47a4-939e-c705ee3814bd" containerName="mariadb-database-create" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.644041 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-cn67d" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.646290 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.646535 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vvnvv" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.655028 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-cn67d"] Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.698438 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd6zq\" (UniqueName: \"kubernetes.io/projected/9a15aa1c-a8ff-46d3-9893-a3ea429171b8-kube-api-access-bd6zq\") pod \"glance-db-sync-cn67d\" (UID: \"9a15aa1c-a8ff-46d3-9893-a3ea429171b8\") " pod="openstack/glance-db-sync-cn67d" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.698501 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a15aa1c-a8ff-46d3-9893-a3ea429171b8-config-data\") pod \"glance-db-sync-cn67d\" (UID: \"9a15aa1c-a8ff-46d3-9893-a3ea429171b8\") " pod="openstack/glance-db-sync-cn67d" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.698538 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a15aa1c-a8ff-46d3-9893-a3ea429171b8-db-sync-config-data\") pod \"glance-db-sync-cn67d\" (UID: \"9a15aa1c-a8ff-46d3-9893-a3ea429171b8\") " pod="openstack/glance-db-sync-cn67d" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.698638 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9a15aa1c-a8ff-46d3-9893-a3ea429171b8-combined-ca-bundle\") pod \"glance-db-sync-cn67d\" (UID: \"9a15aa1c-a8ff-46d3-9893-a3ea429171b8\") " pod="openstack/glance-db-sync-cn67d" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.801000 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a15aa1c-a8ff-46d3-9893-a3ea429171b8-db-sync-config-data\") pod \"glance-db-sync-cn67d\" (UID: \"9a15aa1c-a8ff-46d3-9893-a3ea429171b8\") " pod="openstack/glance-db-sync-cn67d" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.801089 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a15aa1c-a8ff-46d3-9893-a3ea429171b8-combined-ca-bundle\") pod \"glance-db-sync-cn67d\" (UID: \"9a15aa1c-a8ff-46d3-9893-a3ea429171b8\") " pod="openstack/glance-db-sync-cn67d" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.801196 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd6zq\" (UniqueName: \"kubernetes.io/projected/9a15aa1c-a8ff-46d3-9893-a3ea429171b8-kube-api-access-bd6zq\") pod \"glance-db-sync-cn67d\" (UID: \"9a15aa1c-a8ff-46d3-9893-a3ea429171b8\") " pod="openstack/glance-db-sync-cn67d" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.801277 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a15aa1c-a8ff-46d3-9893-a3ea429171b8-config-data\") pod \"glance-db-sync-cn67d\" (UID: \"9a15aa1c-a8ff-46d3-9893-a3ea429171b8\") " pod="openstack/glance-db-sync-cn67d" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.806870 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a15aa1c-a8ff-46d3-9893-a3ea429171b8-db-sync-config-data\") pod \"glance-db-sync-cn67d\" (UID: 
\"9a15aa1c-a8ff-46d3-9893-a3ea429171b8\") " pod="openstack/glance-db-sync-cn67d" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.808896 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a15aa1c-a8ff-46d3-9893-a3ea429171b8-config-data\") pod \"glance-db-sync-cn67d\" (UID: \"9a15aa1c-a8ff-46d3-9893-a3ea429171b8\") " pod="openstack/glance-db-sync-cn67d" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.819790 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a15aa1c-a8ff-46d3-9893-a3ea429171b8-combined-ca-bundle\") pod \"glance-db-sync-cn67d\" (UID: \"9a15aa1c-a8ff-46d3-9893-a3ea429171b8\") " pod="openstack/glance-db-sync-cn67d" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.856493 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd6zq\" (UniqueName: \"kubernetes.io/projected/9a15aa1c-a8ff-46d3-9893-a3ea429171b8-kube-api-access-bd6zq\") pod \"glance-db-sync-cn67d\" (UID: \"9a15aa1c-a8ff-46d3-9893-a3ea429171b8\") " pod="openstack/glance-db-sync-cn67d" Nov 25 07:32:34 crc kubenswrapper[5043]: I1125 07:32:34.962713 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-cn67d" Nov 25 07:32:35 crc kubenswrapper[5043]: I1125 07:32:35.531159 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-cn67d"] Nov 25 07:32:35 crc kubenswrapper[5043]: W1125 07:32:35.536978 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a15aa1c_a8ff_46d3_9893_a3ea429171b8.slice/crio-c9e0be70bb9a449588a008f6842b7213930470640b82845920519ef013be22fe WatchSource:0}: Error finding container c9e0be70bb9a449588a008f6842b7213930470640b82845920519ef013be22fe: Status 404 returned error can't find the container with id c9e0be70bb9a449588a008f6842b7213930470640b82845920519ef013be22fe Nov 25 07:32:35 crc kubenswrapper[5043]: I1125 07:32:35.577206 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cn67d" event={"ID":"9a15aa1c-a8ff-46d3-9893-a3ea429171b8","Type":"ContainerStarted","Data":"c9e0be70bb9a449588a008f6842b7213930470640b82845920519ef013be22fe"} Nov 25 07:32:35 crc kubenswrapper[5043]: I1125 07:32:35.947715 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 25 07:32:37 crc kubenswrapper[5043]: I1125 07:32:37.597817 5043 generic.go:334] "Generic (PLEG): container finished" podID="5f4796f0-ec1b-4f62-bdad-9927841c80db" containerID="70089bdd1b0f795e87c83919ae24ce5252b461dfa8f392e8b428fab83c5a3a9b" exitCode=0 Nov 25 07:32:37 crc kubenswrapper[5043]: I1125 07:32:37.597915 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5f4796f0-ec1b-4f62-bdad-9927841c80db","Type":"ContainerDied","Data":"70089bdd1b0f795e87c83919ae24ce5252b461dfa8f392e8b428fab83c5a3a9b"} Nov 25 07:32:37 crc kubenswrapper[5043]: I1125 07:32:37.602512 5043 generic.go:334] "Generic (PLEG): container finished" podID="d61213dd-2002-44b6-8904-21c0a754ae66" 
containerID="c28319fb13ea4ff76aa875432a39af443bf02c9985bfe22b166b3cfde0e83ea8" exitCode=0 Nov 25 07:32:37 crc kubenswrapper[5043]: I1125 07:32:37.602553 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d61213dd-2002-44b6-8904-21c0a754ae66","Type":"ContainerDied","Data":"c28319fb13ea4ff76aa875432a39af443bf02c9985bfe22b166b3cfde0e83ea8"} Nov 25 07:32:38 crc kubenswrapper[5043]: I1125 07:32:38.614352 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5f4796f0-ec1b-4f62-bdad-9927841c80db","Type":"ContainerStarted","Data":"f5f36721d34be995bcec3b093c487cabebec92fb14219129f9b74345b3956dcf"} Nov 25 07:32:38 crc kubenswrapper[5043]: I1125 07:32:38.614913 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 25 07:32:38 crc kubenswrapper[5043]: I1125 07:32:38.616516 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d61213dd-2002-44b6-8904-21c0a754ae66","Type":"ContainerStarted","Data":"c86d1981443974107f4b4f467115364e3967b5166cbd2571544390fa87322973"} Nov 25 07:32:38 crc kubenswrapper[5043]: I1125 07:32:38.617162 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:32:38 crc kubenswrapper[5043]: I1125 07:32:38.663347 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=44.095551559 podStartE2EDuration="55.663328396s" podCreationTimestamp="2025-11-25 07:31:43 +0000 UTC" firstStartedPulling="2025-11-25 07:31:49.95712278 +0000 UTC m=+974.125318501" lastFinishedPulling="2025-11-25 07:32:01.524899627 +0000 UTC m=+985.693095338" observedRunningTime="2025-11-25 07:32:38.640887714 +0000 UTC m=+1022.809083435" watchObservedRunningTime="2025-11-25 07:32:38.663328396 +0000 UTC m=+1022.831524127" Nov 25 07:32:38 crc 
kubenswrapper[5043]: I1125 07:32:38.664699 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=54.664691252 podStartE2EDuration="54.664691252s" podCreationTimestamp="2025-11-25 07:31:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:32:38.66050831 +0000 UTC m=+1022.828704031" watchObservedRunningTime="2025-11-25 07:32:38.664691252 +0000 UTC m=+1022.832886983" Nov 25 07:32:44 crc kubenswrapper[5043]: I1125 07:32:44.390776 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-pvbbc" podUID="4bd061b9-bc56-4e7f-b7eb-d12486d15712" containerName="ovn-controller" probeResult="failure" output=< Nov 25 07:32:44 crc kubenswrapper[5043]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 07:32:44 crc kubenswrapper[5043]: > Nov 25 07:32:48 crc kubenswrapper[5043]: I1125 07:32:48.697198 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cn67d" event={"ID":"9a15aa1c-a8ff-46d3-9893-a3ea429171b8","Type":"ContainerStarted","Data":"161d74bdd037f2ba4bba94bafd8f7d94e2ce7189ed535577f904e7f347e9a7e9"} Nov 25 07:32:48 crc kubenswrapper[5043]: I1125 07:32:48.721575 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-cn67d" podStartSLOduration=2.345953319 podStartE2EDuration="14.721554065s" podCreationTimestamp="2025-11-25 07:32:34 +0000 UTC" firstStartedPulling="2025-11-25 07:32:35.539727403 +0000 UTC m=+1019.707923134" lastFinishedPulling="2025-11-25 07:32:47.915328159 +0000 UTC m=+1032.083523880" observedRunningTime="2025-11-25 07:32:48.712596295 +0000 UTC m=+1032.880792026" watchObservedRunningTime="2025-11-25 07:32:48.721554065 +0000 UTC m=+1032.889749796" Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.375847 5043 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack/ovn-controller-pvbbc" podUID="4bd061b9-bc56-4e7f-b7eb-d12486d15712" containerName="ovn-controller" probeResult="failure" output=< Nov 25 07:32:49 crc kubenswrapper[5043]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 07:32:49 crc kubenswrapper[5043]: > Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.389309 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-s57wr" Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.396038 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-s57wr" Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.616761 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-pvbbc-config-8hc9f"] Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.618020 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pvbbc-config-8hc9f" Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.620067 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.624067 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pvbbc-config-8hc9f"] Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.742880 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqhw2\" (UniqueName: \"kubernetes.io/projected/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-kube-api-access-wqhw2\") pod \"ovn-controller-pvbbc-config-8hc9f\" (UID: \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\") " pod="openstack/ovn-controller-pvbbc-config-8hc9f" Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.742971 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-scripts\") pod \"ovn-controller-pvbbc-config-8hc9f\" (UID: \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\") " pod="openstack/ovn-controller-pvbbc-config-8hc9f" Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.743096 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-additional-scripts\") pod \"ovn-controller-pvbbc-config-8hc9f\" (UID: \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\") " pod="openstack/ovn-controller-pvbbc-config-8hc9f" Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.743131 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-var-run-ovn\") pod \"ovn-controller-pvbbc-config-8hc9f\" (UID: \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\") " pod="openstack/ovn-controller-pvbbc-config-8hc9f" Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.743209 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-var-log-ovn\") pod \"ovn-controller-pvbbc-config-8hc9f\" (UID: \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\") " pod="openstack/ovn-controller-pvbbc-config-8hc9f" Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.743251 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-var-run\") pod \"ovn-controller-pvbbc-config-8hc9f\" (UID: \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\") " pod="openstack/ovn-controller-pvbbc-config-8hc9f" Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.844910 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-additional-scripts\") pod \"ovn-controller-pvbbc-config-8hc9f\" (UID: \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\") " pod="openstack/ovn-controller-pvbbc-config-8hc9f" Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.844971 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-var-run-ovn\") pod \"ovn-controller-pvbbc-config-8hc9f\" (UID: \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\") " pod="openstack/ovn-controller-pvbbc-config-8hc9f" Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.845013 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-var-log-ovn\") pod \"ovn-controller-pvbbc-config-8hc9f\" (UID: \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\") " pod="openstack/ovn-controller-pvbbc-config-8hc9f" Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.845308 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-var-run-ovn\") pod \"ovn-controller-pvbbc-config-8hc9f\" (UID: \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\") " pod="openstack/ovn-controller-pvbbc-config-8hc9f" Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.845320 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-var-log-ovn\") pod \"ovn-controller-pvbbc-config-8hc9f\" (UID: \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\") " pod="openstack/ovn-controller-pvbbc-config-8hc9f" Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.845612 5043 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-var-run\") pod \"ovn-controller-pvbbc-config-8hc9f\" (UID: \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\") " pod="openstack/ovn-controller-pvbbc-config-8hc9f" Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.845651 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqhw2\" (UniqueName: \"kubernetes.io/projected/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-kube-api-access-wqhw2\") pod \"ovn-controller-pvbbc-config-8hc9f\" (UID: \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\") " pod="openstack/ovn-controller-pvbbc-config-8hc9f" Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.845737 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-var-run\") pod \"ovn-controller-pvbbc-config-8hc9f\" (UID: \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\") " pod="openstack/ovn-controller-pvbbc-config-8hc9f" Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.845742 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-scripts\") pod \"ovn-controller-pvbbc-config-8hc9f\" (UID: \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\") " pod="openstack/ovn-controller-pvbbc-config-8hc9f" Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.846535 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-additional-scripts\") pod \"ovn-controller-pvbbc-config-8hc9f\" (UID: \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\") " pod="openstack/ovn-controller-pvbbc-config-8hc9f" Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.847787 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-scripts\") pod \"ovn-controller-pvbbc-config-8hc9f\" (UID: \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\") " pod="openstack/ovn-controller-pvbbc-config-8hc9f" Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.872767 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqhw2\" (UniqueName: \"kubernetes.io/projected/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-kube-api-access-wqhw2\") pod \"ovn-controller-pvbbc-config-8hc9f\" (UID: \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\") " pod="openstack/ovn-controller-pvbbc-config-8hc9f" Nov 25 07:32:49 crc kubenswrapper[5043]: I1125 07:32:49.973864 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pvbbc-config-8hc9f" Nov 25 07:32:50 crc kubenswrapper[5043]: I1125 07:32:50.423099 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pvbbc-config-8hc9f"] Nov 25 07:32:50 crc kubenswrapper[5043]: W1125 07:32:50.424895 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d4c2ddd_43ad_4f2f_87fa_57ccf82c032e.slice/crio-65d52525bde134cce086b1fd5b6f1530523dddaa50976c0c03b8be508846c2fa WatchSource:0}: Error finding container 65d52525bde134cce086b1fd5b6f1530523dddaa50976c0c03b8be508846c2fa: Status 404 returned error can't find the container with id 65d52525bde134cce086b1fd5b6f1530523dddaa50976c0c03b8be508846c2fa Nov 25 07:32:50 crc kubenswrapper[5043]: I1125 07:32:50.729100 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pvbbc-config-8hc9f" event={"ID":"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e","Type":"ContainerStarted","Data":"c4c40f4c8dde9cb0d8fd804208661b84ab8884edfc7b454a6e7e3aa8f428a91f"} Nov 25 07:32:50 crc kubenswrapper[5043]: I1125 07:32:50.729673 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-pvbbc-config-8hc9f" event={"ID":"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e","Type":"ContainerStarted","Data":"65d52525bde134cce086b1fd5b6f1530523dddaa50976c0c03b8be508846c2fa"} Nov 25 07:32:50 crc kubenswrapper[5043]: I1125 07:32:50.757773 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-pvbbc-config-8hc9f" podStartSLOduration=1.757756106 podStartE2EDuration="1.757756106s" podCreationTimestamp="2025-11-25 07:32:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:32:50.752213367 +0000 UTC m=+1034.920409138" watchObservedRunningTime="2025-11-25 07:32:50.757756106 +0000 UTC m=+1034.925951827" Nov 25 07:32:51 crc kubenswrapper[5043]: E1125 07:32:51.205663 5043 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d4c2ddd_43ad_4f2f_87fa_57ccf82c032e.slice/crio-conmon-c4c40f4c8dde9cb0d8fd804208661b84ab8884edfc7b454a6e7e3aa8f428a91f.scope\": RecentStats: unable to find data in memory cache]" Nov 25 07:32:51 crc kubenswrapper[5043]: I1125 07:32:51.739542 5043 generic.go:334] "Generic (PLEG): container finished" podID="9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e" containerID="c4c40f4c8dde9cb0d8fd804208661b84ab8884edfc7b454a6e7e3aa8f428a91f" exitCode=0 Nov 25 07:32:51 crc kubenswrapper[5043]: I1125 07:32:51.739592 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pvbbc-config-8hc9f" event={"ID":"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e","Type":"ContainerDied","Data":"c4c40f4c8dde9cb0d8fd804208661b84ab8884edfc7b454a6e7e3aa8f428a91f"} Nov 25 07:32:53 crc kubenswrapper[5043]: I1125 07:32:53.117283 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pvbbc-config-8hc9f" Nov 25 07:32:53 crc kubenswrapper[5043]: I1125 07:32:53.206005 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-var-run-ovn\") pod \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\" (UID: \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\") " Nov 25 07:32:53 crc kubenswrapper[5043]: I1125 07:32:53.206418 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqhw2\" (UniqueName: \"kubernetes.io/projected/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-kube-api-access-wqhw2\") pod \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\" (UID: \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\") " Nov 25 07:32:53 crc kubenswrapper[5043]: I1125 07:32:53.206475 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-var-log-ovn\") pod \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\" (UID: \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\") " Nov 25 07:32:53 crc kubenswrapper[5043]: I1125 07:32:53.206510 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-scripts\") pod \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\" (UID: \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\") " Nov 25 07:32:53 crc kubenswrapper[5043]: I1125 07:32:53.206529 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-var-run\") pod \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\" (UID: \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\") " Nov 25 07:32:53 crc kubenswrapper[5043]: I1125 07:32:53.206576 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" 
(UniqueName: \"kubernetes.io/configmap/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-additional-scripts\") pod \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\" (UID: \"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e\") " Nov 25 07:32:53 crc kubenswrapper[5043]: I1125 07:32:53.207884 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e" (UID: "9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:32:53 crc kubenswrapper[5043]: I1125 07:32:53.207933 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e" (UID: "9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:32:53 crc kubenswrapper[5043]: I1125 07:32:53.208578 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e" (UID: "9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:32:53 crc kubenswrapper[5043]: I1125 07:32:53.208633 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-var-run" (OuterVolumeSpecName: "var-run") pod "9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e" (UID: "9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:32:53 crc kubenswrapper[5043]: I1125 07:32:53.210071 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-scripts" (OuterVolumeSpecName: "scripts") pod "9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e" (UID: "9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:32:53 crc kubenswrapper[5043]: I1125 07:32:53.213749 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-kube-api-access-wqhw2" (OuterVolumeSpecName: "kube-api-access-wqhw2") pod "9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e" (UID: "9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e"). InnerVolumeSpecName "kube-api-access-wqhw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:32:53 crc kubenswrapper[5043]: I1125 07:32:53.309346 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqhw2\" (UniqueName: \"kubernetes.io/projected/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-kube-api-access-wqhw2\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:53 crc kubenswrapper[5043]: I1125 07:32:53.309402 5043 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:53 crc kubenswrapper[5043]: I1125 07:32:53.309423 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:53 crc kubenswrapper[5043]: I1125 07:32:53.309440 5043 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-var-run\") on node \"crc\" DevicePath \"\"" Nov 25 
07:32:53 crc kubenswrapper[5043]: I1125 07:32:53.309461 5043 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:53 crc kubenswrapper[5043]: I1125 07:32:53.309482 5043 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:53 crc kubenswrapper[5043]: I1125 07:32:53.762979 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pvbbc-config-8hc9f" event={"ID":"9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e","Type":"ContainerDied","Data":"65d52525bde134cce086b1fd5b6f1530523dddaa50976c0c03b8be508846c2fa"} Nov 25 07:32:53 crc kubenswrapper[5043]: I1125 07:32:53.763026 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65d52525bde134cce086b1fd5b6f1530523dddaa50976c0c03b8be508846c2fa" Nov 25 07:32:53 crc kubenswrapper[5043]: I1125 07:32:53.763094 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pvbbc-config-8hc9f" Nov 25 07:32:53 crc kubenswrapper[5043]: I1125 07:32:53.861863 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-pvbbc-config-8hc9f"] Nov 25 07:32:53 crc kubenswrapper[5043]: I1125 07:32:53.893114 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-pvbbc-config-8hc9f"] Nov 25 07:32:54 crc kubenswrapper[5043]: I1125 07:32:54.368259 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-pvbbc" Nov 25 07:32:54 crc kubenswrapper[5043]: I1125 07:32:54.982460 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e" path="/var/lib/kubelet/pods/9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e/volumes" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.358948 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.645731 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.690680 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-szw8b"] Nov 25 07:32:55 crc kubenswrapper[5043]: E1125 07:32:55.691077 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e" containerName="ovn-config" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.691118 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e" containerName="ovn-config" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.691326 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d4c2ddd-43ad-4f2f-87fa-57ccf82c032e" containerName="ovn-config" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.700020 5043 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-szw8b"] Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.700126 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-szw8b" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.782540 5043 generic.go:334] "Generic (PLEG): container finished" podID="9a15aa1c-a8ff-46d3-9893-a3ea429171b8" containerID="161d74bdd037f2ba4bba94bafd8f7d94e2ce7189ed535577f904e7f347e9a7e9" exitCode=0 Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.782676 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cn67d" event={"ID":"9a15aa1c-a8ff-46d3-9893-a3ea429171b8","Type":"ContainerDied","Data":"161d74bdd037f2ba4bba94bafd8f7d94e2ce7189ed535577f904e7f347e9a7e9"} Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.784768 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-4xbtr"] Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.786095 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4xbtr" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.793383 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9a38-account-create-4ldc5"] Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.794374 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9a38-account-create-4ldc5" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.796670 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.803747 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4xbtr"] Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.821685 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9a38-account-create-4ldc5"] Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.849913 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eda179e9-563d-426d-b4a9-1aca3f47acfe-operator-scripts\") pod \"cinder-db-create-szw8b\" (UID: \"eda179e9-563d-426d-b4a9-1aca3f47acfe\") " pod="openstack/cinder-db-create-szw8b" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.850011 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2859a00-00bb-4358-a60e-083415c768e1-operator-scripts\") pod \"barbican-db-create-4xbtr\" (UID: \"a2859a00-00bb-4358-a60e-083415c768e1\") " pod="openstack/barbican-db-create-4xbtr" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.850073 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkxsm\" (UniqueName: \"kubernetes.io/projected/eda179e9-563d-426d-b4a9-1aca3f47acfe-kube-api-access-dkxsm\") pod \"cinder-db-create-szw8b\" (UID: \"eda179e9-563d-426d-b4a9-1aca3f47acfe\") " pod="openstack/cinder-db-create-szw8b" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.850106 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzwmp\" (UniqueName: 
\"kubernetes.io/projected/a2859a00-00bb-4358-a60e-083415c768e1-kube-api-access-wzwmp\") pod \"barbican-db-create-4xbtr\" (UID: \"a2859a00-00bb-4358-a60e-083415c768e1\") " pod="openstack/barbican-db-create-4xbtr" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.895382 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-f62d-account-create-24psn"] Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.896349 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f62d-account-create-24psn" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.901086 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.917286 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f62d-account-create-24psn"] Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.951110 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8nbs\" (UniqueName: \"kubernetes.io/projected/8b63c108-4aa7-49c8-a12c-51554851c41e-kube-api-access-s8nbs\") pod \"cinder-9a38-account-create-4ldc5\" (UID: \"8b63c108-4aa7-49c8-a12c-51554851c41e\") " pod="openstack/cinder-9a38-account-create-4ldc5" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.951793 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2859a00-00bb-4358-a60e-083415c768e1-operator-scripts\") pod \"barbican-db-create-4xbtr\" (UID: \"a2859a00-00bb-4358-a60e-083415c768e1\") " pod="openstack/barbican-db-create-4xbtr" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.951936 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d943615-5ac6-450f-aeec-baa4d0833e9b-operator-scripts\") 
pod \"barbican-f62d-account-create-24psn\" (UID: \"4d943615-5ac6-450f-aeec-baa4d0833e9b\") " pod="openstack/barbican-f62d-account-create-24psn" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.952115 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkxsm\" (UniqueName: \"kubernetes.io/projected/eda179e9-563d-426d-b4a9-1aca3f47acfe-kube-api-access-dkxsm\") pod \"cinder-db-create-szw8b\" (UID: \"eda179e9-563d-426d-b4a9-1aca3f47acfe\") " pod="openstack/cinder-db-create-szw8b" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.952233 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzwmp\" (UniqueName: \"kubernetes.io/projected/a2859a00-00bb-4358-a60e-083415c768e1-kube-api-access-wzwmp\") pod \"barbican-db-create-4xbtr\" (UID: \"a2859a00-00bb-4358-a60e-083415c768e1\") " pod="openstack/barbican-db-create-4xbtr" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.952342 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b63c108-4aa7-49c8-a12c-51554851c41e-operator-scripts\") pod \"cinder-9a38-account-create-4ldc5\" (UID: \"8b63c108-4aa7-49c8-a12c-51554851c41e\") " pod="openstack/cinder-9a38-account-create-4ldc5" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.952489 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-298f7\" (UniqueName: \"kubernetes.io/projected/4d943615-5ac6-450f-aeec-baa4d0833e9b-kube-api-access-298f7\") pod \"barbican-f62d-account-create-24psn\" (UID: \"4d943615-5ac6-450f-aeec-baa4d0833e9b\") " pod="openstack/barbican-f62d-account-create-24psn" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.952591 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/eda179e9-563d-426d-b4a9-1aca3f47acfe-operator-scripts\") pod \"cinder-db-create-szw8b\" (UID: \"eda179e9-563d-426d-b4a9-1aca3f47acfe\") " pod="openstack/cinder-db-create-szw8b" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.952687 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2859a00-00bb-4358-a60e-083415c768e1-operator-scripts\") pod \"barbican-db-create-4xbtr\" (UID: \"a2859a00-00bb-4358-a60e-083415c768e1\") " pod="openstack/barbican-db-create-4xbtr" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.953488 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eda179e9-563d-426d-b4a9-1aca3f47acfe-operator-scripts\") pod \"cinder-db-create-szw8b\" (UID: \"eda179e9-563d-426d-b4a9-1aca3f47acfe\") " pod="openstack/cinder-db-create-szw8b" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.977725 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-mwtqz"] Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.978688 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-mwtqz" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.980329 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkxsm\" (UniqueName: \"kubernetes.io/projected/eda179e9-563d-426d-b4a9-1aca3f47acfe-kube-api-access-dkxsm\") pod \"cinder-db-create-szw8b\" (UID: \"eda179e9-563d-426d-b4a9-1aca3f47acfe\") " pod="openstack/cinder-db-create-szw8b" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.987078 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzwmp\" (UniqueName: \"kubernetes.io/projected/a2859a00-00bb-4358-a60e-083415c768e1-kube-api-access-wzwmp\") pod \"barbican-db-create-4xbtr\" (UID: \"a2859a00-00bb-4358-a60e-083415c768e1\") " pod="openstack/barbican-db-create-4xbtr" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.990113 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-cdl2v"] Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.991260 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-cdl2v" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.994050 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xt29v" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.994118 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.994123 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 07:32:55 crc kubenswrapper[5043]: I1125 07:32:55.994285 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.004254 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mwtqz"] Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.011292 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-cdl2v"] Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.039116 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-szw8b" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.054698 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-298f7\" (UniqueName: \"kubernetes.io/projected/4d943615-5ac6-450f-aeec-baa4d0833e9b-kube-api-access-298f7\") pod \"barbican-f62d-account-create-24psn\" (UID: \"4d943615-5ac6-450f-aeec-baa4d0833e9b\") " pod="openstack/barbican-f62d-account-create-24psn" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.054755 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb045c8-8071-479e-ae53-287767fb69b9-combined-ca-bundle\") pod \"keystone-db-sync-cdl2v\" (UID: \"1eb045c8-8071-479e-ae53-287767fb69b9\") " pod="openstack/keystone-db-sync-cdl2v" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.054802 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c9d3ac-a1e3-4354-b3b2-31bc32818a60-operator-scripts\") pod \"neutron-db-create-mwtqz\" (UID: \"27c9d3ac-a1e3-4354-b3b2-31bc32818a60\") " pod="openstack/neutron-db-create-mwtqz" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.054887 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb045c8-8071-479e-ae53-287767fb69b9-config-data\") pod \"keystone-db-sync-cdl2v\" (UID: \"1eb045c8-8071-479e-ae53-287767fb69b9\") " pod="openstack/keystone-db-sync-cdl2v" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.054913 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8nbs\" (UniqueName: \"kubernetes.io/projected/8b63c108-4aa7-49c8-a12c-51554851c41e-kube-api-access-s8nbs\") pod \"cinder-9a38-account-create-4ldc5\" (UID: 
\"8b63c108-4aa7-49c8-a12c-51554851c41e\") " pod="openstack/cinder-9a38-account-create-4ldc5" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.054963 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d943615-5ac6-450f-aeec-baa4d0833e9b-operator-scripts\") pod \"barbican-f62d-account-create-24psn\" (UID: \"4d943615-5ac6-450f-aeec-baa4d0833e9b\") " pod="openstack/barbican-f62d-account-create-24psn" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.054987 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gzg5\" (UniqueName: \"kubernetes.io/projected/27c9d3ac-a1e3-4354-b3b2-31bc32818a60-kube-api-access-9gzg5\") pod \"neutron-db-create-mwtqz\" (UID: \"27c9d3ac-a1e3-4354-b3b2-31bc32818a60\") " pod="openstack/neutron-db-create-mwtqz" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.055064 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b63c108-4aa7-49c8-a12c-51554851c41e-operator-scripts\") pod \"cinder-9a38-account-create-4ldc5\" (UID: \"8b63c108-4aa7-49c8-a12c-51554851c41e\") " pod="openstack/cinder-9a38-account-create-4ldc5" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.055095 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf8rr\" (UniqueName: \"kubernetes.io/projected/1eb045c8-8071-479e-ae53-287767fb69b9-kube-api-access-wf8rr\") pod \"keystone-db-sync-cdl2v\" (UID: \"1eb045c8-8071-479e-ae53-287767fb69b9\") " pod="openstack/keystone-db-sync-cdl2v" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.058115 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b63c108-4aa7-49c8-a12c-51554851c41e-operator-scripts\") pod 
\"cinder-9a38-account-create-4ldc5\" (UID: \"8b63c108-4aa7-49c8-a12c-51554851c41e\") " pod="openstack/cinder-9a38-account-create-4ldc5" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.058731 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d943615-5ac6-450f-aeec-baa4d0833e9b-operator-scripts\") pod \"barbican-f62d-account-create-24psn\" (UID: \"4d943615-5ac6-450f-aeec-baa4d0833e9b\") " pod="openstack/barbican-f62d-account-create-24psn" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.075313 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8nbs\" (UniqueName: \"kubernetes.io/projected/8b63c108-4aa7-49c8-a12c-51554851c41e-kube-api-access-s8nbs\") pod \"cinder-9a38-account-create-4ldc5\" (UID: \"8b63c108-4aa7-49c8-a12c-51554851c41e\") " pod="openstack/cinder-9a38-account-create-4ldc5" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.096695 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-58b8-account-create-95g6x"] Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.100958 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-298f7\" (UniqueName: \"kubernetes.io/projected/4d943615-5ac6-450f-aeec-baa4d0833e9b-kube-api-access-298f7\") pod \"barbican-f62d-account-create-24psn\" (UID: \"4d943615-5ac6-450f-aeec-baa4d0833e9b\") " pod="openstack/barbican-f62d-account-create-24psn" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.102619 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58b8-account-create-95g6x" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.110082 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.118196 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-4xbtr" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.127578 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9a38-account-create-4ldc5" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.138746 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58b8-account-create-95g6x"] Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.163947 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb045c8-8071-479e-ae53-287767fb69b9-combined-ca-bundle\") pod \"keystone-db-sync-cdl2v\" (UID: \"1eb045c8-8071-479e-ae53-287767fb69b9\") " pod="openstack/keystone-db-sync-cdl2v" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.164013 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c9d3ac-a1e3-4354-b3b2-31bc32818a60-operator-scripts\") pod \"neutron-db-create-mwtqz\" (UID: \"27c9d3ac-a1e3-4354-b3b2-31bc32818a60\") " pod="openstack/neutron-db-create-mwtqz" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.164057 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb045c8-8071-479e-ae53-287767fb69b9-config-data\") pod \"keystone-db-sync-cdl2v\" (UID: \"1eb045c8-8071-479e-ae53-287767fb69b9\") " pod="openstack/keystone-db-sync-cdl2v" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.164114 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gzg5\" (UniqueName: \"kubernetes.io/projected/27c9d3ac-a1e3-4354-b3b2-31bc32818a60-kube-api-access-9gzg5\") pod \"neutron-db-create-mwtqz\" (UID: \"27c9d3ac-a1e3-4354-b3b2-31bc32818a60\") " pod="openstack/neutron-db-create-mwtqz" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 
07:32:56.164148 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa69cf56-800b-45ac-8f74-18f393900d61-operator-scripts\") pod \"neutron-58b8-account-create-95g6x\" (UID: \"aa69cf56-800b-45ac-8f74-18f393900d61\") " pod="openstack/neutron-58b8-account-create-95g6x" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.164200 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf8rr\" (UniqueName: \"kubernetes.io/projected/1eb045c8-8071-479e-ae53-287767fb69b9-kube-api-access-wf8rr\") pod \"keystone-db-sync-cdl2v\" (UID: \"1eb045c8-8071-479e-ae53-287767fb69b9\") " pod="openstack/keystone-db-sync-cdl2v" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.164246 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djwvz\" (UniqueName: \"kubernetes.io/projected/aa69cf56-800b-45ac-8f74-18f393900d61-kube-api-access-djwvz\") pod \"neutron-58b8-account-create-95g6x\" (UID: \"aa69cf56-800b-45ac-8f74-18f393900d61\") " pod="openstack/neutron-58b8-account-create-95g6x" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.168204 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c9d3ac-a1e3-4354-b3b2-31bc32818a60-operator-scripts\") pod \"neutron-db-create-mwtqz\" (UID: \"27c9d3ac-a1e3-4354-b3b2-31bc32818a60\") " pod="openstack/neutron-db-create-mwtqz" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.170985 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb045c8-8071-479e-ae53-287767fb69b9-config-data\") pod \"keystone-db-sync-cdl2v\" (UID: \"1eb045c8-8071-479e-ae53-287767fb69b9\") " pod="openstack/keystone-db-sync-cdl2v" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.175987 
5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb045c8-8071-479e-ae53-287767fb69b9-combined-ca-bundle\") pod \"keystone-db-sync-cdl2v\" (UID: \"1eb045c8-8071-479e-ae53-287767fb69b9\") " pod="openstack/keystone-db-sync-cdl2v" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.188870 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gzg5\" (UniqueName: \"kubernetes.io/projected/27c9d3ac-a1e3-4354-b3b2-31bc32818a60-kube-api-access-9gzg5\") pod \"neutron-db-create-mwtqz\" (UID: \"27c9d3ac-a1e3-4354-b3b2-31bc32818a60\") " pod="openstack/neutron-db-create-mwtqz" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.204930 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf8rr\" (UniqueName: \"kubernetes.io/projected/1eb045c8-8071-479e-ae53-287767fb69b9-kube-api-access-wf8rr\") pod \"keystone-db-sync-cdl2v\" (UID: \"1eb045c8-8071-479e-ae53-287767fb69b9\") " pod="openstack/keystone-db-sync-cdl2v" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.220284 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-f62d-account-create-24psn" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.268807 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa69cf56-800b-45ac-8f74-18f393900d61-operator-scripts\") pod \"neutron-58b8-account-create-95g6x\" (UID: \"aa69cf56-800b-45ac-8f74-18f393900d61\") " pod="openstack/neutron-58b8-account-create-95g6x" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.268939 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djwvz\" (UniqueName: \"kubernetes.io/projected/aa69cf56-800b-45ac-8f74-18f393900d61-kube-api-access-djwvz\") pod \"neutron-58b8-account-create-95g6x\" (UID: \"aa69cf56-800b-45ac-8f74-18f393900d61\") " pod="openstack/neutron-58b8-account-create-95g6x" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.269491 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa69cf56-800b-45ac-8f74-18f393900d61-operator-scripts\") pod \"neutron-58b8-account-create-95g6x\" (UID: \"aa69cf56-800b-45ac-8f74-18f393900d61\") " pod="openstack/neutron-58b8-account-create-95g6x" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.287038 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djwvz\" (UniqueName: \"kubernetes.io/projected/aa69cf56-800b-45ac-8f74-18f393900d61-kube-api-access-djwvz\") pod \"neutron-58b8-account-create-95g6x\" (UID: \"aa69cf56-800b-45ac-8f74-18f393900d61\") " pod="openstack/neutron-58b8-account-create-95g6x" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.330344 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mwtqz" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.340937 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-cdl2v" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.505616 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58b8-account-create-95g6x" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.549210 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-szw8b"] Nov 25 07:32:56 crc kubenswrapper[5043]: W1125 07:32:56.554768 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeda179e9_563d_426d_b4a9_1aca3f47acfe.slice/crio-68e038744795d175ed7fdb6e4639991ce764b07e52f0387955005580f52bd473 WatchSource:0}: Error finding container 68e038744795d175ed7fdb6e4639991ce764b07e52f0387955005580f52bd473: Status 404 returned error can't find the container with id 68e038744795d175ed7fdb6e4639991ce764b07e52f0387955005580f52bd473 Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.678500 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9a38-account-create-4ldc5"] Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.687876 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4xbtr"] Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.794895 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f62d-account-create-24psn"] Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.799861 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-szw8b" event={"ID":"eda179e9-563d-426d-b4a9-1aca3f47acfe","Type":"ContainerStarted","Data":"c31d71c70f6037f883d0758f93b18ef8949b3c68648ef1a295aad9aea02f2467"} Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.799907 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-szw8b" 
event={"ID":"eda179e9-563d-426d-b4a9-1aca3f47acfe","Type":"ContainerStarted","Data":"68e038744795d175ed7fdb6e4639991ce764b07e52f0387955005580f52bd473"} Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.801453 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9a38-account-create-4ldc5" event={"ID":"8b63c108-4aa7-49c8-a12c-51554851c41e","Type":"ContainerStarted","Data":"c523c13cf72597795bb00412c561cf918e78f49afbf5045c1e43987a7594ae48"} Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.809441 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4xbtr" event={"ID":"a2859a00-00bb-4358-a60e-083415c768e1","Type":"ContainerStarted","Data":"75b1f81fd5ca9f319572f87b06b234e2b475787dc1750dfae6e92b28acdd6086"} Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.834141 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-szw8b" podStartSLOduration=1.8341234229999999 podStartE2EDuration="1.834123423s" podCreationTimestamp="2025-11-25 07:32:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:32:56.822658355 +0000 UTC m=+1040.990854086" watchObservedRunningTime="2025-11-25 07:32:56.834123423 +0000 UTC m=+1041.002319144" Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.869166 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mwtqz"] Nov 25 07:32:56 crc kubenswrapper[5043]: W1125 07:32:56.873186 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27c9d3ac_a1e3_4354_b3b2_31bc32818a60.slice/crio-f0b9f767d74b1a325288a789ce493431ead059e36eed374c864a4f9f0cd0b4ab WatchSource:0}: Error finding container f0b9f767d74b1a325288a789ce493431ead059e36eed374c864a4f9f0cd0b4ab: Status 404 returned error can't find the container with id 
f0b9f767d74b1a325288a789ce493431ead059e36eed374c864a4f9f0cd0b4ab Nov 25 07:32:56 crc kubenswrapper[5043]: I1125 07:32:56.958267 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-cdl2v"] Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.040326 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58b8-account-create-95g6x"] Nov 25 07:32:57 crc kubenswrapper[5043]: W1125 07:32:57.120815 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa69cf56_800b_45ac_8f74_18f393900d61.slice/crio-1bf2c663022f5c779a891f2772251d2b5a47ca945966cf0b101651bf04f8332d WatchSource:0}: Error finding container 1bf2c663022f5c779a891f2772251d2b5a47ca945966cf0b101651bf04f8332d: Status 404 returned error can't find the container with id 1bf2c663022f5c779a891f2772251d2b5a47ca945966cf0b101651bf04f8332d Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.412795 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-cn67d" Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.493026 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a15aa1c-a8ff-46d3-9893-a3ea429171b8-db-sync-config-data\") pod \"9a15aa1c-a8ff-46d3-9893-a3ea429171b8\" (UID: \"9a15aa1c-a8ff-46d3-9893-a3ea429171b8\") " Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.493347 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd6zq\" (UniqueName: \"kubernetes.io/projected/9a15aa1c-a8ff-46d3-9893-a3ea429171b8-kube-api-access-bd6zq\") pod \"9a15aa1c-a8ff-46d3-9893-a3ea429171b8\" (UID: \"9a15aa1c-a8ff-46d3-9893-a3ea429171b8\") " Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.493401 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a15aa1c-a8ff-46d3-9893-a3ea429171b8-config-data\") pod \"9a15aa1c-a8ff-46d3-9893-a3ea429171b8\" (UID: \"9a15aa1c-a8ff-46d3-9893-a3ea429171b8\") " Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.493433 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a15aa1c-a8ff-46d3-9893-a3ea429171b8-combined-ca-bundle\") pod \"9a15aa1c-a8ff-46d3-9893-a3ea429171b8\" (UID: \"9a15aa1c-a8ff-46d3-9893-a3ea429171b8\") " Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.498727 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a15aa1c-a8ff-46d3-9893-a3ea429171b8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9a15aa1c-a8ff-46d3-9893-a3ea429171b8" (UID: "9a15aa1c-a8ff-46d3-9893-a3ea429171b8"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.513088 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a15aa1c-a8ff-46d3-9893-a3ea429171b8-kube-api-access-bd6zq" (OuterVolumeSpecName: "kube-api-access-bd6zq") pod "9a15aa1c-a8ff-46d3-9893-a3ea429171b8" (UID: "9a15aa1c-a8ff-46d3-9893-a3ea429171b8"). InnerVolumeSpecName "kube-api-access-bd6zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.524772 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a15aa1c-a8ff-46d3-9893-a3ea429171b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a15aa1c-a8ff-46d3-9893-a3ea429171b8" (UID: "9a15aa1c-a8ff-46d3-9893-a3ea429171b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.550122 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a15aa1c-a8ff-46d3-9893-a3ea429171b8-config-data" (OuterVolumeSpecName: "config-data") pod "9a15aa1c-a8ff-46d3-9893-a3ea429171b8" (UID: "9a15aa1c-a8ff-46d3-9893-a3ea429171b8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.595661 5043 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a15aa1c-a8ff-46d3-9893-a3ea429171b8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.595715 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bd6zq\" (UniqueName: \"kubernetes.io/projected/9a15aa1c-a8ff-46d3-9893-a3ea429171b8-kube-api-access-bd6zq\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.595732 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a15aa1c-a8ff-46d3-9893-a3ea429171b8-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.595744 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a15aa1c-a8ff-46d3-9893-a3ea429171b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.817668 5043 generic.go:334] "Generic (PLEG): container finished" podID="8b63c108-4aa7-49c8-a12c-51554851c41e" containerID="5a8600776776123457e72ccfed961a2b07316700ad3ff95c800322249dc7ef78" exitCode=0 Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.817726 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9a38-account-create-4ldc5" event={"ID":"8b63c108-4aa7-49c8-a12c-51554851c41e","Type":"ContainerDied","Data":"5a8600776776123457e72ccfed961a2b07316700ad3ff95c800322249dc7ef78"} Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.819906 5043 generic.go:334] "Generic (PLEG): container finished" podID="27c9d3ac-a1e3-4354-b3b2-31bc32818a60" containerID="0c4d45c85703f31b14930296b4c5a163f5c9e3feffd780f0535dcbc54a113c2d" exitCode=0 Nov 25 07:32:57 crc 
kubenswrapper[5043]: I1125 07:32:57.819948 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mwtqz" event={"ID":"27c9d3ac-a1e3-4354-b3b2-31bc32818a60","Type":"ContainerDied","Data":"0c4d45c85703f31b14930296b4c5a163f5c9e3feffd780f0535dcbc54a113c2d"}
Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.819964 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mwtqz" event={"ID":"27c9d3ac-a1e3-4354-b3b2-31bc32818a60","Type":"ContainerStarted","Data":"f0b9f767d74b1a325288a789ce493431ead059e36eed374c864a4f9f0cd0b4ab"}
Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.821397 5043 generic.go:334] "Generic (PLEG): container finished" podID="a2859a00-00bb-4358-a60e-083415c768e1" containerID="4cb15547b8709d3294837362c16e6b4594c441118d6ecaf76b064edda9e78b45" exitCode=0
Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.821433 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4xbtr" event={"ID":"a2859a00-00bb-4358-a60e-083415c768e1","Type":"ContainerDied","Data":"4cb15547b8709d3294837362c16e6b4594c441118d6ecaf76b064edda9e78b45"}
Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.822543 5043 generic.go:334] "Generic (PLEG): container finished" podID="aa69cf56-800b-45ac-8f74-18f393900d61" containerID="22ebb668532db10b70a217bedb9a1ee4f24cd9cc62b4cc04a2fa73e511198be5" exitCode=0
Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.822576 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58b8-account-create-95g6x" event={"ID":"aa69cf56-800b-45ac-8f74-18f393900d61","Type":"ContainerDied","Data":"22ebb668532db10b70a217bedb9a1ee4f24cd9cc62b4cc04a2fa73e511198be5"}
Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.822590 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58b8-account-create-95g6x" event={"ID":"aa69cf56-800b-45ac-8f74-18f393900d61","Type":"ContainerStarted","Data":"1bf2c663022f5c779a891f2772251d2b5a47ca945966cf0b101651bf04f8332d"}
Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.823985 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cn67d" event={"ID":"9a15aa1c-a8ff-46d3-9893-a3ea429171b8","Type":"ContainerDied","Data":"c9e0be70bb9a449588a008f6842b7213930470640b82845920519ef013be22fe"}
Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.824010 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9e0be70bb9a449588a008f6842b7213930470640b82845920519ef013be22fe"
Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.824056 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cn67d"
Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.832131 5043 generic.go:334] "Generic (PLEG): container finished" podID="4d943615-5ac6-450f-aeec-baa4d0833e9b" containerID="aa79346d8afebd8bca6a1dcf82cccc5641b4ce2f03f8c33ffa3a7c1d45f1e55b" exitCode=0
Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.832201 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f62d-account-create-24psn" event={"ID":"4d943615-5ac6-450f-aeec-baa4d0833e9b","Type":"ContainerDied","Data":"aa79346d8afebd8bca6a1dcf82cccc5641b4ce2f03f8c33ffa3a7c1d45f1e55b"}
Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.832235 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f62d-account-create-24psn" event={"ID":"4d943615-5ac6-450f-aeec-baa4d0833e9b","Type":"ContainerStarted","Data":"42934f19cd8f4ee04ab518e062bec5c1fe9926ab0d5f127d66eef930de334d5f"}
Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.834652 5043 generic.go:334] "Generic (PLEG): container finished" podID="eda179e9-563d-426d-b4a9-1aca3f47acfe" containerID="c31d71c70f6037f883d0758f93b18ef8949b3c68648ef1a295aad9aea02f2467" exitCode=0
Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.834701 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-szw8b" event={"ID":"eda179e9-563d-426d-b4a9-1aca3f47acfe","Type":"ContainerDied","Data":"c31d71c70f6037f883d0758f93b18ef8949b3c68648ef1a295aad9aea02f2467"}
Nov 25 07:32:57 crc kubenswrapper[5043]: I1125 07:32:57.835875 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-cdl2v" event={"ID":"1eb045c8-8071-479e-ae53-287767fb69b9","Type":"ContainerStarted","Data":"1a1ac29a0df4fa4d7c34eb92501414b120ccc95987ec2c4a618f100a1a645a54"}
Nov 25 07:32:58 crc kubenswrapper[5043]: I1125 07:32:58.255776 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75b58765b5-wwz57"]
Nov 25 07:32:58 crc kubenswrapper[5043]: E1125 07:32:58.256155 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a15aa1c-a8ff-46d3-9893-a3ea429171b8" containerName="glance-db-sync"
Nov 25 07:32:58 crc kubenswrapper[5043]: I1125 07:32:58.256175 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a15aa1c-a8ff-46d3-9893-a3ea429171b8" containerName="glance-db-sync"
Nov 25 07:32:58 crc kubenswrapper[5043]: I1125 07:32:58.256327 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a15aa1c-a8ff-46d3-9893-a3ea429171b8" containerName="glance-db-sync"
Nov 25 07:32:58 crc kubenswrapper[5043]: I1125 07:32:58.257205 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b58765b5-wwz57"
Nov 25 07:32:58 crc kubenswrapper[5043]: I1125 07:32:58.282315 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75b58765b5-wwz57"]
Nov 25 07:32:58 crc kubenswrapper[5043]: I1125 07:32:58.317379 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d4c3178-0a9a-44b3-b956-7d3024661593-ovsdbserver-sb\") pod \"dnsmasq-dns-75b58765b5-wwz57\" (UID: \"1d4c3178-0a9a-44b3-b956-7d3024661593\") " pod="openstack/dnsmasq-dns-75b58765b5-wwz57"
Nov 25 07:32:58 crc kubenswrapper[5043]: I1125 07:32:58.317734 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d4c3178-0a9a-44b3-b956-7d3024661593-ovsdbserver-nb\") pod \"dnsmasq-dns-75b58765b5-wwz57\" (UID: \"1d4c3178-0a9a-44b3-b956-7d3024661593\") " pod="openstack/dnsmasq-dns-75b58765b5-wwz57"
Nov 25 07:32:58 crc kubenswrapper[5043]: I1125 07:32:58.317760 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d4c3178-0a9a-44b3-b956-7d3024661593-config\") pod \"dnsmasq-dns-75b58765b5-wwz57\" (UID: \"1d4c3178-0a9a-44b3-b956-7d3024661593\") " pod="openstack/dnsmasq-dns-75b58765b5-wwz57"
Nov 25 07:32:58 crc kubenswrapper[5043]: I1125 07:32:58.317790 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d4c3178-0a9a-44b3-b956-7d3024661593-dns-svc\") pod \"dnsmasq-dns-75b58765b5-wwz57\" (UID: \"1d4c3178-0a9a-44b3-b956-7d3024661593\") " pod="openstack/dnsmasq-dns-75b58765b5-wwz57"
Nov 25 07:32:58 crc kubenswrapper[5043]: I1125 07:32:58.317817 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6grr\" (UniqueName: \"kubernetes.io/projected/1d4c3178-0a9a-44b3-b956-7d3024661593-kube-api-access-s6grr\") pod \"dnsmasq-dns-75b58765b5-wwz57\" (UID: \"1d4c3178-0a9a-44b3-b956-7d3024661593\") " pod="openstack/dnsmasq-dns-75b58765b5-wwz57"
Nov 25 07:32:58 crc kubenswrapper[5043]: I1125 07:32:58.419272 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d4c3178-0a9a-44b3-b956-7d3024661593-ovsdbserver-sb\") pod \"dnsmasq-dns-75b58765b5-wwz57\" (UID: \"1d4c3178-0a9a-44b3-b956-7d3024661593\") " pod="openstack/dnsmasq-dns-75b58765b5-wwz57"
Nov 25 07:32:58 crc kubenswrapper[5043]: I1125 07:32:58.419358 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d4c3178-0a9a-44b3-b956-7d3024661593-ovsdbserver-nb\") pod \"dnsmasq-dns-75b58765b5-wwz57\" (UID: \"1d4c3178-0a9a-44b3-b956-7d3024661593\") " pod="openstack/dnsmasq-dns-75b58765b5-wwz57"
Nov 25 07:32:58 crc kubenswrapper[5043]: I1125 07:32:58.419385 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d4c3178-0a9a-44b3-b956-7d3024661593-config\") pod \"dnsmasq-dns-75b58765b5-wwz57\" (UID: \"1d4c3178-0a9a-44b3-b956-7d3024661593\") " pod="openstack/dnsmasq-dns-75b58765b5-wwz57"
Nov 25 07:32:58 crc kubenswrapper[5043]: I1125 07:32:58.419410 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d4c3178-0a9a-44b3-b956-7d3024661593-dns-svc\") pod \"dnsmasq-dns-75b58765b5-wwz57\" (UID: \"1d4c3178-0a9a-44b3-b956-7d3024661593\") " pod="openstack/dnsmasq-dns-75b58765b5-wwz57"
Nov 25 07:32:58 crc kubenswrapper[5043]: I1125 07:32:58.419439 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6grr\" (UniqueName: \"kubernetes.io/projected/1d4c3178-0a9a-44b3-b956-7d3024661593-kube-api-access-s6grr\") pod \"dnsmasq-dns-75b58765b5-wwz57\" (UID: \"1d4c3178-0a9a-44b3-b956-7d3024661593\") " pod="openstack/dnsmasq-dns-75b58765b5-wwz57"
Nov 25 07:32:58 crc kubenswrapper[5043]: I1125 07:32:58.420279 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d4c3178-0a9a-44b3-b956-7d3024661593-ovsdbserver-sb\") pod \"dnsmasq-dns-75b58765b5-wwz57\" (UID: \"1d4c3178-0a9a-44b3-b956-7d3024661593\") " pod="openstack/dnsmasq-dns-75b58765b5-wwz57"
Nov 25 07:32:58 crc kubenswrapper[5043]: I1125 07:32:58.421165 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d4c3178-0a9a-44b3-b956-7d3024661593-ovsdbserver-nb\") pod \"dnsmasq-dns-75b58765b5-wwz57\" (UID: \"1d4c3178-0a9a-44b3-b956-7d3024661593\") " pod="openstack/dnsmasq-dns-75b58765b5-wwz57"
Nov 25 07:32:58 crc kubenswrapper[5043]: I1125 07:32:58.421264 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d4c3178-0a9a-44b3-b956-7d3024661593-dns-svc\") pod \"dnsmasq-dns-75b58765b5-wwz57\" (UID: \"1d4c3178-0a9a-44b3-b956-7d3024661593\") " pod="openstack/dnsmasq-dns-75b58765b5-wwz57"
Nov 25 07:32:58 crc kubenswrapper[5043]: I1125 07:32:58.421816 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d4c3178-0a9a-44b3-b956-7d3024661593-config\") pod \"dnsmasq-dns-75b58765b5-wwz57\" (UID: \"1d4c3178-0a9a-44b3-b956-7d3024661593\") " pod="openstack/dnsmasq-dns-75b58765b5-wwz57"
Nov 25 07:32:58 crc kubenswrapper[5043]: I1125 07:32:58.434447 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6grr\" (UniqueName: \"kubernetes.io/projected/1d4c3178-0a9a-44b3-b956-7d3024661593-kube-api-access-s6grr\") pod \"dnsmasq-dns-75b58765b5-wwz57\" (UID: \"1d4c3178-0a9a-44b3-b956-7d3024661593\") " pod="openstack/dnsmasq-dns-75b58765b5-wwz57"
Nov 25 07:32:58 crc kubenswrapper[5043]: I1125 07:32:58.577713 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b58765b5-wwz57"
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.005986 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75b58765b5-wwz57"]
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.208811 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9a38-account-create-4ldc5"
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.334916 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b63c108-4aa7-49c8-a12c-51554851c41e-operator-scripts\") pod \"8b63c108-4aa7-49c8-a12c-51554851c41e\" (UID: \"8b63c108-4aa7-49c8-a12c-51554851c41e\") "
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.335348 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8nbs\" (UniqueName: \"kubernetes.io/projected/8b63c108-4aa7-49c8-a12c-51554851c41e-kube-api-access-s8nbs\") pod \"8b63c108-4aa7-49c8-a12c-51554851c41e\" (UID: \"8b63c108-4aa7-49c8-a12c-51554851c41e\") "
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.335827 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b63c108-4aa7-49c8-a12c-51554851c41e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b63c108-4aa7-49c8-a12c-51554851c41e" (UID: "8b63c108-4aa7-49c8-a12c-51554851c41e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.341809 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b63c108-4aa7-49c8-a12c-51554851c41e-kube-api-access-s8nbs" (OuterVolumeSpecName: "kube-api-access-s8nbs") pod "8b63c108-4aa7-49c8-a12c-51554851c41e" (UID: "8b63c108-4aa7-49c8-a12c-51554851c41e"). InnerVolumeSpecName "kube-api-access-s8nbs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.353018 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58b8-account-create-95g6x"
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.402721 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f62d-account-create-24psn"
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.411627 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4xbtr"
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.415016 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-szw8b"
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.425996 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mwtqz"
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.436671 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djwvz\" (UniqueName: \"kubernetes.io/projected/aa69cf56-800b-45ac-8f74-18f393900d61-kube-api-access-djwvz\") pod \"aa69cf56-800b-45ac-8f74-18f393900d61\" (UID: \"aa69cf56-800b-45ac-8f74-18f393900d61\") "
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.436907 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa69cf56-800b-45ac-8f74-18f393900d61-operator-scripts\") pod \"aa69cf56-800b-45ac-8f74-18f393900d61\" (UID: \"aa69cf56-800b-45ac-8f74-18f393900d61\") "
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.437264 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8nbs\" (UniqueName: \"kubernetes.io/projected/8b63c108-4aa7-49c8-a12c-51554851c41e-kube-api-access-s8nbs\") on node \"crc\" DevicePath \"\""
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.437281 5043 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b63c108-4aa7-49c8-a12c-51554851c41e-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.437942 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa69cf56-800b-45ac-8f74-18f393900d61-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa69cf56-800b-45ac-8f74-18f393900d61" (UID: "aa69cf56-800b-45ac-8f74-18f393900d61"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.441323 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa69cf56-800b-45ac-8f74-18f393900d61-kube-api-access-djwvz" (OuterVolumeSpecName: "kube-api-access-djwvz") pod "aa69cf56-800b-45ac-8f74-18f393900d61" (UID: "aa69cf56-800b-45ac-8f74-18f393900d61"). InnerVolumeSpecName "kube-api-access-djwvz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.537906 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d943615-5ac6-450f-aeec-baa4d0833e9b-operator-scripts\") pod \"4d943615-5ac6-450f-aeec-baa4d0833e9b\" (UID: \"4d943615-5ac6-450f-aeec-baa4d0833e9b\") "
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.537973 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gzg5\" (UniqueName: \"kubernetes.io/projected/27c9d3ac-a1e3-4354-b3b2-31bc32818a60-kube-api-access-9gzg5\") pod \"27c9d3ac-a1e3-4354-b3b2-31bc32818a60\" (UID: \"27c9d3ac-a1e3-4354-b3b2-31bc32818a60\") "
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.538023 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c9d3ac-a1e3-4354-b3b2-31bc32818a60-operator-scripts\") pod \"27c9d3ac-a1e3-4354-b3b2-31bc32818a60\" (UID: \"27c9d3ac-a1e3-4354-b3b2-31bc32818a60\") "
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.538066 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-298f7\" (UniqueName: \"kubernetes.io/projected/4d943615-5ac6-450f-aeec-baa4d0833e9b-kube-api-access-298f7\") pod \"4d943615-5ac6-450f-aeec-baa4d0833e9b\" (UID: \"4d943615-5ac6-450f-aeec-baa4d0833e9b\") "
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.538170 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eda179e9-563d-426d-b4a9-1aca3f47acfe-operator-scripts\") pod \"eda179e9-563d-426d-b4a9-1aca3f47acfe\" (UID: \"eda179e9-563d-426d-b4a9-1aca3f47acfe\") "
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.538240 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzwmp\" (UniqueName: \"kubernetes.io/projected/a2859a00-00bb-4358-a60e-083415c768e1-kube-api-access-wzwmp\") pod \"a2859a00-00bb-4358-a60e-083415c768e1\" (UID: \"a2859a00-00bb-4358-a60e-083415c768e1\") "
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.538270 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2859a00-00bb-4358-a60e-083415c768e1-operator-scripts\") pod \"a2859a00-00bb-4358-a60e-083415c768e1\" (UID: \"a2859a00-00bb-4358-a60e-083415c768e1\") "
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.538587 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d943615-5ac6-450f-aeec-baa4d0833e9b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d943615-5ac6-450f-aeec-baa4d0833e9b" (UID: "4d943615-5ac6-450f-aeec-baa4d0833e9b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.538671 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eda179e9-563d-426d-b4a9-1aca3f47acfe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eda179e9-563d-426d-b4a9-1aca3f47acfe" (UID: "eda179e9-563d-426d-b4a9-1aca3f47acfe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.538681 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2859a00-00bb-4358-a60e-083415c768e1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a2859a00-00bb-4358-a60e-083415c768e1" (UID: "a2859a00-00bb-4358-a60e-083415c768e1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.538960 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkxsm\" (UniqueName: \"kubernetes.io/projected/eda179e9-563d-426d-b4a9-1aca3f47acfe-kube-api-access-dkxsm\") pod \"eda179e9-563d-426d-b4a9-1aca3f47acfe\" (UID: \"eda179e9-563d-426d-b4a9-1aca3f47acfe\") "
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.539005 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27c9d3ac-a1e3-4354-b3b2-31bc32818a60-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "27c9d3ac-a1e3-4354-b3b2-31bc32818a60" (UID: "27c9d3ac-a1e3-4354-b3b2-31bc32818a60"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.539323 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djwvz\" (UniqueName: \"kubernetes.io/projected/aa69cf56-800b-45ac-8f74-18f393900d61-kube-api-access-djwvz\") on node \"crc\" DevicePath \"\""
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.539347 5043 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d943615-5ac6-450f-aeec-baa4d0833e9b-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.539358 5043 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c9d3ac-a1e3-4354-b3b2-31bc32818a60-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.539369 5043 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eda179e9-563d-426d-b4a9-1aca3f47acfe-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.539379 5043 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa69cf56-800b-45ac-8f74-18f393900d61-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.539391 5043 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2859a00-00bb-4358-a60e-083415c768e1-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.542289 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda179e9-563d-426d-b4a9-1aca3f47acfe-kube-api-access-dkxsm" (OuterVolumeSpecName: "kube-api-access-dkxsm") pod "eda179e9-563d-426d-b4a9-1aca3f47acfe" (UID: "eda179e9-563d-426d-b4a9-1aca3f47acfe"). InnerVolumeSpecName "kube-api-access-dkxsm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.543306 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d943615-5ac6-450f-aeec-baa4d0833e9b-kube-api-access-298f7" (OuterVolumeSpecName: "kube-api-access-298f7") pod "4d943615-5ac6-450f-aeec-baa4d0833e9b" (UID: "4d943615-5ac6-450f-aeec-baa4d0833e9b"). InnerVolumeSpecName "kube-api-access-298f7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.543860 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2859a00-00bb-4358-a60e-083415c768e1-kube-api-access-wzwmp" (OuterVolumeSpecName: "kube-api-access-wzwmp") pod "a2859a00-00bb-4358-a60e-083415c768e1" (UID: "a2859a00-00bb-4358-a60e-083415c768e1"). InnerVolumeSpecName "kube-api-access-wzwmp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.544238 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27c9d3ac-a1e3-4354-b3b2-31bc32818a60-kube-api-access-9gzg5" (OuterVolumeSpecName: "kube-api-access-9gzg5") pod "27c9d3ac-a1e3-4354-b3b2-31bc32818a60" (UID: "27c9d3ac-a1e3-4354-b3b2-31bc32818a60"). InnerVolumeSpecName "kube-api-access-9gzg5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.640567 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzwmp\" (UniqueName: \"kubernetes.io/projected/a2859a00-00bb-4358-a60e-083415c768e1-kube-api-access-wzwmp\") on node \"crc\" DevicePath \"\""
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.640643 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkxsm\" (UniqueName: \"kubernetes.io/projected/eda179e9-563d-426d-b4a9-1aca3f47acfe-kube-api-access-dkxsm\") on node \"crc\" DevicePath \"\""
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.640657 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gzg5\" (UniqueName: \"kubernetes.io/projected/27c9d3ac-a1e3-4354-b3b2-31bc32818a60-kube-api-access-9gzg5\") on node \"crc\" DevicePath \"\""
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.640672 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-298f7\" (UniqueName: \"kubernetes.io/projected/4d943615-5ac6-450f-aeec-baa4d0833e9b-kube-api-access-298f7\") on node \"crc\" DevicePath \"\""
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.862180 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4xbtr"
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.862210 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4xbtr" event={"ID":"a2859a00-00bb-4358-a60e-083415c768e1","Type":"ContainerDied","Data":"75b1f81fd5ca9f319572f87b06b234e2b475787dc1750dfae6e92b28acdd6086"}
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.862336 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75b1f81fd5ca9f319572f87b06b234e2b475787dc1750dfae6e92b28acdd6086"
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.864809 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58b8-account-create-95g6x"
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.864789 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58b8-account-create-95g6x" event={"ID":"aa69cf56-800b-45ac-8f74-18f393900d61","Type":"ContainerDied","Data":"1bf2c663022f5c779a891f2772251d2b5a47ca945966cf0b101651bf04f8332d"}
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.864965 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bf2c663022f5c779a891f2772251d2b5a47ca945966cf0b101651bf04f8332d"
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.866291 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f62d-account-create-24psn"
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.866291 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f62d-account-create-24psn" event={"ID":"4d943615-5ac6-450f-aeec-baa4d0833e9b","Type":"ContainerDied","Data":"42934f19cd8f4ee04ab518e062bec5c1fe9926ab0d5f127d66eef930de334d5f"}
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.866444 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42934f19cd8f4ee04ab518e062bec5c1fe9926ab0d5f127d66eef930de334d5f"
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.868243 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-szw8b" event={"ID":"eda179e9-563d-426d-b4a9-1aca3f47acfe","Type":"ContainerDied","Data":"68e038744795d175ed7fdb6e4639991ce764b07e52f0387955005580f52bd473"}
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.868268 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68e038744795d175ed7fdb6e4639991ce764b07e52f0387955005580f52bd473"
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.868337 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-szw8b"
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.870454 5043 generic.go:334] "Generic (PLEG): container finished" podID="1d4c3178-0a9a-44b3-b956-7d3024661593" containerID="0dd421468fea7a8b8bf89e958972a7c3c728b97bb1544cd27ead479202cfc859" exitCode=0
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.870559 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b58765b5-wwz57" event={"ID":"1d4c3178-0a9a-44b3-b956-7d3024661593","Type":"ContainerDied","Data":"0dd421468fea7a8b8bf89e958972a7c3c728b97bb1544cd27ead479202cfc859"}
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.870703 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b58765b5-wwz57" event={"ID":"1d4c3178-0a9a-44b3-b956-7d3024661593","Type":"ContainerStarted","Data":"036facbab21f88be321066c0f60121ebc6207106141f2688714397e00f8ca1ff"}
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.873420 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9a38-account-create-4ldc5"
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.873429 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9a38-account-create-4ldc5" event={"ID":"8b63c108-4aa7-49c8-a12c-51554851c41e","Type":"ContainerDied","Data":"c523c13cf72597795bb00412c561cf918e78f49afbf5045c1e43987a7594ae48"}
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.873472 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c523c13cf72597795bb00412c561cf918e78f49afbf5045c1e43987a7594ae48"
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.879105 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mwtqz" event={"ID":"27c9d3ac-a1e3-4354-b3b2-31bc32818a60","Type":"ContainerDied","Data":"f0b9f767d74b1a325288a789ce493431ead059e36eed374c864a4f9f0cd0b4ab"}
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.879148 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0b9f767d74b1a325288a789ce493431ead059e36eed374c864a4f9f0cd0b4ab"
Nov 25 07:32:59 crc kubenswrapper[5043]: I1125 07:32:59.879178 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mwtqz"
Nov 25 07:33:03 crc kubenswrapper[5043]: I1125 07:33:03.915994 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-cdl2v" event={"ID":"1eb045c8-8071-479e-ae53-287767fb69b9","Type":"ContainerStarted","Data":"675e623c76b689689c591b0c7fba3f6b32642b63e1bb62af9ec60c7ad4289a9c"}
Nov 25 07:33:03 crc kubenswrapper[5043]: I1125 07:33:03.924578 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b58765b5-wwz57" event={"ID":"1d4c3178-0a9a-44b3-b956-7d3024661593","Type":"ContainerStarted","Data":"c275bdb92ddc262f7c7fa026fd9654fe49a792a9bdc1e8fd1d4cf678dbd59511"}
Nov 25 07:33:03 crc kubenswrapper[5043]: I1125 07:33:03.925131 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75b58765b5-wwz57"
Nov 25 07:33:03 crc kubenswrapper[5043]: I1125 07:33:03.956100 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-cdl2v" podStartSLOduration=3.186917867 podStartE2EDuration="8.95607354s" podCreationTimestamp="2025-11-25 07:32:55 +0000 UTC" firstStartedPulling="2025-11-25 07:32:57.037906634 +0000 UTC m=+1041.206102355" lastFinishedPulling="2025-11-25 07:33:02.807062307 +0000 UTC m=+1046.975258028" observedRunningTime="2025-11-25 07:33:03.946768941 +0000 UTC m=+1048.114964672" watchObservedRunningTime="2025-11-25 07:33:03.95607354 +0000 UTC m=+1048.124269261"
Nov 25 07:33:08 crc kubenswrapper[5043]: I1125 07:33:08.580012 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75b58765b5-wwz57"
Nov 25 07:33:08 crc kubenswrapper[5043]: I1125 07:33:08.610649 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75b58765b5-wwz57" podStartSLOduration=10.61058193 podStartE2EDuration="10.61058193s" podCreationTimestamp="2025-11-25 07:32:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:33:03.977275668 +0000 UTC m=+1048.145471469" watchObservedRunningTime="2025-11-25 07:33:08.61058193 +0000 UTC m=+1052.778777691"
Nov 25 07:33:08 crc kubenswrapper[5043]: I1125 07:33:08.665328 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6b5695-xndwd"]
Nov 25 07:33:08 crc kubenswrapper[5043]: I1125 07:33:08.674775 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" podUID="42d37f88-26c8-40e9-9995-c91c110d3d91" containerName="dnsmasq-dns" containerID="cri-o://4b367fbc326ee5be9f2284932fc40a508303681398b3bdf53f630d560834f35a" gracePeriod=10
Nov 25 07:33:09 crc kubenswrapper[5043]: I1125 07:33:09.818930 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd"
Nov 25 07:33:09 crc kubenswrapper[5043]: I1125 07:33:09.869770 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d37f88-26c8-40e9-9995-c91c110d3d91-config\") pod \"42d37f88-26c8-40e9-9995-c91c110d3d91\" (UID: \"42d37f88-26c8-40e9-9995-c91c110d3d91\") "
Nov 25 07:33:09 crc kubenswrapper[5043]: I1125 07:33:09.869846 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42d37f88-26c8-40e9-9995-c91c110d3d91-dns-svc\") pod \"42d37f88-26c8-40e9-9995-c91c110d3d91\" (UID: \"42d37f88-26c8-40e9-9995-c91c110d3d91\") "
Nov 25 07:33:09 crc kubenswrapper[5043]: I1125 07:33:09.869888 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42d37f88-26c8-40e9-9995-c91c110d3d91-ovsdbserver-sb\") pod \"42d37f88-26c8-40e9-9995-c91c110d3d91\" (UID: \"42d37f88-26c8-40e9-9995-c91c110d3d91\") "
Nov 25 07:33:09 crc kubenswrapper[5043]: I1125 07:33:09.869942 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42d37f88-26c8-40e9-9995-c91c110d3d91-ovsdbserver-nb\") pod \"42d37f88-26c8-40e9-9995-c91c110d3d91\" (UID: \"42d37f88-26c8-40e9-9995-c91c110d3d91\") "
Nov 25 07:33:09 crc kubenswrapper[5043]: I1125 07:33:09.869999 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwvd\" (UniqueName: \"kubernetes.io/projected/42d37f88-26c8-40e9-9995-c91c110d3d91-kube-api-access-kfwvd\") pod \"42d37f88-26c8-40e9-9995-c91c110d3d91\" (UID: \"42d37f88-26c8-40e9-9995-c91c110d3d91\") "
Nov 25 07:33:09 crc kubenswrapper[5043]: I1125 07:33:09.878830 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42d37f88-26c8-40e9-9995-c91c110d3d91-kube-api-access-kfwvd" (OuterVolumeSpecName: "kube-api-access-kfwvd") pod "42d37f88-26c8-40e9-9995-c91c110d3d91" (UID: "42d37f88-26c8-40e9-9995-c91c110d3d91"). InnerVolumeSpecName "kube-api-access-kfwvd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:33:09 crc kubenswrapper[5043]: I1125 07:33:09.911654 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42d37f88-26c8-40e9-9995-c91c110d3d91-config" (OuterVolumeSpecName: "config") pod "42d37f88-26c8-40e9-9995-c91c110d3d91" (UID: "42d37f88-26c8-40e9-9995-c91c110d3d91"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:33:09 crc kubenswrapper[5043]: I1125 07:33:09.912758 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42d37f88-26c8-40e9-9995-c91c110d3d91-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "42d37f88-26c8-40e9-9995-c91c110d3d91" (UID: "42d37f88-26c8-40e9-9995-c91c110d3d91"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:33:09 crc kubenswrapper[5043]: I1125 07:33:09.928422 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42d37f88-26c8-40e9-9995-c91c110d3d91-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "42d37f88-26c8-40e9-9995-c91c110d3d91" (UID: "42d37f88-26c8-40e9-9995-c91c110d3d91"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:33:09 crc kubenswrapper[5043]: I1125 07:33:09.931225 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42d37f88-26c8-40e9-9995-c91c110d3d91-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "42d37f88-26c8-40e9-9995-c91c110d3d91" (UID: "42d37f88-26c8-40e9-9995-c91c110d3d91"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:33:09 crc kubenswrapper[5043]: I1125 07:33:09.971760 5043 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42d37f88-26c8-40e9-9995-c91c110d3d91-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 25 07:33:09 crc kubenswrapper[5043]: I1125 07:33:09.971793 5043 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42d37f88-26c8-40e9-9995-c91c110d3d91-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 25 07:33:09 crc kubenswrapper[5043]: I1125 07:33:09.971807 5043 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42d37f88-26c8-40e9-9995-c91c110d3d91-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 25 07:33:09 crc kubenswrapper[5043]: I1125 07:33:09.971820 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwvd\" (UniqueName: \"kubernetes.io/projected/42d37f88-26c8-40e9-9995-c91c110d3d91-kube-api-access-kfwvd\") on node \"crc\" DevicePath \"\""
Nov 25 07:33:09 crc kubenswrapper[5043]: I1125 07:33:09.971833 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d37f88-26c8-40e9-9995-c91c110d3d91-config\") on node \"crc\" DevicePath \"\""
Nov 25 07:33:09 crc kubenswrapper[5043]: I1125 07:33:09.972181 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd"
Nov 25 07:33:09 crc kubenswrapper[5043]: I1125 07:33:09.972191 5043 generic.go:334] "Generic (PLEG): container finished" podID="42d37f88-26c8-40e9-9995-c91c110d3d91" containerID="4b367fbc326ee5be9f2284932fc40a508303681398b3bdf53f630d560834f35a" exitCode=0
Nov 25 07:33:09 crc kubenswrapper[5043]: I1125 07:33:09.972224 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" event={"ID":"42d37f88-26c8-40e9-9995-c91c110d3d91","Type":"ContainerDied","Data":"4b367fbc326ee5be9f2284932fc40a508303681398b3bdf53f630d560834f35a"}
Nov 25 07:33:09 crc kubenswrapper[5043]: I1125 07:33:09.972266 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6b5695-xndwd" event={"ID":"42d37f88-26c8-40e9-9995-c91c110d3d91","Type":"ContainerDied","Data":"25503533ea393419353669c6e79df926654fbd81e319aea4f5663ecd96684d93"}
Nov 25 07:33:09 crc kubenswrapper[5043]: I1125 07:33:09.972293 5043 scope.go:117] "RemoveContainer" containerID="4b367fbc326ee5be9f2284932fc40a508303681398b3bdf53f630d560834f35a"
Nov 25 07:33:10 crc kubenswrapper[5043]: I1125 07:33:10.020179 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6b5695-xndwd"]
Nov 25 07:33:10 crc kubenswrapper[5043]: I1125 07:33:10.026723 5043 scope.go:117] "RemoveContainer" containerID="a11022b25956f7c7f1dfbea86dd72ecd2c6609cf8347387968a701f8cabfa74f"
Nov 25 07:33:10 crc kubenswrapper[5043]: I1125 07:33:10.027848 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api"
pods=["openstack/dnsmasq-dns-5c7b6b5695-xndwd"] Nov 25 07:33:10 crc kubenswrapper[5043]: I1125 07:33:10.057021 5043 scope.go:117] "RemoveContainer" containerID="4b367fbc326ee5be9f2284932fc40a508303681398b3bdf53f630d560834f35a" Nov 25 07:33:10 crc kubenswrapper[5043]: E1125 07:33:10.057547 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b367fbc326ee5be9f2284932fc40a508303681398b3bdf53f630d560834f35a\": container with ID starting with 4b367fbc326ee5be9f2284932fc40a508303681398b3bdf53f630d560834f35a not found: ID does not exist" containerID="4b367fbc326ee5be9f2284932fc40a508303681398b3bdf53f630d560834f35a" Nov 25 07:33:10 crc kubenswrapper[5043]: I1125 07:33:10.057581 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b367fbc326ee5be9f2284932fc40a508303681398b3bdf53f630d560834f35a"} err="failed to get container status \"4b367fbc326ee5be9f2284932fc40a508303681398b3bdf53f630d560834f35a\": rpc error: code = NotFound desc = could not find container \"4b367fbc326ee5be9f2284932fc40a508303681398b3bdf53f630d560834f35a\": container with ID starting with 4b367fbc326ee5be9f2284932fc40a508303681398b3bdf53f630d560834f35a not found: ID does not exist" Nov 25 07:33:10 crc kubenswrapper[5043]: I1125 07:33:10.057703 5043 scope.go:117] "RemoveContainer" containerID="a11022b25956f7c7f1dfbea86dd72ecd2c6609cf8347387968a701f8cabfa74f" Nov 25 07:33:10 crc kubenswrapper[5043]: E1125 07:33:10.057996 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a11022b25956f7c7f1dfbea86dd72ecd2c6609cf8347387968a701f8cabfa74f\": container with ID starting with a11022b25956f7c7f1dfbea86dd72ecd2c6609cf8347387968a701f8cabfa74f not found: ID does not exist" containerID="a11022b25956f7c7f1dfbea86dd72ecd2c6609cf8347387968a701f8cabfa74f" Nov 25 07:33:10 crc kubenswrapper[5043]: I1125 07:33:10.058022 5043 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a11022b25956f7c7f1dfbea86dd72ecd2c6609cf8347387968a701f8cabfa74f"} err="failed to get container status \"a11022b25956f7c7f1dfbea86dd72ecd2c6609cf8347387968a701f8cabfa74f\": rpc error: code = NotFound desc = could not find container \"a11022b25956f7c7f1dfbea86dd72ecd2c6609cf8347387968a701f8cabfa74f\": container with ID starting with a11022b25956f7c7f1dfbea86dd72ecd2c6609cf8347387968a701f8cabfa74f not found: ID does not exist" Nov 25 07:33:10 crc kubenswrapper[5043]: I1125 07:33:10.980195 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42d37f88-26c8-40e9-9995-c91c110d3d91" path="/var/lib/kubelet/pods/42d37f88-26c8-40e9-9995-c91c110d3d91/volumes" Nov 25 07:33:11 crc kubenswrapper[5043]: I1125 07:33:11.998400 5043 generic.go:334] "Generic (PLEG): container finished" podID="1eb045c8-8071-479e-ae53-287767fb69b9" containerID="675e623c76b689689c591b0c7fba3f6b32642b63e1bb62af9ec60c7ad4289a9c" exitCode=0 Nov 25 07:33:11 crc kubenswrapper[5043]: I1125 07:33:11.998581 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-cdl2v" event={"ID":"1eb045c8-8071-479e-ae53-287767fb69b9","Type":"ContainerDied","Data":"675e623c76b689689c591b0c7fba3f6b32642b63e1bb62af9ec60c7ad4289a9c"} Nov 25 07:33:14 crc kubenswrapper[5043]: I1125 07:33:14.901057 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-cdl2v" Nov 25 07:33:15 crc kubenswrapper[5043]: I1125 07:33:15.033210 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-cdl2v" event={"ID":"1eb045c8-8071-479e-ae53-287767fb69b9","Type":"ContainerDied","Data":"1a1ac29a0df4fa4d7c34eb92501414b120ccc95987ec2c4a618f100a1a645a54"} Nov 25 07:33:15 crc kubenswrapper[5043]: I1125 07:33:15.033257 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a1ac29a0df4fa4d7c34eb92501414b120ccc95987ec2c4a618f100a1a645a54" Nov 25 07:33:15 crc kubenswrapper[5043]: I1125 07:33:15.034055 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-cdl2v" Nov 25 07:33:15 crc kubenswrapper[5043]: I1125 07:33:15.057172 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf8rr\" (UniqueName: \"kubernetes.io/projected/1eb045c8-8071-479e-ae53-287767fb69b9-kube-api-access-wf8rr\") pod \"1eb045c8-8071-479e-ae53-287767fb69b9\" (UID: \"1eb045c8-8071-479e-ae53-287767fb69b9\") " Nov 25 07:33:15 crc kubenswrapper[5043]: I1125 07:33:15.057277 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb045c8-8071-479e-ae53-287767fb69b9-config-data\") pod \"1eb045c8-8071-479e-ae53-287767fb69b9\" (UID: \"1eb045c8-8071-479e-ae53-287767fb69b9\") " Nov 25 07:33:15 crc kubenswrapper[5043]: I1125 07:33:15.057360 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb045c8-8071-479e-ae53-287767fb69b9-combined-ca-bundle\") pod \"1eb045c8-8071-479e-ae53-287767fb69b9\" (UID: \"1eb045c8-8071-479e-ae53-287767fb69b9\") " Nov 25 07:33:15 crc kubenswrapper[5043]: I1125 07:33:15.068660 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1eb045c8-8071-479e-ae53-287767fb69b9-kube-api-access-wf8rr" (OuterVolumeSpecName: "kube-api-access-wf8rr") pod "1eb045c8-8071-479e-ae53-287767fb69b9" (UID: "1eb045c8-8071-479e-ae53-287767fb69b9"). InnerVolumeSpecName "kube-api-access-wf8rr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:33:15 crc kubenswrapper[5043]: I1125 07:33:15.082727 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eb045c8-8071-479e-ae53-287767fb69b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1eb045c8-8071-479e-ae53-287767fb69b9" (UID: "1eb045c8-8071-479e-ae53-287767fb69b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:33:15 crc kubenswrapper[5043]: I1125 07:33:15.106911 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eb045c8-8071-479e-ae53-287767fb69b9-config-data" (OuterVolumeSpecName: "config-data") pod "1eb045c8-8071-479e-ae53-287767fb69b9" (UID: "1eb045c8-8071-479e-ae53-287767fb69b9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:33:15 crc kubenswrapper[5043]: I1125 07:33:15.158891 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf8rr\" (UniqueName: \"kubernetes.io/projected/1eb045c8-8071-479e-ae53-287767fb69b9-kube-api-access-wf8rr\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:15 crc kubenswrapper[5043]: I1125 07:33:15.158926 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb045c8-8071-479e-ae53-287767fb69b9-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:15 crc kubenswrapper[5043]: I1125 07:33:15.158937 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb045c8-8071-479e-ae53-287767fb69b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.175513 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d44dbddd5-w5xn4"] Nov 25 07:33:16 crc kubenswrapper[5043]: E1125 07:33:16.177471 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa69cf56-800b-45ac-8f74-18f393900d61" containerName="mariadb-account-create" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.177510 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa69cf56-800b-45ac-8f74-18f393900d61" containerName="mariadb-account-create" Nov 25 07:33:16 crc kubenswrapper[5043]: E1125 07:33:16.177557 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d37f88-26c8-40e9-9995-c91c110d3d91" containerName="dnsmasq-dns" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.177564 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d37f88-26c8-40e9-9995-c91c110d3d91" containerName="dnsmasq-dns" Nov 25 07:33:16 crc kubenswrapper[5043]: E1125 07:33:16.177575 5043 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="42d37f88-26c8-40e9-9995-c91c110d3d91" containerName="init" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.177581 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d37f88-26c8-40e9-9995-c91c110d3d91" containerName="init" Nov 25 07:33:16 crc kubenswrapper[5043]: E1125 07:33:16.177592 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d943615-5ac6-450f-aeec-baa4d0833e9b" containerName="mariadb-account-create" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.177613 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d943615-5ac6-450f-aeec-baa4d0833e9b" containerName="mariadb-account-create" Nov 25 07:33:16 crc kubenswrapper[5043]: E1125 07:33:16.177639 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27c9d3ac-a1e3-4354-b3b2-31bc32818a60" containerName="mariadb-database-create" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.177645 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c9d3ac-a1e3-4354-b3b2-31bc32818a60" containerName="mariadb-database-create" Nov 25 07:33:16 crc kubenswrapper[5043]: E1125 07:33:16.177663 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda179e9-563d-426d-b4a9-1aca3f47acfe" containerName="mariadb-database-create" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.177670 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda179e9-563d-426d-b4a9-1aca3f47acfe" containerName="mariadb-database-create" Nov 25 07:33:16 crc kubenswrapper[5043]: E1125 07:33:16.177697 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2859a00-00bb-4358-a60e-083415c768e1" containerName="mariadb-database-create" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.177703 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2859a00-00bb-4358-a60e-083415c768e1" containerName="mariadb-database-create" Nov 25 07:33:16 crc kubenswrapper[5043]: E1125 07:33:16.177713 5043 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8b63c108-4aa7-49c8-a12c-51554851c41e" containerName="mariadb-account-create" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.177719 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b63c108-4aa7-49c8-a12c-51554851c41e" containerName="mariadb-account-create" Nov 25 07:33:16 crc kubenswrapper[5043]: E1125 07:33:16.177746 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eb045c8-8071-479e-ae53-287767fb69b9" containerName="keystone-db-sync" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.177754 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb045c8-8071-479e-ae53-287767fb69b9" containerName="keystone-db-sync" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.178074 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eb045c8-8071-479e-ae53-287767fb69b9" containerName="keystone-db-sync" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.178098 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="27c9d3ac-a1e3-4354-b3b2-31bc32818a60" containerName="mariadb-database-create" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.178108 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d943615-5ac6-450f-aeec-baa4d0833e9b" containerName="mariadb-account-create" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.178125 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d37f88-26c8-40e9-9995-c91c110d3d91" containerName="dnsmasq-dns" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.178137 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2859a00-00bb-4358-a60e-083415c768e1" containerName="mariadb-database-create" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.178153 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa69cf56-800b-45ac-8f74-18f393900d61" containerName="mariadb-account-create" Nov 25 07:33:16 crc 
kubenswrapper[5043]: I1125 07:33:16.178163 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b63c108-4aa7-49c8-a12c-51554851c41e" containerName="mariadb-account-create" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.178178 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda179e9-563d-426d-b4a9-1aca3f47acfe" containerName="mariadb-database-create" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.182659 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d44dbddd5-w5xn4" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.246706 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d44dbddd5-w5xn4"] Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.281356 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13cb5adf-61bb-46e9-9585-bf6630622591-ovsdbserver-nb\") pod \"dnsmasq-dns-5d44dbddd5-w5xn4\" (UID: \"13cb5adf-61bb-46e9-9585-bf6630622591\") " pod="openstack/dnsmasq-dns-5d44dbddd5-w5xn4" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.281412 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13cb5adf-61bb-46e9-9585-bf6630622591-config\") pod \"dnsmasq-dns-5d44dbddd5-w5xn4\" (UID: \"13cb5adf-61bb-46e9-9585-bf6630622591\") " pod="openstack/dnsmasq-dns-5d44dbddd5-w5xn4" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.281442 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pqq8\" (UniqueName: \"kubernetes.io/projected/13cb5adf-61bb-46e9-9585-bf6630622591-kube-api-access-7pqq8\") pod \"dnsmasq-dns-5d44dbddd5-w5xn4\" (UID: \"13cb5adf-61bb-46e9-9585-bf6630622591\") " pod="openstack/dnsmasq-dns-5d44dbddd5-w5xn4" Nov 25 07:33:16 crc 
kubenswrapper[5043]: I1125 07:33:16.281474 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13cb5adf-61bb-46e9-9585-bf6630622591-ovsdbserver-sb\") pod \"dnsmasq-dns-5d44dbddd5-w5xn4\" (UID: \"13cb5adf-61bb-46e9-9585-bf6630622591\") " pod="openstack/dnsmasq-dns-5d44dbddd5-w5xn4" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.281529 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13cb5adf-61bb-46e9-9585-bf6630622591-dns-svc\") pod \"dnsmasq-dns-5d44dbddd5-w5xn4\" (UID: \"13cb5adf-61bb-46e9-9585-bf6630622591\") " pod="openstack/dnsmasq-dns-5d44dbddd5-w5xn4" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.285276 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hvtdd"] Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.286672 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hvtdd" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.304822 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.305046 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.305178 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.305321 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xt29v" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.305585 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.332069 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hvtdd"] Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.386523 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13cb5adf-61bb-46e9-9585-bf6630622591-dns-svc\") pod \"dnsmasq-dns-5d44dbddd5-w5xn4\" (UID: \"13cb5adf-61bb-46e9-9585-bf6630622591\") " pod="openstack/dnsmasq-dns-5d44dbddd5-w5xn4" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.386597 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13cb5adf-61bb-46e9-9585-bf6630622591-ovsdbserver-nb\") pod \"dnsmasq-dns-5d44dbddd5-w5xn4\" (UID: \"13cb5adf-61bb-46e9-9585-bf6630622591\") " pod="openstack/dnsmasq-dns-5d44dbddd5-w5xn4" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.386651 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/13cb5adf-61bb-46e9-9585-bf6630622591-config\") pod \"dnsmasq-dns-5d44dbddd5-w5xn4\" (UID: \"13cb5adf-61bb-46e9-9585-bf6630622591\") " pod="openstack/dnsmasq-dns-5d44dbddd5-w5xn4" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.386680 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pqq8\" (UniqueName: \"kubernetes.io/projected/13cb5adf-61bb-46e9-9585-bf6630622591-kube-api-access-7pqq8\") pod \"dnsmasq-dns-5d44dbddd5-w5xn4\" (UID: \"13cb5adf-61bb-46e9-9585-bf6630622591\") " pod="openstack/dnsmasq-dns-5d44dbddd5-w5xn4" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.386714 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13cb5adf-61bb-46e9-9585-bf6630622591-ovsdbserver-sb\") pod \"dnsmasq-dns-5d44dbddd5-w5xn4\" (UID: \"13cb5adf-61bb-46e9-9585-bf6630622591\") " pod="openstack/dnsmasq-dns-5d44dbddd5-w5xn4" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.387587 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13cb5adf-61bb-46e9-9585-bf6630622591-ovsdbserver-sb\") pod \"dnsmasq-dns-5d44dbddd5-w5xn4\" (UID: \"13cb5adf-61bb-46e9-9585-bf6630622591\") " pod="openstack/dnsmasq-dns-5d44dbddd5-w5xn4" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.388116 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13cb5adf-61bb-46e9-9585-bf6630622591-dns-svc\") pod \"dnsmasq-dns-5d44dbddd5-w5xn4\" (UID: \"13cb5adf-61bb-46e9-9585-bf6630622591\") " pod="openstack/dnsmasq-dns-5d44dbddd5-w5xn4" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.388736 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13cb5adf-61bb-46e9-9585-bf6630622591-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5d44dbddd5-w5xn4\" (UID: \"13cb5adf-61bb-46e9-9585-bf6630622591\") " pod="openstack/dnsmasq-dns-5d44dbddd5-w5xn4" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.392737 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-jzr78"] Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.393685 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13cb5adf-61bb-46e9-9585-bf6630622591-config\") pod \"dnsmasq-dns-5d44dbddd5-w5xn4\" (UID: \"13cb5adf-61bb-46e9-9585-bf6630622591\") " pod="openstack/dnsmasq-dns-5d44dbddd5-w5xn4" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.394413 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jzr78" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.398500 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7b99879f8c-ftjwm"] Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.404976 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ctkqr" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.405150 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.405468 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.405895 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b99879f8c-ftjwm" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.412971 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-jzr78"] Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.413198 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.413285 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-xvhqf" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.413529 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.413679 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.427478 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b99879f8c-ftjwm"] Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.431510 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pqq8\" (UniqueName: \"kubernetes.io/projected/13cb5adf-61bb-46e9-9585-bf6630622591-kube-api-access-7pqq8\") pod \"dnsmasq-dns-5d44dbddd5-w5xn4\" (UID: \"13cb5adf-61bb-46e9-9585-bf6630622591\") " pod="openstack/dnsmasq-dns-5d44dbddd5-w5xn4" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.482141 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.488526 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-credential-keys\") pod \"keystone-bootstrap-hvtdd\" (UID: \"09506302-add3-4161-8e13-c1c43c3e2b0f\") " pod="openstack/keystone-bootstrap-hvtdd" Nov 25 07:33:16 crc 
kubenswrapper[5043]: I1125 07:33:16.488789 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-scripts\") pod \"keystone-bootstrap-hvtdd\" (UID: \"09506302-add3-4161-8e13-c1c43c3e2b0f\") " pod="openstack/keystone-bootstrap-hvtdd" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.488906 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-combined-ca-bundle\") pod \"keystone-bootstrap-hvtdd\" (UID: \"09506302-add3-4161-8e13-c1c43c3e2b0f\") " pod="openstack/keystone-bootstrap-hvtdd" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.489005 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-fernet-keys\") pod \"keystone-bootstrap-hvtdd\" (UID: \"09506302-add3-4161-8e13-c1c43c3e2b0f\") " pod="openstack/keystone-bootstrap-hvtdd" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.489151 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-config-data\") pod \"keystone-bootstrap-hvtdd\" (UID: \"09506302-add3-4161-8e13-c1c43c3e2b0f\") " pod="openstack/keystone-bootstrap-hvtdd" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.489259 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvblh\" (UniqueName: \"kubernetes.io/projected/09506302-add3-4161-8e13-c1c43c3e2b0f-kube-api-access-fvblh\") pod \"keystone-bootstrap-hvtdd\" (UID: \"09506302-add3-4161-8e13-c1c43c3e2b0f\") " pod="openstack/keystone-bootstrap-hvtdd" Nov 25 07:33:16 crc 
kubenswrapper[5043]: I1125 07:33:16.490921 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.501031 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.501538 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.530392 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d44dbddd5-w5xn4" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.533091 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.576963 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-tnm6z"] Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.588067 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-tnm6z" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.590373 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed3bc362-7637-4288-b028-62e7d813bba0-config-data\") pod \"horizon-7b99879f8c-ftjwm\" (UID: \"ed3bc362-7637-4288-b028-62e7d813bba0\") " pod="openstack/horizon-7b99879f8c-ftjwm" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.590426 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " pod="openstack/ceilometer-0" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.590454 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-config-data\") pod \"ceilometer-0\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " pod="openstack/ceilometer-0" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.590476 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-log-httpd\") pod \"ceilometer-0\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " pod="openstack/ceilometer-0" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.590500 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2de64291-b46f-4ba3-bdec-a3bad5873881-config-data\") pod \"cinder-db-sync-jzr78\" (UID: \"2de64291-b46f-4ba3-bdec-a3bad5873881\") " pod="openstack/cinder-db-sync-jzr78" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 
07:33:16.590528 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2de64291-b46f-4ba3-bdec-a3bad5873881-scripts\") pod \"cinder-db-sync-jzr78\" (UID: \"2de64291-b46f-4ba3-bdec-a3bad5873881\") " pod="openstack/cinder-db-sync-jzr78" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.590552 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de64291-b46f-4ba3-bdec-a3bad5873881-combined-ca-bundle\") pod \"cinder-db-sync-jzr78\" (UID: \"2de64291-b46f-4ba3-bdec-a3bad5873881\") " pod="openstack/cinder-db-sync-jzr78" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.590581 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-config-data\") pod \"keystone-bootstrap-hvtdd\" (UID: \"09506302-add3-4161-8e13-c1c43c3e2b0f\") " pod="openstack/keystone-bootstrap-hvtdd" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.590649 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvblh\" (UniqueName: \"kubernetes.io/projected/09506302-add3-4161-8e13-c1c43c3e2b0f-kube-api-access-fvblh\") pod \"keystone-bootstrap-hvtdd\" (UID: \"09506302-add3-4161-8e13-c1c43c3e2b0f\") " pod="openstack/keystone-bootstrap-hvtdd" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.590674 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-scripts\") pod \"ceilometer-0\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " pod="openstack/ceilometer-0" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.590696 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-js899\" (UniqueName: \"kubernetes.io/projected/ed3bc362-7637-4288-b028-62e7d813bba0-kube-api-access-js899\") pod \"horizon-7b99879f8c-ftjwm\" (UID: \"ed3bc362-7637-4288-b028-62e7d813bba0\") " pod="openstack/horizon-7b99879f8c-ftjwm" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.590721 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed3bc362-7637-4288-b028-62e7d813bba0-horizon-secret-key\") pod \"horizon-7b99879f8c-ftjwm\" (UID: \"ed3bc362-7637-4288-b028-62e7d813bba0\") " pod="openstack/horizon-7b99879f8c-ftjwm" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.590750 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8nhj\" (UniqueName: \"kubernetes.io/projected/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-kube-api-access-v8nhj\") pod \"ceilometer-0\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " pod="openstack/ceilometer-0" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.590775 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " pod="openstack/ceilometer-0" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.590796 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lkch\" (UniqueName: \"kubernetes.io/projected/2de64291-b46f-4ba3-bdec-a3bad5873881-kube-api-access-7lkch\") pod \"cinder-db-sync-jzr78\" (UID: \"2de64291-b46f-4ba3-bdec-a3bad5873881\") " pod="openstack/cinder-db-sync-jzr78" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.590854 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-run-httpd\") pod \"ceilometer-0\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " pod="openstack/ceilometer-0" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.590876 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-credential-keys\") pod \"keystone-bootstrap-hvtdd\" (UID: \"09506302-add3-4161-8e13-c1c43c3e2b0f\") " pod="openstack/keystone-bootstrap-hvtdd" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.590904 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-scripts\") pod \"keystone-bootstrap-hvtdd\" (UID: \"09506302-add3-4161-8e13-c1c43c3e2b0f\") " pod="openstack/keystone-bootstrap-hvtdd" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.590928 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2de64291-b46f-4ba3-bdec-a3bad5873881-db-sync-config-data\") pod \"cinder-db-sync-jzr78\" (UID: \"2de64291-b46f-4ba3-bdec-a3bad5873881\") " pod="openstack/cinder-db-sync-jzr78" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.590964 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-combined-ca-bundle\") pod \"keystone-bootstrap-hvtdd\" (UID: \"09506302-add3-4161-8e13-c1c43c3e2b0f\") " pod="openstack/keystone-bootstrap-hvtdd" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.590989 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/2de64291-b46f-4ba3-bdec-a3bad5873881-etc-machine-id\") pod \"cinder-db-sync-jzr78\" (UID: \"2de64291-b46f-4ba3-bdec-a3bad5873881\") " pod="openstack/cinder-db-sync-jzr78" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.591016 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-fernet-keys\") pod \"keystone-bootstrap-hvtdd\" (UID: \"09506302-add3-4161-8e13-c1c43c3e2b0f\") " pod="openstack/keystone-bootstrap-hvtdd" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.591041 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed3bc362-7637-4288-b028-62e7d813bba0-scripts\") pod \"horizon-7b99879f8c-ftjwm\" (UID: \"ed3bc362-7637-4288-b028-62e7d813bba0\") " pod="openstack/horizon-7b99879f8c-ftjwm" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.591095 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed3bc362-7637-4288-b028-62e7d813bba0-logs\") pod \"horizon-7b99879f8c-ftjwm\" (UID: \"ed3bc362-7637-4288-b028-62e7d813bba0\") " pod="openstack/horizon-7b99879f8c-ftjwm" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.594582 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-scripts\") pod \"keystone-bootstrap-hvtdd\" (UID: \"09506302-add3-4161-8e13-c1c43c3e2b0f\") " pod="openstack/keystone-bootstrap-hvtdd" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.607365 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-combined-ca-bundle\") pod \"keystone-bootstrap-hvtdd\" (UID: 
\"09506302-add3-4161-8e13-c1c43c3e2b0f\") " pod="openstack/keystone-bootstrap-hvtdd" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.609237 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-fernet-keys\") pod \"keystone-bootstrap-hvtdd\" (UID: \"09506302-add3-4161-8e13-c1c43c3e2b0f\") " pod="openstack/keystone-bootstrap-hvtdd" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.610406 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-config-data\") pod \"keystone-bootstrap-hvtdd\" (UID: \"09506302-add3-4161-8e13-c1c43c3e2b0f\") " pod="openstack/keystone-bootstrap-hvtdd" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.618561 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.625982 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.626352 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-5htmq" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.632131 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-credential-keys\") pod \"keystone-bootstrap-hvtdd\" (UID: \"09506302-add3-4161-8e13-c1c43c3e2b0f\") " pod="openstack/keystone-bootstrap-hvtdd" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.661346 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvblh\" (UniqueName: \"kubernetes.io/projected/09506302-add3-4161-8e13-c1c43c3e2b0f-kube-api-access-fvblh\") pod \"keystone-bootstrap-hvtdd\" (UID: 
\"09506302-add3-4161-8e13-c1c43c3e2b0f\") " pod="openstack/keystone-bootstrap-hvtdd" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.675840 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-tnm6z"] Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.692547 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed3bc362-7637-4288-b028-62e7d813bba0-config-data\") pod \"horizon-7b99879f8c-ftjwm\" (UID: \"ed3bc362-7637-4288-b028-62e7d813bba0\") " pod="openstack/horizon-7b99879f8c-ftjwm" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.692627 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " pod="openstack/ceilometer-0" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.692649 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-config-data\") pod \"ceilometer-0\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " pod="openstack/ceilometer-0" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.692669 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-log-httpd\") pod \"ceilometer-0\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " pod="openstack/ceilometer-0" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.692691 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2de64291-b46f-4ba3-bdec-a3bad5873881-config-data\") pod \"cinder-db-sync-jzr78\" (UID: \"2de64291-b46f-4ba3-bdec-a3bad5873881\") " 
pod="openstack/cinder-db-sync-jzr78" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.692718 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2de64291-b46f-4ba3-bdec-a3bad5873881-scripts\") pod \"cinder-db-sync-jzr78\" (UID: \"2de64291-b46f-4ba3-bdec-a3bad5873881\") " pod="openstack/cinder-db-sync-jzr78" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.692739 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de64291-b46f-4ba3-bdec-a3bad5873881-combined-ca-bundle\") pod \"cinder-db-sync-jzr78\" (UID: \"2de64291-b46f-4ba3-bdec-a3bad5873881\") " pod="openstack/cinder-db-sync-jzr78" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.692773 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-scripts\") pod \"ceilometer-0\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " pod="openstack/ceilometer-0" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.692797 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js899\" (UniqueName: \"kubernetes.io/projected/ed3bc362-7637-4288-b028-62e7d813bba0-kube-api-access-js899\") pod \"horizon-7b99879f8c-ftjwm\" (UID: \"ed3bc362-7637-4288-b028-62e7d813bba0\") " pod="openstack/horizon-7b99879f8c-ftjwm" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.692817 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed3bc362-7637-4288-b028-62e7d813bba0-horizon-secret-key\") pod \"horizon-7b99879f8c-ftjwm\" (UID: \"ed3bc362-7637-4288-b028-62e7d813bba0\") " pod="openstack/horizon-7b99879f8c-ftjwm" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.692842 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-v8nhj\" (UniqueName: \"kubernetes.io/projected/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-kube-api-access-v8nhj\") pod \"ceilometer-0\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " pod="openstack/ceilometer-0" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.692864 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " pod="openstack/ceilometer-0" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.692888 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lkch\" (UniqueName: \"kubernetes.io/projected/2de64291-b46f-4ba3-bdec-a3bad5873881-kube-api-access-7lkch\") pod \"cinder-db-sync-jzr78\" (UID: \"2de64291-b46f-4ba3-bdec-a3bad5873881\") " pod="openstack/cinder-db-sync-jzr78" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.692913 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0360da29-fc4a-44ea-9d0e-e446d69037bc-config\") pod \"neutron-db-sync-tnm6z\" (UID: \"0360da29-fc4a-44ea-9d0e-e446d69037bc\") " pod="openstack/neutron-db-sync-tnm6z" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.692968 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv7sx\" (UniqueName: \"kubernetes.io/projected/0360da29-fc4a-44ea-9d0e-e446d69037bc-kube-api-access-dv7sx\") pod \"neutron-db-sync-tnm6z\" (UID: \"0360da29-fc4a-44ea-9d0e-e446d69037bc\") " pod="openstack/neutron-db-sync-tnm6z" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.692999 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-run-httpd\") pod \"ceilometer-0\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " pod="openstack/ceilometer-0" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.693030 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2de64291-b46f-4ba3-bdec-a3bad5873881-db-sync-config-data\") pod \"cinder-db-sync-jzr78\" (UID: \"2de64291-b46f-4ba3-bdec-a3bad5873881\") " pod="openstack/cinder-db-sync-jzr78" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.693061 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0360da29-fc4a-44ea-9d0e-e446d69037bc-combined-ca-bundle\") pod \"neutron-db-sync-tnm6z\" (UID: \"0360da29-fc4a-44ea-9d0e-e446d69037bc\") " pod="openstack/neutron-db-sync-tnm6z" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.693093 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2de64291-b46f-4ba3-bdec-a3bad5873881-etc-machine-id\") pod \"cinder-db-sync-jzr78\" (UID: \"2de64291-b46f-4ba3-bdec-a3bad5873881\") " pod="openstack/cinder-db-sync-jzr78" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.693122 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed3bc362-7637-4288-b028-62e7d813bba0-scripts\") pod \"horizon-7b99879f8c-ftjwm\" (UID: \"ed3bc362-7637-4288-b028-62e7d813bba0\") " pod="openstack/horizon-7b99879f8c-ftjwm" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.693146 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed3bc362-7637-4288-b028-62e7d813bba0-logs\") pod \"horizon-7b99879f8c-ftjwm\" (UID: 
\"ed3bc362-7637-4288-b028-62e7d813bba0\") " pod="openstack/horizon-7b99879f8c-ftjwm" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.693583 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed3bc362-7637-4288-b028-62e7d813bba0-logs\") pod \"horizon-7b99879f8c-ftjwm\" (UID: \"ed3bc362-7637-4288-b028-62e7d813bba0\") " pod="openstack/horizon-7b99879f8c-ftjwm" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.694779 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed3bc362-7637-4288-b028-62e7d813bba0-config-data\") pod \"horizon-7b99879f8c-ftjwm\" (UID: \"ed3bc362-7637-4288-b028-62e7d813bba0\") " pod="openstack/horizon-7b99879f8c-ftjwm" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.699167 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-run-httpd\") pod \"ceilometer-0\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " pod="openstack/ceilometer-0" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.707702 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-log-httpd\") pod \"ceilometer-0\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " pod="openstack/ceilometer-0" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.708272 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de64291-b46f-4ba3-bdec-a3bad5873881-combined-ca-bundle\") pod \"cinder-db-sync-jzr78\" (UID: \"2de64291-b46f-4ba3-bdec-a3bad5873881\") " pod="openstack/cinder-db-sync-jzr78" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.708553 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2de64291-b46f-4ba3-bdec-a3bad5873881-etc-machine-id\") pod \"cinder-db-sync-jzr78\" (UID: \"2de64291-b46f-4ba3-bdec-a3bad5873881\") " pod="openstack/cinder-db-sync-jzr78" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.709294 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed3bc362-7637-4288-b028-62e7d813bba0-scripts\") pod \"horizon-7b99879f8c-ftjwm\" (UID: \"ed3bc362-7637-4288-b028-62e7d813bba0\") " pod="openstack/horizon-7b99879f8c-ftjwm" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.711063 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " pod="openstack/ceilometer-0" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.711260 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2de64291-b46f-4ba3-bdec-a3bad5873881-config-data\") pod \"cinder-db-sync-jzr78\" (UID: \"2de64291-b46f-4ba3-bdec-a3bad5873881\") " pod="openstack/cinder-db-sync-jzr78" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.716882 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2de64291-b46f-4ba3-bdec-a3bad5873881-db-sync-config-data\") pod \"cinder-db-sync-jzr78\" (UID: \"2de64291-b46f-4ba3-bdec-a3bad5873881\") " pod="openstack/cinder-db-sync-jzr78" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.716978 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed3bc362-7637-4288-b028-62e7d813bba0-horizon-secret-key\") pod \"horizon-7b99879f8c-ftjwm\" (UID: 
\"ed3bc362-7637-4288-b028-62e7d813bba0\") " pod="openstack/horizon-7b99879f8c-ftjwm" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.722096 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2de64291-b46f-4ba3-bdec-a3bad5873881-scripts\") pod \"cinder-db-sync-jzr78\" (UID: \"2de64291-b46f-4ba3-bdec-a3bad5873881\") " pod="openstack/cinder-db-sync-jzr78" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.727526 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-scripts\") pod \"ceilometer-0\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " pod="openstack/ceilometer-0" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.728191 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-config-data\") pod \"ceilometer-0\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " pod="openstack/ceilometer-0" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.729058 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " pod="openstack/ceilometer-0" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.743675 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8nhj\" (UniqueName: \"kubernetes.io/projected/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-kube-api-access-v8nhj\") pod \"ceilometer-0\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " pod="openstack/ceilometer-0" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.778693 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js899\" (UniqueName: 
\"kubernetes.io/projected/ed3bc362-7637-4288-b028-62e7d813bba0-kube-api-access-js899\") pod \"horizon-7b99879f8c-ftjwm\" (UID: \"ed3bc362-7637-4288-b028-62e7d813bba0\") " pod="openstack/horizon-7b99879f8c-ftjwm" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.780411 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lkch\" (UniqueName: \"kubernetes.io/projected/2de64291-b46f-4ba3-bdec-a3bad5873881-kube-api-access-7lkch\") pod \"cinder-db-sync-jzr78\" (UID: \"2de64291-b46f-4ba3-bdec-a3bad5873881\") " pod="openstack/cinder-db-sync-jzr78" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.794521 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jzr78" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.795797 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0360da29-fc4a-44ea-9d0e-e446d69037bc-config\") pod \"neutron-db-sync-tnm6z\" (UID: \"0360da29-fc4a-44ea-9d0e-e446d69037bc\") " pod="openstack/neutron-db-sync-tnm6z" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.796214 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv7sx\" (UniqueName: \"kubernetes.io/projected/0360da29-fc4a-44ea-9d0e-e446d69037bc-kube-api-access-dv7sx\") pod \"neutron-db-sync-tnm6z\" (UID: \"0360da29-fc4a-44ea-9d0e-e446d69037bc\") " pod="openstack/neutron-db-sync-tnm6z" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.796277 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0360da29-fc4a-44ea-9d0e-e446d69037bc-combined-ca-bundle\") pod \"neutron-db-sync-tnm6z\" (UID: \"0360da29-fc4a-44ea-9d0e-e446d69037bc\") " pod="openstack/neutron-db-sync-tnm6z" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.801043 5043 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0360da29-fc4a-44ea-9d0e-e446d69037bc-combined-ca-bundle\") pod \"neutron-db-sync-tnm6z\" (UID: \"0360da29-fc4a-44ea-9d0e-e446d69037bc\") " pod="openstack/neutron-db-sync-tnm6z" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.801783 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0360da29-fc4a-44ea-9d0e-e446d69037bc-config\") pod \"neutron-db-sync-tnm6z\" (UID: \"0360da29-fc4a-44ea-9d0e-e446d69037bc\") " pod="openstack/neutron-db-sync-tnm6z" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.803020 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b99879f8c-ftjwm" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.834367 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.875891 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-2zt29"] Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.877058 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2zt29" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.894828 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2zt29"] Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.900484 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv7sx\" (UniqueName: \"kubernetes.io/projected/0360da29-fc4a-44ea-9d0e-e446d69037bc-kube-api-access-dv7sx\") pod \"neutron-db-sync-tnm6z\" (UID: \"0360da29-fc4a-44ea-9d0e-e446d69037bc\") " pod="openstack/neutron-db-sync-tnm6z" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.901043 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-lvw85" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.901580 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.937957 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hvtdd" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.955823 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-zvlrd"] Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.956852 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-zvlrd" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.969136 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-b5hzn" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.969349 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.969467 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 25 07:33:16 crc kubenswrapper[5043]: I1125 07:33:16.998816 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-56fc7df589-k57nl"] Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.009471 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/66bda068-47b7-46f6-a75e-97dd76293fe9-db-sync-config-data\") pod \"barbican-db-sync-2zt29\" (UID: \"66bda068-47b7-46f6-a75e-97dd76293fe9\") " pod="openstack/barbican-db-sync-2zt29" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.009506 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66bda068-47b7-46f6-a75e-97dd76293fe9-combined-ca-bundle\") pod \"barbican-db-sync-2zt29\" (UID: \"66bda068-47b7-46f6-a75e-97dd76293fe9\") " pod="openstack/barbican-db-sync-2zt29" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.009587 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsrt8\" (UniqueName: \"kubernetes.io/projected/66bda068-47b7-46f6-a75e-97dd76293fe9-kube-api-access-vsrt8\") pod \"barbican-db-sync-2zt29\" (UID: \"66bda068-47b7-46f6-a75e-97dd76293fe9\") " pod="openstack/barbican-db-sync-2zt29" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 
07:33:17.020735 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-zvlrd"] Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.020770 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d44dbddd5-w5xn4"] Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.020789 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56fc7df589-k57nl"] Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.020866 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56fc7df589-k57nl" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.027674 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tnm6z" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.030088 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f8f5cc67-bj7r7"] Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.031907 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.041713 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f8f5cc67-bj7r7"] Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.111058 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8ctt\" (UniqueName: \"kubernetes.io/projected/e9a33b7c-0771-42ee-b50d-abb6120f7fba-kube-api-access-r8ctt\") pod \"placement-db-sync-zvlrd\" (UID: \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\") " pod="openstack/placement-db-sync-zvlrd" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.111109 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9a33b7c-0771-42ee-b50d-abb6120f7fba-config-data\") pod \"placement-db-sync-zvlrd\" (UID: \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\") " pod="openstack/placement-db-sync-zvlrd" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.111133 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/66bda068-47b7-46f6-a75e-97dd76293fe9-db-sync-config-data\") pod \"barbican-db-sync-2zt29\" (UID: \"66bda068-47b7-46f6-a75e-97dd76293fe9\") " pod="openstack/barbican-db-sync-2zt29" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.111154 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66bda068-47b7-46f6-a75e-97dd76293fe9-combined-ca-bundle\") pod \"barbican-db-sync-2zt29\" (UID: \"66bda068-47b7-46f6-a75e-97dd76293fe9\") " pod="openstack/barbican-db-sync-2zt29" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.111179 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e9a33b7c-0771-42ee-b50d-abb6120f7fba-combined-ca-bundle\") pod \"placement-db-sync-zvlrd\" (UID: \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\") " pod="openstack/placement-db-sync-zvlrd" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.111309 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9a33b7c-0771-42ee-b50d-abb6120f7fba-scripts\") pod \"placement-db-sync-zvlrd\" (UID: \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\") " pod="openstack/placement-db-sync-zvlrd" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.111325 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9a33b7c-0771-42ee-b50d-abb6120f7fba-logs\") pod \"placement-db-sync-zvlrd\" (UID: \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\") " pod="openstack/placement-db-sync-zvlrd" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.111357 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsrt8\" (UniqueName: \"kubernetes.io/projected/66bda068-47b7-46f6-a75e-97dd76293fe9-kube-api-access-vsrt8\") pod \"barbican-db-sync-2zt29\" (UID: \"66bda068-47b7-46f6-a75e-97dd76293fe9\") " pod="openstack/barbican-db-sync-2zt29" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.116590 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/66bda068-47b7-46f6-a75e-97dd76293fe9-db-sync-config-data\") pod \"barbican-db-sync-2zt29\" (UID: \"66bda068-47b7-46f6-a75e-97dd76293fe9\") " pod="openstack/barbican-db-sync-2zt29" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.119953 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66bda068-47b7-46f6-a75e-97dd76293fe9-combined-ca-bundle\") pod 
\"barbican-db-sync-2zt29\" (UID: \"66bda068-47b7-46f6-a75e-97dd76293fe9\") " pod="openstack/barbican-db-sync-2zt29" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.140052 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsrt8\" (UniqueName: \"kubernetes.io/projected/66bda068-47b7-46f6-a75e-97dd76293fe9-kube-api-access-vsrt8\") pod \"barbican-db-sync-2zt29\" (UID: \"66bda068-47b7-46f6-a75e-97dd76293fe9\") " pod="openstack/barbican-db-sync-2zt29" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.212958 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9a33b7c-0771-42ee-b50d-abb6120f7fba-scripts\") pod \"placement-db-sync-zvlrd\" (UID: \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\") " pod="openstack/placement-db-sync-zvlrd" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.212995 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9a33b7c-0771-42ee-b50d-abb6120f7fba-logs\") pod \"placement-db-sync-zvlrd\" (UID: \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\") " pod="openstack/placement-db-sync-zvlrd" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.213021 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f3fb8fb-1900-4301-92da-6665f0007d7e-config\") pod \"dnsmasq-dns-7f8f5cc67-bj7r7\" (UID: \"0f3fb8fb-1900-4301-92da-6665f0007d7e\") " pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.213052 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czlcf\" (UniqueName: \"kubernetes.io/projected/0f3fb8fb-1900-4301-92da-6665f0007d7e-kube-api-access-czlcf\") pod \"dnsmasq-dns-7f8f5cc67-bj7r7\" (UID: \"0f3fb8fb-1900-4301-92da-6665f0007d7e\") " 
pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.213074 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f3fb8fb-1900-4301-92da-6665f0007d7e-ovsdbserver-sb\") pod \"dnsmasq-dns-7f8f5cc67-bj7r7\" (UID: \"0f3fb8fb-1900-4301-92da-6665f0007d7e\") " pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.213099 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-config-data\") pod \"horizon-56fc7df589-k57nl\" (UID: \"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\") " pod="openstack/horizon-56fc7df589-k57nl" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.213224 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f3fb8fb-1900-4301-92da-6665f0007d7e-dns-svc\") pod \"dnsmasq-dns-7f8f5cc67-bj7r7\" (UID: \"0f3fb8fb-1900-4301-92da-6665f0007d7e\") " pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.213253 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f3fb8fb-1900-4301-92da-6665f0007d7e-ovsdbserver-nb\") pod \"dnsmasq-dns-7f8f5cc67-bj7r7\" (UID: \"0f3fb8fb-1900-4301-92da-6665f0007d7e\") " pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.213299 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-logs\") pod \"horizon-56fc7df589-k57nl\" (UID: \"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\") " 
pod="openstack/horizon-56fc7df589-k57nl" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.213327 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8ctt\" (UniqueName: \"kubernetes.io/projected/e9a33b7c-0771-42ee-b50d-abb6120f7fba-kube-api-access-r8ctt\") pod \"placement-db-sync-zvlrd\" (UID: \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\") " pod="openstack/placement-db-sync-zvlrd" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.213378 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn2wq\" (UniqueName: \"kubernetes.io/projected/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-kube-api-access-rn2wq\") pod \"horizon-56fc7df589-k57nl\" (UID: \"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\") " pod="openstack/horizon-56fc7df589-k57nl" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.213404 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9a33b7c-0771-42ee-b50d-abb6120f7fba-config-data\") pod \"placement-db-sync-zvlrd\" (UID: \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\") " pod="openstack/placement-db-sync-zvlrd" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.213458 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-horizon-secret-key\") pod \"horizon-56fc7df589-k57nl\" (UID: \"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\") " pod="openstack/horizon-56fc7df589-k57nl" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.213968 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a33b7c-0771-42ee-b50d-abb6120f7fba-combined-ca-bundle\") pod \"placement-db-sync-zvlrd\" (UID: \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\") " 
pod="openstack/placement-db-sync-zvlrd" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.213999 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-scripts\") pod \"horizon-56fc7df589-k57nl\" (UID: \"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\") " pod="openstack/horizon-56fc7df589-k57nl" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.214365 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9a33b7c-0771-42ee-b50d-abb6120f7fba-logs\") pod \"placement-db-sync-zvlrd\" (UID: \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\") " pod="openstack/placement-db-sync-zvlrd" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.216388 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9a33b7c-0771-42ee-b50d-abb6120f7fba-config-data\") pod \"placement-db-sync-zvlrd\" (UID: \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\") " pod="openstack/placement-db-sync-zvlrd" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.216955 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9a33b7c-0771-42ee-b50d-abb6120f7fba-scripts\") pod \"placement-db-sync-zvlrd\" (UID: \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\") " pod="openstack/placement-db-sync-zvlrd" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.217000 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a33b7c-0771-42ee-b50d-abb6120f7fba-combined-ca-bundle\") pod \"placement-db-sync-zvlrd\" (UID: \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\") " pod="openstack/placement-db-sync-zvlrd" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.248077 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2zt29" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.266063 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8ctt\" (UniqueName: \"kubernetes.io/projected/e9a33b7c-0771-42ee-b50d-abb6120f7fba-kube-api-access-r8ctt\") pod \"placement-db-sync-zvlrd\" (UID: \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\") " pod="openstack/placement-db-sync-zvlrd" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.315260 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czlcf\" (UniqueName: \"kubernetes.io/projected/0f3fb8fb-1900-4301-92da-6665f0007d7e-kube-api-access-czlcf\") pod \"dnsmasq-dns-7f8f5cc67-bj7r7\" (UID: \"0f3fb8fb-1900-4301-92da-6665f0007d7e\") " pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.315314 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f3fb8fb-1900-4301-92da-6665f0007d7e-ovsdbserver-sb\") pod \"dnsmasq-dns-7f8f5cc67-bj7r7\" (UID: \"0f3fb8fb-1900-4301-92da-6665f0007d7e\") " pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.315336 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-config-data\") pod \"horizon-56fc7df589-k57nl\" (UID: \"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\") " pod="openstack/horizon-56fc7df589-k57nl" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.315358 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f3fb8fb-1900-4301-92da-6665f0007d7e-dns-svc\") pod \"dnsmasq-dns-7f8f5cc67-bj7r7\" (UID: \"0f3fb8fb-1900-4301-92da-6665f0007d7e\") " pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" Nov 25 
07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.315376 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f3fb8fb-1900-4301-92da-6665f0007d7e-ovsdbserver-nb\") pod \"dnsmasq-dns-7f8f5cc67-bj7r7\" (UID: \"0f3fb8fb-1900-4301-92da-6665f0007d7e\") " pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.315400 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-logs\") pod \"horizon-56fc7df589-k57nl\" (UID: \"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\") " pod="openstack/horizon-56fc7df589-k57nl" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.315434 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn2wq\" (UniqueName: \"kubernetes.io/projected/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-kube-api-access-rn2wq\") pod \"horizon-56fc7df589-k57nl\" (UID: \"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\") " pod="openstack/horizon-56fc7df589-k57nl" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.315473 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-horizon-secret-key\") pod \"horizon-56fc7df589-k57nl\" (UID: \"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\") " pod="openstack/horizon-56fc7df589-k57nl" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.315509 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-scripts\") pod \"horizon-56fc7df589-k57nl\" (UID: \"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\") " pod="openstack/horizon-56fc7df589-k57nl" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.315552 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f3fb8fb-1900-4301-92da-6665f0007d7e-config\") pod \"dnsmasq-dns-7f8f5cc67-bj7r7\" (UID: \"0f3fb8fb-1900-4301-92da-6665f0007d7e\") " pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.316480 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f3fb8fb-1900-4301-92da-6665f0007d7e-config\") pod \"dnsmasq-dns-7f8f5cc67-bj7r7\" (UID: \"0f3fb8fb-1900-4301-92da-6665f0007d7e\") " pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.317449 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f3fb8fb-1900-4301-92da-6665f0007d7e-ovsdbserver-sb\") pod \"dnsmasq-dns-7f8f5cc67-bj7r7\" (UID: \"0f3fb8fb-1900-4301-92da-6665f0007d7e\") " pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.318690 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-logs\") pod \"horizon-56fc7df589-k57nl\" (UID: \"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\") " pod="openstack/horizon-56fc7df589-k57nl" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.318758 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-config-data\") pod \"horizon-56fc7df589-k57nl\" (UID: \"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\") " pod="openstack/horizon-56fc7df589-k57nl" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.319171 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-zvlrd" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.320593 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-scripts\") pod \"horizon-56fc7df589-k57nl\" (UID: \"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\") " pod="openstack/horizon-56fc7df589-k57nl" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.321295 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f3fb8fb-1900-4301-92da-6665f0007d7e-ovsdbserver-nb\") pod \"dnsmasq-dns-7f8f5cc67-bj7r7\" (UID: \"0f3fb8fb-1900-4301-92da-6665f0007d7e\") " pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.322882 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-horizon-secret-key\") pod \"horizon-56fc7df589-k57nl\" (UID: \"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\") " pod="openstack/horizon-56fc7df589-k57nl" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.324031 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f3fb8fb-1900-4301-92da-6665f0007d7e-dns-svc\") pod \"dnsmasq-dns-7f8f5cc67-bj7r7\" (UID: \"0f3fb8fb-1900-4301-92da-6665f0007d7e\") " pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.338018 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czlcf\" (UniqueName: \"kubernetes.io/projected/0f3fb8fb-1900-4301-92da-6665f0007d7e-kube-api-access-czlcf\") pod \"dnsmasq-dns-7f8f5cc67-bj7r7\" (UID: \"0f3fb8fb-1900-4301-92da-6665f0007d7e\") " pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 
07:33:17.341512 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d44dbddd5-w5xn4"] Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.358501 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn2wq\" (UniqueName: \"kubernetes.io/projected/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-kube-api-access-rn2wq\") pod \"horizon-56fc7df589-k57nl\" (UID: \"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\") " pod="openstack/horizon-56fc7df589-k57nl" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.360713 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56fc7df589-k57nl" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.445966 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.568200 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-jzr78"] Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.665297 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hvtdd"] Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.689681 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b99879f8c-ftjwm"] Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.698773 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:33:17 crc kubenswrapper[5043]: W1125 07:33:17.927796 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66bda068_47b7_46f6_a75e_97dd76293fe9.slice/crio-3307f1f8b33af40d5c4cc29b170e690fc448116ccfde35f37a1a408c3b3df5a5 WatchSource:0}: Error finding container 3307f1f8b33af40d5c4cc29b170e690fc448116ccfde35f37a1a408c3b3df5a5: Status 404 returned error can't find the container with id 
3307f1f8b33af40d5c4cc29b170e690fc448116ccfde35f37a1a408c3b3df5a5 Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.927588 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2zt29"] Nov 25 07:33:17 crc kubenswrapper[5043]: I1125 07:33:17.959288 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-tnm6z"] Nov 25 07:33:17 crc kubenswrapper[5043]: W1125 07:33:17.965310 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0360da29_fc4a_44ea_9d0e_e446d69037bc.slice/crio-a1aa78d4b891551a71753a3471a9d9c18e3716d5bd69fbf69f4dfc402fff4761 WatchSource:0}: Error finding container a1aa78d4b891551a71753a3471a9d9c18e3716d5bd69fbf69f4dfc402fff4761: Status 404 returned error can't find the container with id a1aa78d4b891551a71753a3471a9d9c18e3716d5bd69fbf69f4dfc402fff4761 Nov 25 07:33:18 crc kubenswrapper[5043]: I1125 07:33:18.082207 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56fc7df589-k57nl"] Nov 25 07:33:18 crc kubenswrapper[5043]: W1125 07:33:18.084853 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9a33b7c_0771_42ee_b50d_abb6120f7fba.slice/crio-7405c92fe2375d8c40a1c5a0ffc72ff99f21d641e545d699a48084e7eac33774 WatchSource:0}: Error finding container 7405c92fe2375d8c40a1c5a0ffc72ff99f21d641e545d699a48084e7eac33774: Status 404 returned error can't find the container with id 7405c92fe2375d8c40a1c5a0ffc72ff99f21d641e545d699a48084e7eac33774 Nov 25 07:33:18 crc kubenswrapper[5043]: I1125 07:33:18.088279 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-zvlrd"] Nov 25 07:33:18 crc kubenswrapper[5043]: W1125 07:33:18.092048 5043 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe721f71_ddcd_45a4_9625_f8f41e91ac8b.slice/crio-beaa4cb360260782c03b9cac0a48b02d31f013928df99333a9c658790154c5a1 WatchSource:0}: Error finding container beaa4cb360260782c03b9cac0a48b02d31f013928df99333a9c658790154c5a1: Status 404 returned error can't find the container with id beaa4cb360260782c03b9cac0a48b02d31f013928df99333a9c658790154c5a1 Nov 25 07:33:18 crc kubenswrapper[5043]: I1125 07:33:18.113006 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-zvlrd" event={"ID":"e9a33b7c-0771-42ee-b50d-abb6120f7fba","Type":"ContainerStarted","Data":"7405c92fe2375d8c40a1c5a0ffc72ff99f21d641e545d699a48084e7eac33774"} Nov 25 07:33:18 crc kubenswrapper[5043]: I1125 07:33:18.114103 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2zt29" event={"ID":"66bda068-47b7-46f6-a75e-97dd76293fe9","Type":"ContainerStarted","Data":"3307f1f8b33af40d5c4cc29b170e690fc448116ccfde35f37a1a408c3b3df5a5"} Nov 25 07:33:18 crc kubenswrapper[5043]: I1125 07:33:18.115253 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56fc7df589-k57nl" event={"ID":"fe721f71-ddcd-45a4-9625-f8f41e91ac8b","Type":"ContainerStarted","Data":"beaa4cb360260782c03b9cac0a48b02d31f013928df99333a9c658790154c5a1"} Nov 25 07:33:18 crc kubenswrapper[5043]: I1125 07:33:18.116405 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hvtdd" event={"ID":"09506302-add3-4161-8e13-c1c43c3e2b0f","Type":"ContainerStarted","Data":"8a9a4c356f3f9be1342fd7f50c8a82679ecc9c01b7dcda43f524012785bf8697"} Nov 25 07:33:18 crc kubenswrapper[5043]: I1125 07:33:18.119590 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b99879f8c-ftjwm" event={"ID":"ed3bc362-7637-4288-b028-62e7d813bba0","Type":"ContainerStarted","Data":"21bad1fec766a2caa5c99ae50a19295057a802260a5783dd054f56630351523e"} Nov 25 07:33:18 crc 
kubenswrapper[5043]: I1125 07:33:18.120397 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9","Type":"ContainerStarted","Data":"e9cf9522bccdec51db944ac1244e0013d95c86ec39e90b11686af5cd9096ff79"} Nov 25 07:33:18 crc kubenswrapper[5043]: I1125 07:33:18.121093 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jzr78" event={"ID":"2de64291-b46f-4ba3-bdec-a3bad5873881","Type":"ContainerStarted","Data":"e04cac920939a082c45c511754431868f16232d2938bfd58addf0aab4cc417c8"} Nov 25 07:33:18 crc kubenswrapper[5043]: I1125 07:33:18.121846 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d44dbddd5-w5xn4" event={"ID":"13cb5adf-61bb-46e9-9585-bf6630622591","Type":"ContainerStarted","Data":"be07d284b8052c44fc39297c895969d8273932ee18d56c751c04de6ed12a5dd2"} Nov 25 07:33:18 crc kubenswrapper[5043]: I1125 07:33:18.123000 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tnm6z" event={"ID":"0360da29-fc4a-44ea-9d0e-e446d69037bc","Type":"ContainerStarted","Data":"a1aa78d4b891551a71753a3471a9d9c18e3716d5bd69fbf69f4dfc402fff4761"} Nov 25 07:33:18 crc kubenswrapper[5043]: I1125 07:33:18.206721 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f8f5cc67-bj7r7"] Nov 25 07:33:18 crc kubenswrapper[5043]: W1125 07:33:18.213588 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f3fb8fb_1900_4301_92da_6665f0007d7e.slice/crio-c5be10fa51c182f2c08a9ade8330a3735d177dd1e5bb134c10cc6b8226e315a6 WatchSource:0}: Error finding container c5be10fa51c182f2c08a9ade8330a3735d177dd1e5bb134c10cc6b8226e315a6: Status 404 returned error can't find the container with id c5be10fa51c182f2c08a9ade8330a3735d177dd1e5bb134c10cc6b8226e315a6 Nov 25 07:33:18 crc kubenswrapper[5043]: I1125 07:33:18.757056 5043 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/horizon-56fc7df589-k57nl"] Nov 25 07:33:18 crc kubenswrapper[5043]: I1125 07:33:18.790516 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:33:18 crc kubenswrapper[5043]: I1125 07:33:18.806661 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6cf546d8d9-mh4vw"] Nov 25 07:33:18 crc kubenswrapper[5043]: I1125 07:33:18.808080 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cf546d8d9-mh4vw" Nov 25 07:33:18 crc kubenswrapper[5043]: I1125 07:33:18.829414 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cf546d8d9-mh4vw"] Nov 25 07:33:18 crc kubenswrapper[5043]: I1125 07:33:18.945868 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-config-data\") pod \"horizon-6cf546d8d9-mh4vw\" (UID: \"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\") " pod="openstack/horizon-6cf546d8d9-mh4vw" Nov 25 07:33:18 crc kubenswrapper[5043]: I1125 07:33:18.946261 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-horizon-secret-key\") pod \"horizon-6cf546d8d9-mh4vw\" (UID: \"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\") " pod="openstack/horizon-6cf546d8d9-mh4vw" Nov 25 07:33:18 crc kubenswrapper[5043]: I1125 07:33:18.946309 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-scripts\") pod \"horizon-6cf546d8d9-mh4vw\" (UID: \"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\") " pod="openstack/horizon-6cf546d8d9-mh4vw" Nov 25 07:33:18 crc kubenswrapper[5043]: I1125 07:33:18.946335 5043 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-logs\") pod \"horizon-6cf546d8d9-mh4vw\" (UID: \"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\") " pod="openstack/horizon-6cf546d8d9-mh4vw" Nov 25 07:33:18 crc kubenswrapper[5043]: I1125 07:33:18.946355 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp2wz\" (UniqueName: \"kubernetes.io/projected/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-kube-api-access-jp2wz\") pod \"horizon-6cf546d8d9-mh4vw\" (UID: \"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\") " pod="openstack/horizon-6cf546d8d9-mh4vw" Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.048136 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-scripts\") pod \"horizon-6cf546d8d9-mh4vw\" (UID: \"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\") " pod="openstack/horizon-6cf546d8d9-mh4vw" Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.048242 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-logs\") pod \"horizon-6cf546d8d9-mh4vw\" (UID: \"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\") " pod="openstack/horizon-6cf546d8d9-mh4vw" Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.048272 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp2wz\" (UniqueName: \"kubernetes.io/projected/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-kube-api-access-jp2wz\") pod \"horizon-6cf546d8d9-mh4vw\" (UID: \"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\") " pod="openstack/horizon-6cf546d8d9-mh4vw" Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.048331 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-config-data\") pod \"horizon-6cf546d8d9-mh4vw\" (UID: \"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\") " pod="openstack/horizon-6cf546d8d9-mh4vw" Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.048388 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-horizon-secret-key\") pod \"horizon-6cf546d8d9-mh4vw\" (UID: \"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\") " pod="openstack/horizon-6cf546d8d9-mh4vw" Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.048896 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-logs\") pod \"horizon-6cf546d8d9-mh4vw\" (UID: \"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\") " pod="openstack/horizon-6cf546d8d9-mh4vw" Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.049223 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-scripts\") pod \"horizon-6cf546d8d9-mh4vw\" (UID: \"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\") " pod="openstack/horizon-6cf546d8d9-mh4vw" Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.050096 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-config-data\") pod \"horizon-6cf546d8d9-mh4vw\" (UID: \"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\") " pod="openstack/horizon-6cf546d8d9-mh4vw" Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.060741 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-horizon-secret-key\") pod \"horizon-6cf546d8d9-mh4vw\" (UID: 
\"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\") " pod="openstack/horizon-6cf546d8d9-mh4vw" Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.071137 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp2wz\" (UniqueName: \"kubernetes.io/projected/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-kube-api-access-jp2wz\") pod \"horizon-6cf546d8d9-mh4vw\" (UID: \"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\") " pod="openstack/horizon-6cf546d8d9-mh4vw" Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.143020 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cf546d8d9-mh4vw" Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.147888 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hvtdd" event={"ID":"09506302-add3-4161-8e13-c1c43c3e2b0f","Type":"ContainerStarted","Data":"de734fa7785c5dd7ca0067be0ffadfa0d854720554e95047470229d42e275c7f"} Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.152071 5043 generic.go:334] "Generic (PLEG): container finished" podID="13cb5adf-61bb-46e9-9585-bf6630622591" containerID="0f01442f2081831c074b67ae58784cd427aff81b13b29da27398fe9a108e2644" exitCode=0 Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.152168 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d44dbddd5-w5xn4" event={"ID":"13cb5adf-61bb-46e9-9585-bf6630622591","Type":"ContainerDied","Data":"0f01442f2081831c074b67ae58784cd427aff81b13b29da27398fe9a108e2644"} Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.179122 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hvtdd" podStartSLOduration=3.179099737 podStartE2EDuration="3.179099737s" podCreationTimestamp="2025-11-25 07:33:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:33:19.173934559 +0000 UTC 
m=+1063.342130300" watchObservedRunningTime="2025-11-25 07:33:19.179099737 +0000 UTC m=+1063.347295448" Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.187630 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tnm6z" event={"ID":"0360da29-fc4a-44ea-9d0e-e446d69037bc","Type":"ContainerStarted","Data":"49a8893f812fa5273ec4b7368978d5da18eacfb331f5a18927d9d709a7ebc952"} Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.189498 5043 generic.go:334] "Generic (PLEG): container finished" podID="0f3fb8fb-1900-4301-92da-6665f0007d7e" containerID="6054d4e4c2bebd11facdccac435b38e7313dcca13e32a605a69c649d4040e271" exitCode=0 Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.189649 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" event={"ID":"0f3fb8fb-1900-4301-92da-6665f0007d7e","Type":"ContainerDied","Data":"6054d4e4c2bebd11facdccac435b38e7313dcca13e32a605a69c649d4040e271"} Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.189728 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" event={"ID":"0f3fb8fb-1900-4301-92da-6665f0007d7e","Type":"ContainerStarted","Data":"c5be10fa51c182f2c08a9ade8330a3735d177dd1e5bb134c10cc6b8226e315a6"} Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.274541 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-tnm6z" podStartSLOduration=3.274522234 podStartE2EDuration="3.274522234s" podCreationTimestamp="2025-11-25 07:33:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:33:19.235971621 +0000 UTC m=+1063.404167342" watchObservedRunningTime="2025-11-25 07:33:19.274522234 +0000 UTC m=+1063.442717955" Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.630134 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d44dbddd5-w5xn4" Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.753704 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cf546d8d9-mh4vw"] Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.768081 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13cb5adf-61bb-46e9-9585-bf6630622591-dns-svc\") pod \"13cb5adf-61bb-46e9-9585-bf6630622591\" (UID: \"13cb5adf-61bb-46e9-9585-bf6630622591\") " Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.768140 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13cb5adf-61bb-46e9-9585-bf6630622591-ovsdbserver-sb\") pod \"13cb5adf-61bb-46e9-9585-bf6630622591\" (UID: \"13cb5adf-61bb-46e9-9585-bf6630622591\") " Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.768200 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13cb5adf-61bb-46e9-9585-bf6630622591-config\") pod \"13cb5adf-61bb-46e9-9585-bf6630622591\" (UID: \"13cb5adf-61bb-46e9-9585-bf6630622591\") " Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.768223 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13cb5adf-61bb-46e9-9585-bf6630622591-ovsdbserver-nb\") pod \"13cb5adf-61bb-46e9-9585-bf6630622591\" (UID: \"13cb5adf-61bb-46e9-9585-bf6630622591\") " Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.768263 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pqq8\" (UniqueName: \"kubernetes.io/projected/13cb5adf-61bb-46e9-9585-bf6630622591-kube-api-access-7pqq8\") pod \"13cb5adf-61bb-46e9-9585-bf6630622591\" (UID: \"13cb5adf-61bb-46e9-9585-bf6630622591\") " Nov 25 07:33:19 
crc kubenswrapper[5043]: W1125 07:33:19.780004 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bdacce3_f6fa_45c8_be3b_b7473af1a8ed.slice/crio-f1f7b0ee2950b4219aa64f4f3a1775e45c02b78427b1c668c0784d8828ecde2d WatchSource:0}: Error finding container f1f7b0ee2950b4219aa64f4f3a1775e45c02b78427b1c668c0784d8828ecde2d: Status 404 returned error can't find the container with id f1f7b0ee2950b4219aa64f4f3a1775e45c02b78427b1c668c0784d8828ecde2d Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.780260 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13cb5adf-61bb-46e9-9585-bf6630622591-kube-api-access-7pqq8" (OuterVolumeSpecName: "kube-api-access-7pqq8") pod "13cb5adf-61bb-46e9-9585-bf6630622591" (UID: "13cb5adf-61bb-46e9-9585-bf6630622591"). InnerVolumeSpecName "kube-api-access-7pqq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.803461 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13cb5adf-61bb-46e9-9585-bf6630622591-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "13cb5adf-61bb-46e9-9585-bf6630622591" (UID: "13cb5adf-61bb-46e9-9585-bf6630622591"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.810534 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13cb5adf-61bb-46e9-9585-bf6630622591-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "13cb5adf-61bb-46e9-9585-bf6630622591" (UID: "13cb5adf-61bb-46e9-9585-bf6630622591"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.814412 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13cb5adf-61bb-46e9-9585-bf6630622591-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "13cb5adf-61bb-46e9-9585-bf6630622591" (UID: "13cb5adf-61bb-46e9-9585-bf6630622591"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.819377 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13cb5adf-61bb-46e9-9585-bf6630622591-config" (OuterVolumeSpecName: "config") pod "13cb5adf-61bb-46e9-9585-bf6630622591" (UID: "13cb5adf-61bb-46e9-9585-bf6630622591"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.869928 5043 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13cb5adf-61bb-46e9-9585-bf6630622591-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.869957 5043 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13cb5adf-61bb-46e9-9585-bf6630622591-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.869969 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13cb5adf-61bb-46e9-9585-bf6630622591-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:19 crc kubenswrapper[5043]: I1125 07:33:19.869978 5043 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13cb5adf-61bb-46e9-9585-bf6630622591-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:19 crc kubenswrapper[5043]: 
I1125 07:33:19.869986 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pqq8\" (UniqueName: \"kubernetes.io/projected/13cb5adf-61bb-46e9-9585-bf6630622591-kube-api-access-7pqq8\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:20 crc kubenswrapper[5043]: I1125 07:33:20.207060 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cf546d8d9-mh4vw" event={"ID":"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed","Type":"ContainerStarted","Data":"f1f7b0ee2950b4219aa64f4f3a1775e45c02b78427b1c668c0784d8828ecde2d"} Nov 25 07:33:20 crc kubenswrapper[5043]: I1125 07:33:20.209448 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" event={"ID":"0f3fb8fb-1900-4301-92da-6665f0007d7e","Type":"ContainerStarted","Data":"8a8b9353ff6d88c1bf6e611406bf15d298c57793c1f460022c75c6b95af9a995"} Nov 25 07:33:20 crc kubenswrapper[5043]: I1125 07:33:20.209690 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" Nov 25 07:33:20 crc kubenswrapper[5043]: I1125 07:33:20.215889 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d44dbddd5-w5xn4" Nov 25 07:33:20 crc kubenswrapper[5043]: I1125 07:33:20.217442 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d44dbddd5-w5xn4" event={"ID":"13cb5adf-61bb-46e9-9585-bf6630622591","Type":"ContainerDied","Data":"be07d284b8052c44fc39297c895969d8273932ee18d56c751c04de6ed12a5dd2"} Nov 25 07:33:20 crc kubenswrapper[5043]: I1125 07:33:20.217494 5043 scope.go:117] "RemoveContainer" containerID="0f01442f2081831c074b67ae58784cd427aff81b13b29da27398fe9a108e2644" Nov 25 07:33:20 crc kubenswrapper[5043]: I1125 07:33:20.245618 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" podStartSLOduration=4.245582948 podStartE2EDuration="4.245582948s" podCreationTimestamp="2025-11-25 07:33:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:33:20.230823713 +0000 UTC m=+1064.399019434" watchObservedRunningTime="2025-11-25 07:33:20.245582948 +0000 UTC m=+1064.413778689" Nov 25 07:33:20 crc kubenswrapper[5043]: I1125 07:33:20.284303 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d44dbddd5-w5xn4"] Nov 25 07:33:20 crc kubenswrapper[5043]: I1125 07:33:20.293324 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d44dbddd5-w5xn4"] Nov 25 07:33:20 crc kubenswrapper[5043]: I1125 07:33:20.984548 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13cb5adf-61bb-46e9-9585-bf6630622591" path="/var/lib/kubelet/pods/13cb5adf-61bb-46e9-9585-bf6630622591/volumes" Nov 25 07:33:23 crc kubenswrapper[5043]: I1125 07:33:23.248444 5043 generic.go:334] "Generic (PLEG): container finished" podID="09506302-add3-4161-8e13-c1c43c3e2b0f" containerID="de734fa7785c5dd7ca0067be0ffadfa0d854720554e95047470229d42e275c7f" exitCode=0 Nov 25 07:33:23 crc 
kubenswrapper[5043]: I1125 07:33:23.248544 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hvtdd" event={"ID":"09506302-add3-4161-8e13-c1c43c3e2b0f","Type":"ContainerDied","Data":"de734fa7785c5dd7ca0067be0ffadfa0d854720554e95047470229d42e275c7f"} Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.506401 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7b99879f8c-ftjwm"] Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.535205 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-b5bc7cfb-2sfts"] Nov 25 07:33:25 crc kubenswrapper[5043]: E1125 07:33:25.535563 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13cb5adf-61bb-46e9-9585-bf6630622591" containerName="init" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.535574 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="13cb5adf-61bb-46e9-9585-bf6630622591" containerName="init" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.535747 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="13cb5adf-61bb-46e9-9585-bf6630622591" containerName="init" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.536631 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.546098 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.554063 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b5bc7cfb-2sfts"] Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.626332 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6cf546d8d9-mh4vw"] Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.669356 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5f67c4b5d4-f96jj"] Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.675326 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.685591 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5f67c4b5d4-f96jj"] Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.692049 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-combined-ca-bundle\") pod \"horizon-b5bc7cfb-2sfts\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.692109 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmwd9\" (UniqueName: \"kubernetes.io/projected/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-kube-api-access-qmwd9\") pod \"horizon-b5bc7cfb-2sfts\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.692145 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-horizon-tls-certs\") pod \"horizon-b5bc7cfb-2sfts\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.692213 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-config-data\") pod \"horizon-b5bc7cfb-2sfts\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.692289 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-logs\") pod \"horizon-b5bc7cfb-2sfts\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.692357 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-horizon-secret-key\") pod \"horizon-b5bc7cfb-2sfts\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.692428 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-scripts\") pod \"horizon-b5bc7cfb-2sfts\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.793434 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-horizon-secret-key\") pod \"horizon-b5bc7cfb-2sfts\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.793490 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13e8a8ee-bfe8-415b-b76f-89d7d7296659-scripts\") pod \"horizon-5f67c4b5d4-f96jj\" (UID: \"13e8a8ee-bfe8-415b-b76f-89d7d7296659\") " pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.793544 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/13e8a8ee-bfe8-415b-b76f-89d7d7296659-horizon-secret-key\") pod \"horizon-5f67c4b5d4-f96jj\" (UID: \"13e8a8ee-bfe8-415b-b76f-89d7d7296659\") " pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.793570 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-scripts\") pod \"horizon-b5bc7cfb-2sfts\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.793587 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13e8a8ee-bfe8-415b-b76f-89d7d7296659-config-data\") pod \"horizon-5f67c4b5d4-f96jj\" (UID: \"13e8a8ee-bfe8-415b-b76f-89d7d7296659\") " pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.793625 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/13e8a8ee-bfe8-415b-b76f-89d7d7296659-logs\") pod \"horizon-5f67c4b5d4-f96jj\" (UID: \"13e8a8ee-bfe8-415b-b76f-89d7d7296659\") " pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.793644 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-combined-ca-bundle\") pod \"horizon-b5bc7cfb-2sfts\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.793675 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmwd9\" (UniqueName: \"kubernetes.io/projected/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-kube-api-access-qmwd9\") pod \"horizon-b5bc7cfb-2sfts\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.793697 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-horizon-tls-certs\") pod \"horizon-b5bc7cfb-2sfts\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.793722 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-config-data\") pod \"horizon-b5bc7cfb-2sfts\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.793746 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e8a8ee-bfe8-415b-b76f-89d7d7296659-combined-ca-bundle\") 
pod \"horizon-5f67c4b5d4-f96jj\" (UID: \"13e8a8ee-bfe8-415b-b76f-89d7d7296659\") " pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.793778 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqvcc\" (UniqueName: \"kubernetes.io/projected/13e8a8ee-bfe8-415b-b76f-89d7d7296659-kube-api-access-sqvcc\") pod \"horizon-5f67c4b5d4-f96jj\" (UID: \"13e8a8ee-bfe8-415b-b76f-89d7d7296659\") " pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.793797 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/13e8a8ee-bfe8-415b-b76f-89d7d7296659-horizon-tls-certs\") pod \"horizon-5f67c4b5d4-f96jj\" (UID: \"13e8a8ee-bfe8-415b-b76f-89d7d7296659\") " pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.793819 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-logs\") pod \"horizon-b5bc7cfb-2sfts\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.795435 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-config-data\") pod \"horizon-b5bc7cfb-2sfts\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.795463 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-scripts\") pod \"horizon-b5bc7cfb-2sfts\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " 
pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.796323 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-logs\") pod \"horizon-b5bc7cfb-2sfts\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.802321 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-horizon-secret-key\") pod \"horizon-b5bc7cfb-2sfts\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.802628 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-horizon-tls-certs\") pod \"horizon-b5bc7cfb-2sfts\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.803332 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-combined-ca-bundle\") pod \"horizon-b5bc7cfb-2sfts\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.812752 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmwd9\" (UniqueName: \"kubernetes.io/projected/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-kube-api-access-qmwd9\") pod \"horizon-b5bc7cfb-2sfts\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.884227 5043 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.895818 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e8a8ee-bfe8-415b-b76f-89d7d7296659-combined-ca-bundle\") pod \"horizon-5f67c4b5d4-f96jj\" (UID: \"13e8a8ee-bfe8-415b-b76f-89d7d7296659\") " pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.895885 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqvcc\" (UniqueName: \"kubernetes.io/projected/13e8a8ee-bfe8-415b-b76f-89d7d7296659-kube-api-access-sqvcc\") pod \"horizon-5f67c4b5d4-f96jj\" (UID: \"13e8a8ee-bfe8-415b-b76f-89d7d7296659\") " pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.895907 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/13e8a8ee-bfe8-415b-b76f-89d7d7296659-horizon-tls-certs\") pod \"horizon-5f67c4b5d4-f96jj\" (UID: \"13e8a8ee-bfe8-415b-b76f-89d7d7296659\") " pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.895973 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13e8a8ee-bfe8-415b-b76f-89d7d7296659-scripts\") pod \"horizon-5f67c4b5d4-f96jj\" (UID: \"13e8a8ee-bfe8-415b-b76f-89d7d7296659\") " pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.896007 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/13e8a8ee-bfe8-415b-b76f-89d7d7296659-horizon-secret-key\") pod \"horizon-5f67c4b5d4-f96jj\" (UID: \"13e8a8ee-bfe8-415b-b76f-89d7d7296659\") " pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 
07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.896029 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13e8a8ee-bfe8-415b-b76f-89d7d7296659-config-data\") pod \"horizon-5f67c4b5d4-f96jj\" (UID: \"13e8a8ee-bfe8-415b-b76f-89d7d7296659\") " pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.896057 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13e8a8ee-bfe8-415b-b76f-89d7d7296659-logs\") pod \"horizon-5f67c4b5d4-f96jj\" (UID: \"13e8a8ee-bfe8-415b-b76f-89d7d7296659\") " pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.896519 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13e8a8ee-bfe8-415b-b76f-89d7d7296659-logs\") pod \"horizon-5f67c4b5d4-f96jj\" (UID: \"13e8a8ee-bfe8-415b-b76f-89d7d7296659\") " pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.898890 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13e8a8ee-bfe8-415b-b76f-89d7d7296659-scripts\") pod \"horizon-5f67c4b5d4-f96jj\" (UID: \"13e8a8ee-bfe8-415b-b76f-89d7d7296659\") " pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.899000 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13e8a8ee-bfe8-415b-b76f-89d7d7296659-config-data\") pod \"horizon-5f67c4b5d4-f96jj\" (UID: \"13e8a8ee-bfe8-415b-b76f-89d7d7296659\") " pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.903924 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/13e8a8ee-bfe8-415b-b76f-89d7d7296659-combined-ca-bundle\") pod \"horizon-5f67c4b5d4-f96jj\" (UID: \"13e8a8ee-bfe8-415b-b76f-89d7d7296659\") " pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.904207 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/13e8a8ee-bfe8-415b-b76f-89d7d7296659-horizon-secret-key\") pod \"horizon-5f67c4b5d4-f96jj\" (UID: \"13e8a8ee-bfe8-415b-b76f-89d7d7296659\") " pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.916156 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/13e8a8ee-bfe8-415b-b76f-89d7d7296659-horizon-tls-certs\") pod \"horizon-5f67c4b5d4-f96jj\" (UID: \"13e8a8ee-bfe8-415b-b76f-89d7d7296659\") " pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:33:25 crc kubenswrapper[5043]: I1125 07:33:25.923200 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqvcc\" (UniqueName: \"kubernetes.io/projected/13e8a8ee-bfe8-415b-b76f-89d7d7296659-kube-api-access-sqvcc\") pod \"horizon-5f67c4b5d4-f96jj\" (UID: \"13e8a8ee-bfe8-415b-b76f-89d7d7296659\") " pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:33:26 crc kubenswrapper[5043]: I1125 07:33:26.002310 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:33:27 crc kubenswrapper[5043]: I1125 07:33:27.453111 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" Nov 25 07:33:27 crc kubenswrapper[5043]: I1125 07:33:27.516401 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75b58765b5-wwz57"] Nov 25 07:33:27 crc kubenswrapper[5043]: I1125 07:33:27.516678 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75b58765b5-wwz57" podUID="1d4c3178-0a9a-44b3-b956-7d3024661593" containerName="dnsmasq-dns" containerID="cri-o://c275bdb92ddc262f7c7fa026fd9654fe49a792a9bdc1e8fd1d4cf678dbd59511" gracePeriod=10 Nov 25 07:33:28 crc kubenswrapper[5043]: I1125 07:33:28.296119 5043 generic.go:334] "Generic (PLEG): container finished" podID="1d4c3178-0a9a-44b3-b956-7d3024661593" containerID="c275bdb92ddc262f7c7fa026fd9654fe49a792a9bdc1e8fd1d4cf678dbd59511" exitCode=0 Nov 25 07:33:28 crc kubenswrapper[5043]: I1125 07:33:28.296217 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b58765b5-wwz57" event={"ID":"1d4c3178-0a9a-44b3-b956-7d3024661593","Type":"ContainerDied","Data":"c275bdb92ddc262f7c7fa026fd9654fe49a792a9bdc1e8fd1d4cf678dbd59511"} Nov 25 07:33:28 crc kubenswrapper[5043]: I1125 07:33:28.578679 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75b58765b5-wwz57" podUID="1d4c3178-0a9a-44b3-b956-7d3024661593" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: connect: connection refused" Nov 25 07:33:32 crc kubenswrapper[5043]: E1125 07:33:32.382827 5043 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon@sha256:65b94ff9fcd486845fb0544583bf2a973246a61a0ad32340fb92d632285f1057" Nov 25 
07:33:32 crc kubenswrapper[5043]: E1125 07:33:32.383791 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon@sha256:65b94ff9fcd486845fb0544583bf2a973246a61a0ad32340fb92d632285f1057,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n76hb7h64ch6ch586hfdh589h559hc4h9hd7h554h5dch85h7hfch648h679h699h55h5dbh5bdh85h566h5ffh657h8ch659h66dh5d7h9ch5cdq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rn2wq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:n
il,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-56fc7df589-k57nl_openstack(fe721f71-ddcd-45a4-9625-f8f41e91ac8b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 07:33:32 crc kubenswrapper[5043]: E1125 07:33:32.390777 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon@sha256:65b94ff9fcd486845fb0544583bf2a973246a61a0ad32340fb92d632285f1057\\\"\"]" pod="openstack/horizon-56fc7df589-k57nl" podUID="fe721f71-ddcd-45a4-9625-f8f41e91ac8b" Nov 25 07:33:32 crc kubenswrapper[5043]: E1125 07:33:32.397718 5043 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon@sha256:65b94ff9fcd486845fb0544583bf2a973246a61a0ad32340fb92d632285f1057" Nov 25 07:33:32 crc kubenswrapper[5043]: E1125 07:33:32.397903 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon@sha256:65b94ff9fcd486845fb0544583bf2a973246a61a0ad32340fb92d632285f1057,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n54chddh76h65h64bhc9h569h558h67dhf8h5f9hf9h58dh588hfh66dh67ch5fh5f7h565h577h5hfbh666h659h578h58fh649h55h7fh8ch97q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-js899,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7b99879f8c-ftjwm_openstack(ed3bc362-7637-4288-b028-62e7d813bba0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 07:33:32 crc kubenswrapper[5043]: E1125 
07:33:32.403322 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon@sha256:65b94ff9fcd486845fb0544583bf2a973246a61a0ad32340fb92d632285f1057\\\"\"]" pod="openstack/horizon-7b99879f8c-ftjwm" podUID="ed3bc362-7637-4288-b028-62e7d813bba0" Nov 25 07:33:33 crc kubenswrapper[5043]: E1125 07:33:33.820849 5043 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api@sha256:7dd2e0dbb6bb5a6cecd1763e43479ca8cb6a0c502534e83c8795c0da2b50e099" Nov 25 07:33:33 crc kubenswrapper[5043]: E1125 07:33:33.821343 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:7dd2e0dbb6bb5a6cecd1763e43479ca8cb6a0c502534e83c8795c0da2b50e099,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8ctt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-zvlrd_openstack(e9a33b7c-0771-42ee-b50d-abb6120f7fba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 07:33:33 crc kubenswrapper[5043]: E1125 07:33:33.822624 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-zvlrd" podUID="e9a33b7c-0771-42ee-b50d-abb6120f7fba" Nov 25 07:33:33 crc kubenswrapper[5043]: E1125 07:33:33.846541 5043 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon@sha256:65b94ff9fcd486845fb0544583bf2a973246a61a0ad32340fb92d632285f1057" Nov 25 07:33:33 crc kubenswrapper[5043]: E1125 07:33:33.846749 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon@sha256:65b94ff9fcd486845fb0544583bf2a973246a61a0ad32340fb92d632285f1057,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h565h588h5b7h68fhc9h5cfh5chbfhfh74h5dfh659h68fh8fh655h5d4h68ch5bch68dh65bh5fbh76h4h666h65ch79h586h97h79h5ch67bq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jp2wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6cf546d8d9-mh4vw_openstack(1bdacce3-f6fa-45c8-be3b-b7473af1a8ed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 07:33:33 crc kubenswrapper[5043]: E1125 
07:33:33.849029 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon@sha256:65b94ff9fcd486845fb0544583bf2a973246a61a0ad32340fb92d632285f1057\\\"\"]" pod="openstack/horizon-6cf546d8d9-mh4vw" podUID="1bdacce3-f6fa-45c8-be3b-b7473af1a8ed" Nov 25 07:33:33 crc kubenswrapper[5043]: I1125 07:33:33.877947 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hvtdd" Nov 25 07:33:33 crc kubenswrapper[5043]: I1125 07:33:33.969757 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-credential-keys\") pod \"09506302-add3-4161-8e13-c1c43c3e2b0f\" (UID: \"09506302-add3-4161-8e13-c1c43c3e2b0f\") " Nov 25 07:33:33 crc kubenswrapper[5043]: I1125 07:33:33.969832 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-fernet-keys\") pod \"09506302-add3-4161-8e13-c1c43c3e2b0f\" (UID: \"09506302-add3-4161-8e13-c1c43c3e2b0f\") " Nov 25 07:33:33 crc kubenswrapper[5043]: I1125 07:33:33.969894 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-scripts\") pod \"09506302-add3-4161-8e13-c1c43c3e2b0f\" (UID: \"09506302-add3-4161-8e13-c1c43c3e2b0f\") " Nov 25 07:33:33 crc kubenswrapper[5043]: I1125 07:33:33.969934 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-combined-ca-bundle\") pod \"09506302-add3-4161-8e13-c1c43c3e2b0f\" (UID: \"09506302-add3-4161-8e13-c1c43c3e2b0f\") " Nov 25 07:33:33 crc kubenswrapper[5043]: I1125 07:33:33.969991 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvblh\" (UniqueName: \"kubernetes.io/projected/09506302-add3-4161-8e13-c1c43c3e2b0f-kube-api-access-fvblh\") pod \"09506302-add3-4161-8e13-c1c43c3e2b0f\" (UID: \"09506302-add3-4161-8e13-c1c43c3e2b0f\") " Nov 25 07:33:33 crc kubenswrapper[5043]: I1125 07:33:33.970096 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-config-data\") pod \"09506302-add3-4161-8e13-c1c43c3e2b0f\" (UID: \"09506302-add3-4161-8e13-c1c43c3e2b0f\") " Nov 25 07:33:33 crc kubenswrapper[5043]: I1125 07:33:33.976043 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "09506302-add3-4161-8e13-c1c43c3e2b0f" (UID: "09506302-add3-4161-8e13-c1c43c3e2b0f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:33:33 crc kubenswrapper[5043]: I1125 07:33:33.976079 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09506302-add3-4161-8e13-c1c43c3e2b0f-kube-api-access-fvblh" (OuterVolumeSpecName: "kube-api-access-fvblh") pod "09506302-add3-4161-8e13-c1c43c3e2b0f" (UID: "09506302-add3-4161-8e13-c1c43c3e2b0f"). InnerVolumeSpecName "kube-api-access-fvblh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:33:33 crc kubenswrapper[5043]: I1125 07:33:33.976320 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-scripts" (OuterVolumeSpecName: "scripts") pod "09506302-add3-4161-8e13-c1c43c3e2b0f" (UID: "09506302-add3-4161-8e13-c1c43c3e2b0f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:33:33 crc kubenswrapper[5043]: I1125 07:33:33.976694 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "09506302-add3-4161-8e13-c1c43c3e2b0f" (UID: "09506302-add3-4161-8e13-c1c43c3e2b0f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:33:33 crc kubenswrapper[5043]: I1125 07:33:33.994475 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-config-data" (OuterVolumeSpecName: "config-data") pod "09506302-add3-4161-8e13-c1c43c3e2b0f" (UID: "09506302-add3-4161-8e13-c1c43c3e2b0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:33:33 crc kubenswrapper[5043]: I1125 07:33:33.997369 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09506302-add3-4161-8e13-c1c43c3e2b0f" (UID: "09506302-add3-4161-8e13-c1c43c3e2b0f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:33:34 crc kubenswrapper[5043]: I1125 07:33:34.072138 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:34 crc kubenswrapper[5043]: I1125 07:33:34.072169 5043 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:34 crc kubenswrapper[5043]: I1125 07:33:34.072179 5043 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:34 crc kubenswrapper[5043]: I1125 07:33:34.072189 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:34 crc kubenswrapper[5043]: I1125 07:33:34.072199 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09506302-add3-4161-8e13-c1c43c3e2b0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:34 crc kubenswrapper[5043]: I1125 07:33:34.072208 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvblh\" (UniqueName: \"kubernetes.io/projected/09506302-add3-4161-8e13-c1c43c3e2b0f-kube-api-access-fvblh\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:34 crc kubenswrapper[5043]: I1125 07:33:34.356903 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hvtdd" event={"ID":"09506302-add3-4161-8e13-c1c43c3e2b0f","Type":"ContainerDied","Data":"8a9a4c356f3f9be1342fd7f50c8a82679ecc9c01b7dcda43f524012785bf8697"} Nov 25 07:33:34 crc kubenswrapper[5043]: I1125 
07:33:34.357364 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a9a4c356f3f9be1342fd7f50c8a82679ecc9c01b7dcda43f524012785bf8697" Nov 25 07:33:34 crc kubenswrapper[5043]: I1125 07:33:34.356980 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hvtdd" Nov 25 07:33:34 crc kubenswrapper[5043]: E1125 07:33:34.357972 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api@sha256:7dd2e0dbb6bb5a6cecd1763e43479ca8cb6a0c502534e83c8795c0da2b50e099\\\"\"" pod="openstack/placement-db-sync-zvlrd" podUID="e9a33b7c-0771-42ee-b50d-abb6120f7fba" Nov 25 07:33:34 crc kubenswrapper[5043]: E1125 07:33:34.388701 5043 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:4c93a5cccb9971e24f05daf93b3aa11ba71752bc3469a1a1a2c4906f92f69645" Nov 25 07:33:34 crc kubenswrapper[5043]: E1125 07:33:34.388913 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:4c93a5cccb9971e24f05daf93b3aa11ba71752bc3469a1a1a2c4906f92f69645,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vsrt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-2zt29_openstack(66bda068-47b7-46f6-a75e-97dd76293fe9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 07:33:34 crc kubenswrapper[5043]: E1125 07:33:34.390277 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-2zt29" 
podUID="66bda068-47b7-46f6-a75e-97dd76293fe9" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.069157 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hvtdd"] Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.076804 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hvtdd"] Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.157371 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-pcdfx"] Nov 25 07:33:35 crc kubenswrapper[5043]: E1125 07:33:35.157787 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09506302-add3-4161-8e13-c1c43c3e2b0f" containerName="keystone-bootstrap" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.157808 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="09506302-add3-4161-8e13-c1c43c3e2b0f" containerName="keystone-bootstrap" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.158032 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="09506302-add3-4161-8e13-c1c43c3e2b0f" containerName="keystone-bootstrap" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.158655 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pcdfx" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.162145 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.162330 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.162469 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.163291 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.164170 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xt29v" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.170294 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pcdfx"] Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.291925 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-config-data\") pod \"keystone-bootstrap-pcdfx\" (UID: \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\") " pod="openstack/keystone-bootstrap-pcdfx" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.292257 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-scripts\") pod \"keystone-bootstrap-pcdfx\" (UID: \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\") " pod="openstack/keystone-bootstrap-pcdfx" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.292294 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lhs6b\" (UniqueName: \"kubernetes.io/projected/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-kube-api-access-lhs6b\") pod \"keystone-bootstrap-pcdfx\" (UID: \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\") " pod="openstack/keystone-bootstrap-pcdfx" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.292356 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-credential-keys\") pod \"keystone-bootstrap-pcdfx\" (UID: \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\") " pod="openstack/keystone-bootstrap-pcdfx" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.292409 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-fernet-keys\") pod \"keystone-bootstrap-pcdfx\" (UID: \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\") " pod="openstack/keystone-bootstrap-pcdfx" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.292437 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-combined-ca-bundle\") pod \"keystone-bootstrap-pcdfx\" (UID: \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\") " pod="openstack/keystone-bootstrap-pcdfx" Nov 25 07:33:35 crc kubenswrapper[5043]: E1125 07:33:35.366561 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:4c93a5cccb9971e24f05daf93b3aa11ba71752bc3469a1a1a2c4906f92f69645\\\"\"" pod="openstack/barbican-db-sync-2zt29" podUID="66bda068-47b7-46f6-a75e-97dd76293fe9" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.394076 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-scripts\") pod \"keystone-bootstrap-pcdfx\" (UID: \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\") " pod="openstack/keystone-bootstrap-pcdfx" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.394133 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhs6b\" (UniqueName: \"kubernetes.io/projected/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-kube-api-access-lhs6b\") pod \"keystone-bootstrap-pcdfx\" (UID: \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\") " pod="openstack/keystone-bootstrap-pcdfx" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.394191 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-credential-keys\") pod \"keystone-bootstrap-pcdfx\" (UID: \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\") " pod="openstack/keystone-bootstrap-pcdfx" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.394234 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-fernet-keys\") pod \"keystone-bootstrap-pcdfx\" (UID: \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\") " pod="openstack/keystone-bootstrap-pcdfx" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.394255 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-combined-ca-bundle\") pod \"keystone-bootstrap-pcdfx\" (UID: \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\") " pod="openstack/keystone-bootstrap-pcdfx" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.394304 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-config-data\") pod \"keystone-bootstrap-pcdfx\" (UID: \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\") " pod="openstack/keystone-bootstrap-pcdfx" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.400688 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-scripts\") pod \"keystone-bootstrap-pcdfx\" (UID: \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\") " pod="openstack/keystone-bootstrap-pcdfx" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.401962 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-config-data\") pod \"keystone-bootstrap-pcdfx\" (UID: \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\") " pod="openstack/keystone-bootstrap-pcdfx" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.402249 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-credential-keys\") pod \"keystone-bootstrap-pcdfx\" (UID: \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\") " pod="openstack/keystone-bootstrap-pcdfx" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.406325 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-combined-ca-bundle\") pod \"keystone-bootstrap-pcdfx\" (UID: \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\") " pod="openstack/keystone-bootstrap-pcdfx" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.415074 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhs6b\" (UniqueName: \"kubernetes.io/projected/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-kube-api-access-lhs6b\") pod \"keystone-bootstrap-pcdfx\" (UID: 
\"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\") " pod="openstack/keystone-bootstrap-pcdfx" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.448662 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-fernet-keys\") pod \"keystone-bootstrap-pcdfx\" (UID: \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\") " pod="openstack/keystone-bootstrap-pcdfx" Nov 25 07:33:35 crc kubenswrapper[5043]: I1125 07:33:35.484157 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pcdfx" Nov 25 07:33:36 crc kubenswrapper[5043]: I1125 07:33:36.979442 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09506302-add3-4161-8e13-c1c43c3e2b0f" path="/var/lib/kubelet/pods/09506302-add3-4161-8e13-c1c43c3e2b0f/volumes" Nov 25 07:33:38 crc kubenswrapper[5043]: I1125 07:33:38.579517 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75b58765b5-wwz57" podUID="1d4c3178-0a9a-44b3-b956-7d3024661593" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: i/o timeout" Nov 25 07:33:43 crc kubenswrapper[5043]: I1125 07:33:43.579997 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75b58765b5-wwz57" podUID="1d4c3178-0a9a-44b3-b956-7d3024661593" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: i/o timeout" Nov 25 07:33:43 crc kubenswrapper[5043]: I1125 07:33:43.580820 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75b58765b5-wwz57" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.467055 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b58765b5-wwz57" event={"ID":"1d4c3178-0a9a-44b3-b956-7d3024661593","Type":"ContainerDied","Data":"036facbab21f88be321066c0f60121ebc6207106141f2688714397e00f8ca1ff"} Nov 25 
07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.467408 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="036facbab21f88be321066c0f60121ebc6207106141f2688714397e00f8ca1ff" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.577714 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b58765b5-wwz57" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.587871 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b99879f8c-ftjwm" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.591007 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56fc7df589-k57nl" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.596419 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cf546d8d9-mh4vw" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.687012 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-horizon-secret-key\") pod \"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\" (UID: \"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\") " Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.687074 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d4c3178-0a9a-44b3-b956-7d3024661593-ovsdbserver-sb\") pod \"1d4c3178-0a9a-44b3-b956-7d3024661593\" (UID: \"1d4c3178-0a9a-44b3-b956-7d3024661593\") " Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.687106 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6grr\" (UniqueName: \"kubernetes.io/projected/1d4c3178-0a9a-44b3-b956-7d3024661593-kube-api-access-s6grr\") pod \"1d4c3178-0a9a-44b3-b956-7d3024661593\" (UID: 
\"1d4c3178-0a9a-44b3-b956-7d3024661593\") " Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.687136 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-logs\") pod \"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\" (UID: \"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\") " Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.687157 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed3bc362-7637-4288-b028-62e7d813bba0-scripts\") pod \"ed3bc362-7637-4288-b028-62e7d813bba0\" (UID: \"ed3bc362-7637-4288-b028-62e7d813bba0\") " Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.687183 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d4c3178-0a9a-44b3-b956-7d3024661593-dns-svc\") pod \"1d4c3178-0a9a-44b3-b956-7d3024661593\" (UID: \"1d4c3178-0a9a-44b3-b956-7d3024661593\") " Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.687196 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-scripts\") pod \"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\" (UID: \"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\") " Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.687218 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-config-data\") pod \"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\" (UID: \"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\") " Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.687238 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp2wz\" (UniqueName: 
\"kubernetes.io/projected/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-kube-api-access-jp2wz\") pod \"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\" (UID: \"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\") " Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.687255 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-horizon-secret-key\") pod \"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\" (UID: \"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\") " Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.687277 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed3bc362-7637-4288-b028-62e7d813bba0-logs\") pod \"ed3bc362-7637-4288-b028-62e7d813bba0\" (UID: \"ed3bc362-7637-4288-b028-62e7d813bba0\") " Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.687297 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-scripts\") pod \"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\" (UID: \"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\") " Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.687329 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-config-data\") pod \"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\" (UID: \"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\") " Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.687361 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-logs\") pod \"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\" (UID: \"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed\") " Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.687386 5043 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d4c3178-0a9a-44b3-b956-7d3024661593-ovsdbserver-nb\") pod \"1d4c3178-0a9a-44b3-b956-7d3024661593\" (UID: \"1d4c3178-0a9a-44b3-b956-7d3024661593\") " Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.687406 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed3bc362-7637-4288-b028-62e7d813bba0-config-data\") pod \"ed3bc362-7637-4288-b028-62e7d813bba0\" (UID: \"ed3bc362-7637-4288-b028-62e7d813bba0\") " Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.687422 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js899\" (UniqueName: \"kubernetes.io/projected/ed3bc362-7637-4288-b028-62e7d813bba0-kube-api-access-js899\") pod \"ed3bc362-7637-4288-b028-62e7d813bba0\" (UID: \"ed3bc362-7637-4288-b028-62e7d813bba0\") " Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.687438 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d4c3178-0a9a-44b3-b956-7d3024661593-config\") pod \"1d4c3178-0a9a-44b3-b956-7d3024661593\" (UID: \"1d4c3178-0a9a-44b3-b956-7d3024661593\") " Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.687454 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed3bc362-7637-4288-b028-62e7d813bba0-horizon-secret-key\") pod \"ed3bc362-7637-4288-b028-62e7d813bba0\" (UID: \"ed3bc362-7637-4288-b028-62e7d813bba0\") " Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.687479 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn2wq\" (UniqueName: \"kubernetes.io/projected/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-kube-api-access-rn2wq\") pod \"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\" (UID: 
\"fe721f71-ddcd-45a4-9625-f8f41e91ac8b\") " Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.688364 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed3bc362-7637-4288-b028-62e7d813bba0-scripts" (OuterVolumeSpecName: "scripts") pod "ed3bc362-7637-4288-b028-62e7d813bba0" (UID: "ed3bc362-7637-4288-b028-62e7d813bba0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.688405 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-scripts" (OuterVolumeSpecName: "scripts") pod "1bdacce3-f6fa-45c8-be3b-b7473af1a8ed" (UID: "1bdacce3-f6fa-45c8-be3b-b7473af1a8ed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.689212 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed3bc362-7637-4288-b028-62e7d813bba0-logs" (OuterVolumeSpecName: "logs") pod "ed3bc362-7637-4288-b028-62e7d813bba0" (UID: "ed3bc362-7637-4288-b028-62e7d813bba0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.689363 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-logs" (OuterVolumeSpecName: "logs") pod "1bdacce3-f6fa-45c8-be3b-b7473af1a8ed" (UID: "1bdacce3-f6fa-45c8-be3b-b7473af1a8ed"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.689677 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-config-data" (OuterVolumeSpecName: "config-data") pod "1bdacce3-f6fa-45c8-be3b-b7473af1a8ed" (UID: "1bdacce3-f6fa-45c8-be3b-b7473af1a8ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.689879 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-logs" (OuterVolumeSpecName: "logs") pod "fe721f71-ddcd-45a4-9625-f8f41e91ac8b" (UID: "fe721f71-ddcd-45a4-9625-f8f41e91ac8b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.690589 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-config-data" (OuterVolumeSpecName: "config-data") pod "fe721f71-ddcd-45a4-9625-f8f41e91ac8b" (UID: "fe721f71-ddcd-45a4-9625-f8f41e91ac8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.691960 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-kube-api-access-rn2wq" (OuterVolumeSpecName: "kube-api-access-rn2wq") pod "fe721f71-ddcd-45a4-9625-f8f41e91ac8b" (UID: "fe721f71-ddcd-45a4-9625-f8f41e91ac8b"). InnerVolumeSpecName "kube-api-access-rn2wq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.692788 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-scripts" (OuterVolumeSpecName: "scripts") pod "fe721f71-ddcd-45a4-9625-f8f41e91ac8b" (UID: "fe721f71-ddcd-45a4-9625-f8f41e91ac8b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.693096 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed3bc362-7637-4288-b028-62e7d813bba0-config-data" (OuterVolumeSpecName: "config-data") pod "ed3bc362-7637-4288-b028-62e7d813bba0" (UID: "ed3bc362-7637-4288-b028-62e7d813bba0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.693431 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d4c3178-0a9a-44b3-b956-7d3024661593-kube-api-access-s6grr" (OuterVolumeSpecName: "kube-api-access-s6grr") pod "1d4c3178-0a9a-44b3-b956-7d3024661593" (UID: "1d4c3178-0a9a-44b3-b956-7d3024661593"). InnerVolumeSpecName "kube-api-access-s6grr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.693647 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-kube-api-access-jp2wz" (OuterVolumeSpecName: "kube-api-access-jp2wz") pod "1bdacce3-f6fa-45c8-be3b-b7473af1a8ed" (UID: "1bdacce3-f6fa-45c8-be3b-b7473af1a8ed"). InnerVolumeSpecName "kube-api-access-jp2wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.693890 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1bdacce3-f6fa-45c8-be3b-b7473af1a8ed" (UID: "1bdacce3-f6fa-45c8-be3b-b7473af1a8ed"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.694294 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed3bc362-7637-4288-b028-62e7d813bba0-kube-api-access-js899" (OuterVolumeSpecName: "kube-api-access-js899") pod "ed3bc362-7637-4288-b028-62e7d813bba0" (UID: "ed3bc362-7637-4288-b028-62e7d813bba0"). InnerVolumeSpecName "kube-api-access-js899". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.694539 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed3bc362-7637-4288-b028-62e7d813bba0-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ed3bc362-7637-4288-b028-62e7d813bba0" (UID: "ed3bc362-7637-4288-b028-62e7d813bba0"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.703759 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "fe721f71-ddcd-45a4-9625-f8f41e91ac8b" (UID: "fe721f71-ddcd-45a4-9625-f8f41e91ac8b"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.732635 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d4c3178-0a9a-44b3-b956-7d3024661593-config" (OuterVolumeSpecName: "config") pod "1d4c3178-0a9a-44b3-b956-7d3024661593" (UID: "1d4c3178-0a9a-44b3-b956-7d3024661593"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.732760 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d4c3178-0a9a-44b3-b956-7d3024661593-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1d4c3178-0a9a-44b3-b956-7d3024661593" (UID: "1d4c3178-0a9a-44b3-b956-7d3024661593"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.747994 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d4c3178-0a9a-44b3-b956-7d3024661593-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d4c3178-0a9a-44b3-b956-7d3024661593" (UID: "1d4c3178-0a9a-44b3-b956-7d3024661593"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.748139 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d4c3178-0a9a-44b3-b956-7d3024661593-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1d4c3178-0a9a-44b3-b956-7d3024661593" (UID: "1d4c3178-0a9a-44b3-b956-7d3024661593"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.789509 5043 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d4c3178-0a9a-44b3-b956-7d3024661593-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.789547 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed3bc362-7637-4288-b028-62e7d813bba0-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.789582 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js899\" (UniqueName: \"kubernetes.io/projected/ed3bc362-7637-4288-b028-62e7d813bba0-kube-api-access-js899\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.789597 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d4c3178-0a9a-44b3-b956-7d3024661593-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.789631 5043 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed3bc362-7637-4288-b028-62e7d813bba0-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.789660 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn2wq\" (UniqueName: \"kubernetes.io/projected/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-kube-api-access-rn2wq\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.789669 5043 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 
07:33:44.789677 5043 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d4c3178-0a9a-44b3-b956-7d3024661593-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.789685 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6grr\" (UniqueName: \"kubernetes.io/projected/1d4c3178-0a9a-44b3-b956-7d3024661593-kube-api-access-s6grr\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.789695 5043 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-logs\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.789703 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed3bc362-7637-4288-b028-62e7d813bba0-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.789710 5043 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d4c3178-0a9a-44b3-b956-7d3024661593-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.789718 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.789725 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.789733 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp2wz\" (UniqueName: 
\"kubernetes.io/projected/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-kube-api-access-jp2wz\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.789741 5043 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.789748 5043 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed3bc362-7637-4288-b028-62e7d813bba0-logs\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.789756 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe721f71-ddcd-45a4-9625-f8f41e91ac8b-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.789763 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:44 crc kubenswrapper[5043]: I1125 07:33:44.789771 5043 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed-logs\") on node \"crc\" DevicePath \"\"" Nov 25 07:33:45 crc kubenswrapper[5043]: I1125 07:33:45.476018 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56fc7df589-k57nl" event={"ID":"fe721f71-ddcd-45a4-9625-f8f41e91ac8b","Type":"ContainerDied","Data":"beaa4cb360260782c03b9cac0a48b02d31f013928df99333a9c658790154c5a1"} Nov 25 07:33:45 crc kubenswrapper[5043]: I1125 07:33:45.476040 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56fc7df589-k57nl" Nov 25 07:33:45 crc kubenswrapper[5043]: I1125 07:33:45.477473 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b99879f8c-ftjwm" event={"ID":"ed3bc362-7637-4288-b028-62e7d813bba0","Type":"ContainerDied","Data":"21bad1fec766a2caa5c99ae50a19295057a802260a5783dd054f56630351523e"} Nov 25 07:33:45 crc kubenswrapper[5043]: I1125 07:33:45.477565 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b99879f8c-ftjwm" Nov 25 07:33:45 crc kubenswrapper[5043]: I1125 07:33:45.479880 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b58765b5-wwz57" Nov 25 07:33:45 crc kubenswrapper[5043]: I1125 07:33:45.479897 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cf546d8d9-mh4vw" Nov 25 07:33:45 crc kubenswrapper[5043]: I1125 07:33:45.479882 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cf546d8d9-mh4vw" event={"ID":"1bdacce3-f6fa-45c8-be3b-b7473af1a8ed","Type":"ContainerDied","Data":"f1f7b0ee2950b4219aa64f4f3a1775e45c02b78427b1c668c0784d8828ecde2d"} Nov 25 07:33:45 crc kubenswrapper[5043]: I1125 07:33:45.531654 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56fc7df589-k57nl"] Nov 25 07:33:45 crc kubenswrapper[5043]: I1125 07:33:45.537506 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-56fc7df589-k57nl"] Nov 25 07:33:45 crc kubenswrapper[5043]: I1125 07:33:45.552421 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75b58765b5-wwz57"] Nov 25 07:33:45 crc kubenswrapper[5043]: I1125 07:33:45.560787 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75b58765b5-wwz57"] Nov 25 07:33:45 crc kubenswrapper[5043]: I1125 07:33:45.585403 5043 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/horizon-7b99879f8c-ftjwm"] Nov 25 07:33:45 crc kubenswrapper[5043]: I1125 07:33:45.591981 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7b99879f8c-ftjwm"] Nov 25 07:33:45 crc kubenswrapper[5043]: I1125 07:33:45.605670 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6cf546d8d9-mh4vw"] Nov 25 07:33:45 crc kubenswrapper[5043]: I1125 07:33:45.611270 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6cf546d8d9-mh4vw"] Nov 25 07:33:46 crc kubenswrapper[5043]: E1125 07:33:46.793745 5043 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879" Nov 25 07:33:46 crc kubenswrapper[5043]: E1125 07:33:46.794281 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lkch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-jzr78_openstack(2de64291-b46f-4ba3-bdec-a3bad5873881): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 07:33:46 crc kubenswrapper[5043]: E1125 07:33:46.795498 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-jzr78" podUID="2de64291-b46f-4ba3-bdec-a3bad5873881" Nov 25 07:33:46 crc kubenswrapper[5043]: I1125 07:33:46.924349 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5f67c4b5d4-f96jj"] Nov 25 07:33:46 crc kubenswrapper[5043]: I1125 07:33:46.981790 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bdacce3-f6fa-45c8-be3b-b7473af1a8ed" path="/var/lib/kubelet/pods/1bdacce3-f6fa-45c8-be3b-b7473af1a8ed/volumes" Nov 25 07:33:46 crc kubenswrapper[5043]: I1125 07:33:46.982541 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d4c3178-0a9a-44b3-b956-7d3024661593" path="/var/lib/kubelet/pods/1d4c3178-0a9a-44b3-b956-7d3024661593/volumes" Nov 25 07:33:47 crc kubenswrapper[5043]: I1125 07:33:47.000887 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed3bc362-7637-4288-b028-62e7d813bba0" path="/var/lib/kubelet/pods/ed3bc362-7637-4288-b028-62e7d813bba0/volumes" Nov 25 07:33:47 crc kubenswrapper[5043]: I1125 07:33:47.019517 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe721f71-ddcd-45a4-9625-f8f41e91ac8b" path="/var/lib/kubelet/pods/fe721f71-ddcd-45a4-9625-f8f41e91ac8b/volumes" Nov 25 07:33:47 crc kubenswrapper[5043]: W1125 07:33:47.031532 5043 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ecb6236_f0c1_4042_ad6e_4bcd6c5ab423.slice/crio-948b7d41203a9b6183cf8c287b60eeb3fe6eb4da2a0b42be312b474516c500b7 WatchSource:0}: Error finding container 948b7d41203a9b6183cf8c287b60eeb3fe6eb4da2a0b42be312b474516c500b7: Status 404 returned error can't find the container with id 948b7d41203a9b6183cf8c287b60eeb3fe6eb4da2a0b42be312b474516c500b7 Nov 25 07:33:47 crc kubenswrapper[5043]: I1125 07:33:47.036368 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pcdfx"] Nov 25 07:33:47 crc kubenswrapper[5043]: I1125 07:33:47.038114 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 07:33:47 crc kubenswrapper[5043]: I1125 07:33:47.045142 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b5bc7cfb-2sfts"] Nov 25 07:33:47 crc kubenswrapper[5043]: I1125 07:33:47.507849 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9","Type":"ContainerStarted","Data":"454fdb2f0f73c6597e7069e12b806804c5e1271ca9ad8bd486816d448f18de78"} Nov 25 07:33:47 crc kubenswrapper[5043]: I1125 07:33:47.513219 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f67c4b5d4-f96jj" event={"ID":"13e8a8ee-bfe8-415b-b76f-89d7d7296659","Type":"ContainerStarted","Data":"a5cd052f704d0e4552f5c907a798795d8635067d1a85aff22fe60625adb9c58e"} Nov 25 07:33:47 crc kubenswrapper[5043]: I1125 07:33:47.514353 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b5bc7cfb-2sfts" event={"ID":"9955ab7e-1d74-461a-a9b2-73e9f82d48fe","Type":"ContainerStarted","Data":"8ad04e3855a8b7856595c75fe8b6095c1a1bffe4d3086fa6e63258cf98523c3b"} Nov 25 07:33:47 crc kubenswrapper[5043]: I1125 07:33:47.516848 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pcdfx" 
event={"ID":"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423","Type":"ContainerStarted","Data":"948b7d41203a9b6183cf8c287b60eeb3fe6eb4da2a0b42be312b474516c500b7"} Nov 25 07:33:48 crc kubenswrapper[5043]: I1125 07:33:48.581228 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75b58765b5-wwz57" podUID="1d4c3178-0a9a-44b3-b956-7d3024661593" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: i/o timeout" Nov 25 07:33:55 crc kubenswrapper[5043]: E1125 07:33:55.035884 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879\\\"\"" pod="openstack/cinder-db-sync-jzr78" podUID="2de64291-b46f-4ba3-bdec-a3bad5873881" Nov 25 07:33:55 crc kubenswrapper[5043]: I1125 07:33:55.600779 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pcdfx" event={"ID":"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423","Type":"ContainerStarted","Data":"c3f7028217a5618f8744e16c74ebc4b7c1011405e151cb03269b944e2b5a5dbc"} Nov 25 07:33:55 crc kubenswrapper[5043]: I1125 07:33:55.606364 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2zt29" event={"ID":"66bda068-47b7-46f6-a75e-97dd76293fe9","Type":"ContainerStarted","Data":"c42c8bed9b1a3aed9759f3c09cca6adbf2b372034087590476c5f0f374ae8722"} Nov 25 07:33:55 crc kubenswrapper[5043]: I1125 07:33:55.608301 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f67c4b5d4-f96jj" event={"ID":"13e8a8ee-bfe8-415b-b76f-89d7d7296659","Type":"ContainerStarted","Data":"a890089a494ffcc5b0ad8d007e0c40d78b8d68101b50d4353ca1bf921da32e46"} Nov 25 07:33:55 crc kubenswrapper[5043]: I1125 07:33:55.609653 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b5bc7cfb-2sfts" 
event={"ID":"9955ab7e-1d74-461a-a9b2-73e9f82d48fe","Type":"ContainerStarted","Data":"7c5e903c6193703b2095fdf682cae9d61dc7437177c9fae106df613eaa3dfb94"} Nov 25 07:33:55 crc kubenswrapper[5043]: I1125 07:33:55.610971 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-zvlrd" event={"ID":"e9a33b7c-0771-42ee-b50d-abb6120f7fba","Type":"ContainerStarted","Data":"ea4387a89fee869b2d4faf2ae8f63191ac27bf811d1a6ac017be83593f0221d4"} Nov 25 07:33:55 crc kubenswrapper[5043]: I1125 07:33:55.626176 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-pcdfx" podStartSLOduration=20.626155608 podStartE2EDuration="20.626155608s" podCreationTimestamp="2025-11-25 07:33:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:33:55.623809615 +0000 UTC m=+1099.792005336" watchObservedRunningTime="2025-11-25 07:33:55.626155608 +0000 UTC m=+1099.794351339" Nov 25 07:33:55 crc kubenswrapper[5043]: I1125 07:33:55.646380 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-zvlrd" podStartSLOduration=2.575157003 podStartE2EDuration="39.646357879s" podCreationTimestamp="2025-11-25 07:33:16 +0000 UTC" firstStartedPulling="2025-11-25 07:33:18.087254776 +0000 UTC m=+1062.255450497" lastFinishedPulling="2025-11-25 07:33:55.158455652 +0000 UTC m=+1099.326651373" observedRunningTime="2025-11-25 07:33:55.642025383 +0000 UTC m=+1099.810221124" watchObservedRunningTime="2025-11-25 07:33:55.646357879 +0000 UTC m=+1099.814553600" Nov 25 07:33:55 crc kubenswrapper[5043]: I1125 07:33:55.664821 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-2zt29" podStartSLOduration=2.548816575 podStartE2EDuration="39.664626168s" podCreationTimestamp="2025-11-25 07:33:16 +0000 UTC" firstStartedPulling="2025-11-25 07:33:17.9306985 
+0000 UTC m=+1062.098894221" lastFinishedPulling="2025-11-25 07:33:55.046508083 +0000 UTC m=+1099.214703814" observedRunningTime="2025-11-25 07:33:55.657944419 +0000 UTC m=+1099.826140140" watchObservedRunningTime="2025-11-25 07:33:55.664626168 +0000 UTC m=+1099.832821889" Nov 25 07:33:56 crc kubenswrapper[5043]: I1125 07:33:56.626146 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9","Type":"ContainerStarted","Data":"5835f75e81ebc5781555280ba2b8c5a0b9c0b4cda0beaeced5e4309980a2fe70"} Nov 25 07:33:56 crc kubenswrapper[5043]: I1125 07:33:56.630032 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f67c4b5d4-f96jj" event={"ID":"13e8a8ee-bfe8-415b-b76f-89d7d7296659","Type":"ContainerStarted","Data":"bfc2eed39c8eaf3dbd60eb199291ca1000765933e0f5031ac80942228e195e30"} Nov 25 07:33:56 crc kubenswrapper[5043]: I1125 07:33:56.645527 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b5bc7cfb-2sfts" event={"ID":"9955ab7e-1d74-461a-a9b2-73e9f82d48fe","Type":"ContainerStarted","Data":"97259a18989a2bb9b137543868fb1784b4ec1f8d5e0c1ed6ac4074b3fb57c7c5"} Nov 25 07:33:56 crc kubenswrapper[5043]: I1125 07:33:56.665933 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5f67c4b5d4-f96jj" podStartSLOduration=23.442324826 podStartE2EDuration="31.665904418s" podCreationTimestamp="2025-11-25 07:33:25 +0000 UTC" firstStartedPulling="2025-11-25 07:33:46.936449263 +0000 UTC m=+1091.104644974" lastFinishedPulling="2025-11-25 07:33:55.160028855 +0000 UTC m=+1099.328224566" observedRunningTime="2025-11-25 07:33:56.657304817 +0000 UTC m=+1100.825500538" watchObservedRunningTime="2025-11-25 07:33:56.665904418 +0000 UTC m=+1100.834100139" Nov 25 07:33:56 crc kubenswrapper[5043]: I1125 07:33:56.680163 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-b5bc7cfb-2sfts" 
podStartSLOduration=23.558189373 podStartE2EDuration="31.68014717s" podCreationTimestamp="2025-11-25 07:33:25 +0000 UTC" firstStartedPulling="2025-11-25 07:33:47.036358852 +0000 UTC m=+1091.204554563" lastFinishedPulling="2025-11-25 07:33:55.158316639 +0000 UTC m=+1099.326512360" observedRunningTime="2025-11-25 07:33:56.67679237 +0000 UTC m=+1100.844988101" watchObservedRunningTime="2025-11-25 07:33:56.68014717 +0000 UTC m=+1100.848342891" Nov 25 07:33:58 crc kubenswrapper[5043]: I1125 07:33:58.665981 5043 generic.go:334] "Generic (PLEG): container finished" podID="5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423" containerID="c3f7028217a5618f8744e16c74ebc4b7c1011405e151cb03269b944e2b5a5dbc" exitCode=0 Nov 25 07:33:58 crc kubenswrapper[5043]: I1125 07:33:58.666048 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pcdfx" event={"ID":"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423","Type":"ContainerDied","Data":"c3f7028217a5618f8744e16c74ebc4b7c1011405e151cb03269b944e2b5a5dbc"} Nov 25 07:33:59 crc kubenswrapper[5043]: I1125 07:33:59.675575 5043 generic.go:334] "Generic (PLEG): container finished" podID="e9a33b7c-0771-42ee-b50d-abb6120f7fba" containerID="ea4387a89fee869b2d4faf2ae8f63191ac27bf811d1a6ac017be83593f0221d4" exitCode=0 Nov 25 07:33:59 crc kubenswrapper[5043]: I1125 07:33:59.675655 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-zvlrd" event={"ID":"e9a33b7c-0771-42ee-b50d-abb6120f7fba","Type":"ContainerDied","Data":"ea4387a89fee869b2d4faf2ae8f63191ac27bf811d1a6ac017be83593f0221d4"} Nov 25 07:33:59 crc kubenswrapper[5043]: I1125 07:33:59.679544 5043 generic.go:334] "Generic (PLEG): container finished" podID="66bda068-47b7-46f6-a75e-97dd76293fe9" containerID="c42c8bed9b1a3aed9759f3c09cca6adbf2b372034087590476c5f0f374ae8722" exitCode=0 Nov 25 07:33:59 crc kubenswrapper[5043]: I1125 07:33:59.679642 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2zt29" 
event={"ID":"66bda068-47b7-46f6-a75e-97dd76293fe9","Type":"ContainerDied","Data":"c42c8bed9b1a3aed9759f3c09cca6adbf2b372034087590476c5f0f374ae8722"} Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.049076 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pcdfx" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.109157 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-scripts\") pod \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\" (UID: \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\") " Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.109240 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-combined-ca-bundle\") pod \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\" (UID: \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\") " Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.109309 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhs6b\" (UniqueName: \"kubernetes.io/projected/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-kube-api-access-lhs6b\") pod \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\" (UID: \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\") " Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.109425 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-config-data\") pod \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\" (UID: \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\") " Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.109503 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-credential-keys\") pod \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\" (UID: \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\") " Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.109529 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-fernet-keys\") pod \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\" (UID: \"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423\") " Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.115872 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-scripts" (OuterVolumeSpecName: "scripts") pod "5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423" (UID: "5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.115872 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-kube-api-access-lhs6b" (OuterVolumeSpecName: "kube-api-access-lhs6b") pod "5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423" (UID: "5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423"). InnerVolumeSpecName "kube-api-access-lhs6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.120817 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423" (UID: "5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.120946 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423" (UID: "5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.125381 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2zt29" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.131764 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-zvlrd" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.135844 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-config-data" (OuterVolumeSpecName: "config-data") pod "5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423" (UID: "5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.144581 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423" (UID: "5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.211117 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66bda068-47b7-46f6-a75e-97dd76293fe9-combined-ca-bundle\") pod \"66bda068-47b7-46f6-a75e-97dd76293fe9\" (UID: \"66bda068-47b7-46f6-a75e-97dd76293fe9\") " Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.211215 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8ctt\" (UniqueName: \"kubernetes.io/projected/e9a33b7c-0771-42ee-b50d-abb6120f7fba-kube-api-access-r8ctt\") pod \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\" (UID: \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\") " Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.211247 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a33b7c-0771-42ee-b50d-abb6120f7fba-combined-ca-bundle\") pod \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\" (UID: \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\") " Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.211313 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9a33b7c-0771-42ee-b50d-abb6120f7fba-config-data\") pod \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\" (UID: \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\") " Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.211344 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsrt8\" (UniqueName: \"kubernetes.io/projected/66bda068-47b7-46f6-a75e-97dd76293fe9-kube-api-access-vsrt8\") pod \"66bda068-47b7-46f6-a75e-97dd76293fe9\" (UID: \"66bda068-47b7-46f6-a75e-97dd76293fe9\") " Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.211421 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9a33b7c-0771-42ee-b50d-abb6120f7fba-logs\") pod \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\" (UID: \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\") " Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.211488 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9a33b7c-0771-42ee-b50d-abb6120f7fba-scripts\") pod \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\" (UID: \"e9a33b7c-0771-42ee-b50d-abb6120f7fba\") " Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.211540 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/66bda068-47b7-46f6-a75e-97dd76293fe9-db-sync-config-data\") pod \"66bda068-47b7-46f6-a75e-97dd76293fe9\" (UID: \"66bda068-47b7-46f6-a75e-97dd76293fe9\") " Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.212186 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9a33b7c-0771-42ee-b50d-abb6120f7fba-logs" (OuterVolumeSpecName: "logs") pod "e9a33b7c-0771-42ee-b50d-abb6120f7fba" (UID: "e9a33b7c-0771-42ee-b50d-abb6120f7fba"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.212203 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.212411 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhs6b\" (UniqueName: \"kubernetes.io/projected/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-kube-api-access-lhs6b\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.212508 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.212594 5043 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.212696 5043 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.212786 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.214951 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66bda068-47b7-46f6-a75e-97dd76293fe9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "66bda068-47b7-46f6-a75e-97dd76293fe9" (UID: 
"66bda068-47b7-46f6-a75e-97dd76293fe9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.215423 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66bda068-47b7-46f6-a75e-97dd76293fe9-kube-api-access-vsrt8" (OuterVolumeSpecName: "kube-api-access-vsrt8") pod "66bda068-47b7-46f6-a75e-97dd76293fe9" (UID: "66bda068-47b7-46f6-a75e-97dd76293fe9"). InnerVolumeSpecName "kube-api-access-vsrt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.216476 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9a33b7c-0771-42ee-b50d-abb6120f7fba-scripts" (OuterVolumeSpecName: "scripts") pod "e9a33b7c-0771-42ee-b50d-abb6120f7fba" (UID: "e9a33b7c-0771-42ee-b50d-abb6120f7fba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.217820 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9a33b7c-0771-42ee-b50d-abb6120f7fba-kube-api-access-r8ctt" (OuterVolumeSpecName: "kube-api-access-r8ctt") pod "e9a33b7c-0771-42ee-b50d-abb6120f7fba" (UID: "e9a33b7c-0771-42ee-b50d-abb6120f7fba"). InnerVolumeSpecName "kube-api-access-r8ctt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.231758 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66bda068-47b7-46f6-a75e-97dd76293fe9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66bda068-47b7-46f6-a75e-97dd76293fe9" (UID: "66bda068-47b7-46f6-a75e-97dd76293fe9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.232323 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9a33b7c-0771-42ee-b50d-abb6120f7fba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9a33b7c-0771-42ee-b50d-abb6120f7fba" (UID: "e9a33b7c-0771-42ee-b50d-abb6120f7fba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.234655 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9a33b7c-0771-42ee-b50d-abb6120f7fba-config-data" (OuterVolumeSpecName: "config-data") pod "e9a33b7c-0771-42ee-b50d-abb6120f7fba" (UID: "e9a33b7c-0771-42ee-b50d-abb6120f7fba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.314719 5043 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9a33b7c-0771-42ee-b50d-abb6120f7fba-logs\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.314756 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9a33b7c-0771-42ee-b50d-abb6120f7fba-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.314767 5043 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/66bda068-47b7-46f6-a75e-97dd76293fe9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.314778 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66bda068-47b7-46f6-a75e-97dd76293fe9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:01 crc 
kubenswrapper[5043]: I1125 07:34:01.314788 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8ctt\" (UniqueName: \"kubernetes.io/projected/e9a33b7c-0771-42ee-b50d-abb6120f7fba-kube-api-access-r8ctt\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.314797 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9a33b7c-0771-42ee-b50d-abb6120f7fba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.314806 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9a33b7c-0771-42ee-b50d-abb6120f7fba-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.314815 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsrt8\" (UniqueName: \"kubernetes.io/projected/66bda068-47b7-46f6-a75e-97dd76293fe9-kube-api-access-vsrt8\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.699786 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9","Type":"ContainerStarted","Data":"ca615206e43a29a00ef8cce621d97a07234623d0a1504cd9aa6d6d6c99e8c563"} Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.702475 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-zvlrd" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.704256 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-zvlrd" event={"ID":"e9a33b7c-0771-42ee-b50d-abb6120f7fba","Type":"ContainerDied","Data":"7405c92fe2375d8c40a1c5a0ffc72ff99f21d641e545d699a48084e7eac33774"} Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.704296 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7405c92fe2375d8c40a1c5a0ffc72ff99f21d641e545d699a48084e7eac33774" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.710453 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pcdfx" event={"ID":"5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423","Type":"ContainerDied","Data":"948b7d41203a9b6183cf8c287b60eeb3fe6eb4da2a0b42be312b474516c500b7"} Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.710520 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="948b7d41203a9b6183cf8c287b60eeb3fe6eb4da2a0b42be312b474516c500b7" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.710667 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pcdfx" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.732631 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2zt29" event={"ID":"66bda068-47b7-46f6-a75e-97dd76293fe9","Type":"ContainerDied","Data":"3307f1f8b33af40d5c4cc29b170e690fc448116ccfde35f37a1a408c3b3df5a5"} Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.732665 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3307f1f8b33af40d5c4cc29b170e690fc448116ccfde35f37a1a408c3b3df5a5" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.732711 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2zt29" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.787352 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-78c76fd9c4-8nvkz"] Nov 25 07:34:01 crc kubenswrapper[5043]: E1125 07:34:01.789186 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66bda068-47b7-46f6-a75e-97dd76293fe9" containerName="barbican-db-sync" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.789234 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="66bda068-47b7-46f6-a75e-97dd76293fe9" containerName="barbican-db-sync" Nov 25 07:34:01 crc kubenswrapper[5043]: E1125 07:34:01.789260 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d4c3178-0a9a-44b3-b956-7d3024661593" containerName="init" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.789269 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d4c3178-0a9a-44b3-b956-7d3024661593" containerName="init" Nov 25 07:34:01 crc kubenswrapper[5043]: E1125 07:34:01.789287 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d4c3178-0a9a-44b3-b956-7d3024661593" containerName="dnsmasq-dns" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.789294 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d4c3178-0a9a-44b3-b956-7d3024661593" containerName="dnsmasq-dns" Nov 25 07:34:01 crc kubenswrapper[5043]: E1125 07:34:01.789310 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9a33b7c-0771-42ee-b50d-abb6120f7fba" containerName="placement-db-sync" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.789317 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9a33b7c-0771-42ee-b50d-abb6120f7fba" containerName="placement-db-sync" Nov 25 07:34:01 crc kubenswrapper[5043]: E1125 07:34:01.789348 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423" containerName="keystone-bootstrap" Nov 
25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.789355 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423" containerName="keystone-bootstrap" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.789694 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="66bda068-47b7-46f6-a75e-97dd76293fe9" containerName="barbican-db-sync" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.789727 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d4c3178-0a9a-44b3-b956-7d3024661593" containerName="dnsmasq-dns" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.789748 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9a33b7c-0771-42ee-b50d-abb6120f7fba" containerName="placement-db-sync" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.789769 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423" containerName="keystone-bootstrap" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.790651 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.793373 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.794050 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.794184 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.794290 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-b5hzn" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.794400 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.807377 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78c76fd9c4-8nvkz"] Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.932757 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6b74d9cbc5-zq8tk"] Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.934032 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6b74d9cbc5-zq8tk" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.948353 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ae8142b-9631-41b8-94ea-cad294cf0fbf-scripts\") pod \"placement-78c76fd9c4-8nvkz\" (UID: \"8ae8142b-9631-41b8-94ea-cad294cf0fbf\") " pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.948453 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ae8142b-9631-41b8-94ea-cad294cf0fbf-logs\") pod \"placement-78c76fd9c4-8nvkz\" (UID: \"8ae8142b-9631-41b8-94ea-cad294cf0fbf\") " pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.948475 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ae8142b-9631-41b8-94ea-cad294cf0fbf-combined-ca-bundle\") pod \"placement-78c76fd9c4-8nvkz\" (UID: \"8ae8142b-9631-41b8-94ea-cad294cf0fbf\") " pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.948502 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ae8142b-9631-41b8-94ea-cad294cf0fbf-config-data\") pod \"placement-78c76fd9c4-8nvkz\" (UID: \"8ae8142b-9631-41b8-94ea-cad294cf0fbf\") " pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.948525 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae8142b-9631-41b8-94ea-cad294cf0fbf-internal-tls-certs\") pod \"placement-78c76fd9c4-8nvkz\" (UID: 
\"8ae8142b-9631-41b8-94ea-cad294cf0fbf\") " pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.948582 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae8142b-9631-41b8-94ea-cad294cf0fbf-public-tls-certs\") pod \"placement-78c76fd9c4-8nvkz\" (UID: \"8ae8142b-9631-41b8-94ea-cad294cf0fbf\") " pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.948656 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx6kw\" (UniqueName: \"kubernetes.io/projected/8ae8142b-9631-41b8-94ea-cad294cf0fbf-kube-api-access-qx6kw\") pod \"placement-78c76fd9c4-8nvkz\" (UID: \"8ae8142b-9631-41b8-94ea-cad294cf0fbf\") " pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.949149 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.949406 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 07:34:01 crc kubenswrapper[5043]: I1125 07:34:01.949682 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-lvw85" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.001185 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-54895cb446-cqmz8"] Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.003034 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-54895cb446-cqmz8" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.006011 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.029341 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-54895cb446-cqmz8"] Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.050256 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx6kw\" (UniqueName: \"kubernetes.io/projected/8ae8142b-9631-41b8-94ea-cad294cf0fbf-kube-api-access-qx6kw\") pod \"placement-78c76fd9c4-8nvkz\" (UID: \"8ae8142b-9631-41b8-94ea-cad294cf0fbf\") " pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.050302 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e706e93d-1fc1-4969-b8ba-5ff803545131-combined-ca-bundle\") pod \"barbican-worker-6b74d9cbc5-zq8tk\" (UID: \"e706e93d-1fc1-4969-b8ba-5ff803545131\") " pod="openstack/barbican-worker-6b74d9cbc5-zq8tk" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.050324 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e706e93d-1fc1-4969-b8ba-5ff803545131-config-data\") pod \"barbican-worker-6b74d9cbc5-zq8tk\" (UID: \"e706e93d-1fc1-4969-b8ba-5ff803545131\") " pod="openstack/barbican-worker-6b74d9cbc5-zq8tk" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.050377 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e706e93d-1fc1-4969-b8ba-5ff803545131-config-data-custom\") pod \"barbican-worker-6b74d9cbc5-zq8tk\" 
(UID: \"e706e93d-1fc1-4969-b8ba-5ff803545131\") " pod="openstack/barbican-worker-6b74d9cbc5-zq8tk" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.050408 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ae8142b-9631-41b8-94ea-cad294cf0fbf-scripts\") pod \"placement-78c76fd9c4-8nvkz\" (UID: \"8ae8142b-9631-41b8-94ea-cad294cf0fbf\") " pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.050425 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e706e93d-1fc1-4969-b8ba-5ff803545131-logs\") pod \"barbican-worker-6b74d9cbc5-zq8tk\" (UID: \"e706e93d-1fc1-4969-b8ba-5ff803545131\") " pod="openstack/barbican-worker-6b74d9cbc5-zq8tk" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.050464 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ae8142b-9631-41b8-94ea-cad294cf0fbf-logs\") pod \"placement-78c76fd9c4-8nvkz\" (UID: \"8ae8142b-9631-41b8-94ea-cad294cf0fbf\") " pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.050481 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ae8142b-9631-41b8-94ea-cad294cf0fbf-combined-ca-bundle\") pod \"placement-78c76fd9c4-8nvkz\" (UID: \"8ae8142b-9631-41b8-94ea-cad294cf0fbf\") " pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.050502 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ae8142b-9631-41b8-94ea-cad294cf0fbf-config-data\") pod \"placement-78c76fd9c4-8nvkz\" (UID: \"8ae8142b-9631-41b8-94ea-cad294cf0fbf\") " pod="openstack/placement-78c76fd9c4-8nvkz" Nov 
25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.050533 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97bcr\" (UniqueName: \"kubernetes.io/projected/e706e93d-1fc1-4969-b8ba-5ff803545131-kube-api-access-97bcr\") pod \"barbican-worker-6b74d9cbc5-zq8tk\" (UID: \"e706e93d-1fc1-4969-b8ba-5ff803545131\") " pod="openstack/barbican-worker-6b74d9cbc5-zq8tk" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.050547 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae8142b-9631-41b8-94ea-cad294cf0fbf-internal-tls-certs\") pod \"placement-78c76fd9c4-8nvkz\" (UID: \"8ae8142b-9631-41b8-94ea-cad294cf0fbf\") " pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.050580 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae8142b-9631-41b8-94ea-cad294cf0fbf-public-tls-certs\") pod \"placement-78c76fd9c4-8nvkz\" (UID: \"8ae8142b-9631-41b8-94ea-cad294cf0fbf\") " pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.058152 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6b74d9cbc5-zq8tk"] Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.058801 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ae8142b-9631-41b8-94ea-cad294cf0fbf-logs\") pod \"placement-78c76fd9c4-8nvkz\" (UID: \"8ae8142b-9631-41b8-94ea-cad294cf0fbf\") " pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.058991 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae8142b-9631-41b8-94ea-cad294cf0fbf-public-tls-certs\") pod 
\"placement-78c76fd9c4-8nvkz\" (UID: \"8ae8142b-9631-41b8-94ea-cad294cf0fbf\") " pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.060578 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ae8142b-9631-41b8-94ea-cad294cf0fbf-config-data\") pod \"placement-78c76fd9c4-8nvkz\" (UID: \"8ae8142b-9631-41b8-94ea-cad294cf0fbf\") " pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.062158 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ae8142b-9631-41b8-94ea-cad294cf0fbf-scripts\") pod \"placement-78c76fd9c4-8nvkz\" (UID: \"8ae8142b-9631-41b8-94ea-cad294cf0fbf\") " pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.069307 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ae8142b-9631-41b8-94ea-cad294cf0fbf-internal-tls-certs\") pod \"placement-78c76fd9c4-8nvkz\" (UID: \"8ae8142b-9631-41b8-94ea-cad294cf0fbf\") " pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.085407 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ae8142b-9631-41b8-94ea-cad294cf0fbf-combined-ca-bundle\") pod \"placement-78c76fd9c4-8nvkz\" (UID: \"8ae8142b-9631-41b8-94ea-cad294cf0fbf\") " pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.101497 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d548b9b8f-hjpgv"] Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.102027 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx6kw\" (UniqueName: 
\"kubernetes.io/projected/8ae8142b-9631-41b8-94ea-cad294cf0fbf-kube-api-access-qx6kw\") pod \"placement-78c76fd9c4-8nvkz\" (UID: \"8ae8142b-9631-41b8-94ea-cad294cf0fbf\") " pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.102905 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.124991 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d548b9b8f-hjpgv"] Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.125724 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.151924 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e706e93d-1fc1-4969-b8ba-5ff803545131-combined-ca-bundle\") pod \"barbican-worker-6b74d9cbc5-zq8tk\" (UID: \"e706e93d-1fc1-4969-b8ba-5ff803545131\") " pod="openstack/barbican-worker-6b74d9cbc5-zq8tk" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.174723 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e706e93d-1fc1-4969-b8ba-5ff803545131-config-data\") pod \"barbican-worker-6b74d9cbc5-zq8tk\" (UID: \"e706e93d-1fc1-4969-b8ba-5ff803545131\") " pod="openstack/barbican-worker-6b74d9cbc5-zq8tk" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.174776 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rxfx\" (UniqueName: \"kubernetes.io/projected/f63f14b8-9c07-4267-aaa0-ceac1d775c2c-kube-api-access-8rxfx\") pod \"barbican-keystone-listener-54895cb446-cqmz8\" (UID: \"f63f14b8-9c07-4267-aaa0-ceac1d775c2c\") " pod="openstack/barbican-keystone-listener-54895cb446-cqmz8" Nov 25 
07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.174839 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e706e93d-1fc1-4969-b8ba-5ff803545131-config-data-custom\") pod \"barbican-worker-6b74d9cbc5-zq8tk\" (UID: \"e706e93d-1fc1-4969-b8ba-5ff803545131\") " pod="openstack/barbican-worker-6b74d9cbc5-zq8tk" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.174872 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f63f14b8-9c07-4267-aaa0-ceac1d775c2c-config-data\") pod \"barbican-keystone-listener-54895cb446-cqmz8\" (UID: \"f63f14b8-9c07-4267-aaa0-ceac1d775c2c\") " pod="openstack/barbican-keystone-listener-54895cb446-cqmz8" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.174901 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e706e93d-1fc1-4969-b8ba-5ff803545131-logs\") pod \"barbican-worker-6b74d9cbc5-zq8tk\" (UID: \"e706e93d-1fc1-4969-b8ba-5ff803545131\") " pod="openstack/barbican-worker-6b74d9cbc5-zq8tk" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.174920 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f63f14b8-9c07-4267-aaa0-ceac1d775c2c-logs\") pod \"barbican-keystone-listener-54895cb446-cqmz8\" (UID: \"f63f14b8-9c07-4267-aaa0-ceac1d775c2c\") " pod="openstack/barbican-keystone-listener-54895cb446-cqmz8" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.166874 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e706e93d-1fc1-4969-b8ba-5ff803545131-combined-ca-bundle\") pod \"barbican-worker-6b74d9cbc5-zq8tk\" (UID: \"e706e93d-1fc1-4969-b8ba-5ff803545131\") " 
pod="openstack/barbican-worker-6b74d9cbc5-zq8tk" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.180224 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e706e93d-1fc1-4969-b8ba-5ff803545131-logs\") pod \"barbican-worker-6b74d9cbc5-zq8tk\" (UID: \"e706e93d-1fc1-4969-b8ba-5ff803545131\") " pod="openstack/barbican-worker-6b74d9cbc5-zq8tk" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.184968 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f63f14b8-9c07-4267-aaa0-ceac1d775c2c-config-data-custom\") pod \"barbican-keystone-listener-54895cb446-cqmz8\" (UID: \"f63f14b8-9c07-4267-aaa0-ceac1d775c2c\") " pod="openstack/barbican-keystone-listener-54895cb446-cqmz8" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.185016 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97bcr\" (UniqueName: \"kubernetes.io/projected/e706e93d-1fc1-4969-b8ba-5ff803545131-kube-api-access-97bcr\") pod \"barbican-worker-6b74d9cbc5-zq8tk\" (UID: \"e706e93d-1fc1-4969-b8ba-5ff803545131\") " pod="openstack/barbican-worker-6b74d9cbc5-zq8tk" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.185035 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f63f14b8-9c07-4267-aaa0-ceac1d775c2c-combined-ca-bundle\") pod \"barbican-keystone-listener-54895cb446-cqmz8\" (UID: \"f63f14b8-9c07-4267-aaa0-ceac1d775c2c\") " pod="openstack/barbican-keystone-listener-54895cb446-cqmz8" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.205566 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e706e93d-1fc1-4969-b8ba-5ff803545131-config-data\") pod \"barbican-worker-6b74d9cbc5-zq8tk\" 
(UID: \"e706e93d-1fc1-4969-b8ba-5ff803545131\") " pod="openstack/barbican-worker-6b74d9cbc5-zq8tk" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.209585 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e706e93d-1fc1-4969-b8ba-5ff803545131-config-data-custom\") pod \"barbican-worker-6b74d9cbc5-zq8tk\" (UID: \"e706e93d-1fc1-4969-b8ba-5ff803545131\") " pod="openstack/barbican-worker-6b74d9cbc5-zq8tk" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.223076 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97bcr\" (UniqueName: \"kubernetes.io/projected/e706e93d-1fc1-4969-b8ba-5ff803545131-kube-api-access-97bcr\") pod \"barbican-worker-6b74d9cbc5-zq8tk\" (UID: \"e706e93d-1fc1-4969-b8ba-5ff803545131\") " pod="openstack/barbican-worker-6b74d9cbc5-zq8tk" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.246621 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-774d658888-zs7d4"] Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.248387 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-774d658888-zs7d4" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.250360 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.286338 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f63f14b8-9c07-4267-aaa0-ceac1d775c2c-combined-ca-bundle\") pod \"barbican-keystone-listener-54895cb446-cqmz8\" (UID: \"f63f14b8-9c07-4267-aaa0-ceac1d775c2c\") " pod="openstack/barbican-keystone-listener-54895cb446-cqmz8" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.286671 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-ovsdbserver-nb\") pod \"dnsmasq-dns-7d548b9b8f-hjpgv\" (UID: \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\") " pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.286715 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-dns-svc\") pod \"dnsmasq-dns-7d548b9b8f-hjpgv\" (UID: \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\") " pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.286759 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rxfx\" (UniqueName: \"kubernetes.io/projected/f63f14b8-9c07-4267-aaa0-ceac1d775c2c-kube-api-access-8rxfx\") pod \"barbican-keystone-listener-54895cb446-cqmz8\" (UID: \"f63f14b8-9c07-4267-aaa0-ceac1d775c2c\") " pod="openstack/barbican-keystone-listener-54895cb446-cqmz8" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.286785 5043 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh9f7\" (UniqueName: \"kubernetes.io/projected/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-kube-api-access-jh9f7\") pod \"dnsmasq-dns-7d548b9b8f-hjpgv\" (UID: \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\") " pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.286836 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-config\") pod \"dnsmasq-dns-7d548b9b8f-hjpgv\" (UID: \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\") " pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.286851 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f63f14b8-9c07-4267-aaa0-ceac1d775c2c-config-data\") pod \"barbican-keystone-listener-54895cb446-cqmz8\" (UID: \"f63f14b8-9c07-4267-aaa0-ceac1d775c2c\") " pod="openstack/barbican-keystone-listener-54895cb446-cqmz8" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.286874 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f63f14b8-9c07-4267-aaa0-ceac1d775c2c-logs\") pod \"barbican-keystone-listener-54895cb446-cqmz8\" (UID: \"f63f14b8-9c07-4267-aaa0-ceac1d775c2c\") " pod="openstack/barbican-keystone-listener-54895cb446-cqmz8" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.286920 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-ovsdbserver-sb\") pod \"dnsmasq-dns-7d548b9b8f-hjpgv\" (UID: \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\") " pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.286939 
5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f63f14b8-9c07-4267-aaa0-ceac1d775c2c-config-data-custom\") pod \"barbican-keystone-listener-54895cb446-cqmz8\" (UID: \"f63f14b8-9c07-4267-aaa0-ceac1d775c2c\") " pod="openstack/barbican-keystone-listener-54895cb446-cqmz8" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.287705 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f63f14b8-9c07-4267-aaa0-ceac1d775c2c-logs\") pod \"barbican-keystone-listener-54895cb446-cqmz8\" (UID: \"f63f14b8-9c07-4267-aaa0-ceac1d775c2c\") " pod="openstack/barbican-keystone-listener-54895cb446-cqmz8" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.289902 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f63f14b8-9c07-4267-aaa0-ceac1d775c2c-config-data-custom\") pod \"barbican-keystone-listener-54895cb446-cqmz8\" (UID: \"f63f14b8-9c07-4267-aaa0-ceac1d775c2c\") " pod="openstack/barbican-keystone-listener-54895cb446-cqmz8" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.292733 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f63f14b8-9c07-4267-aaa0-ceac1d775c2c-combined-ca-bundle\") pod \"barbican-keystone-listener-54895cb446-cqmz8\" (UID: \"f63f14b8-9c07-4267-aaa0-ceac1d775c2c\") " pod="openstack/barbican-keystone-listener-54895cb446-cqmz8" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.294339 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-774d658888-zs7d4"] Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.295320 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f63f14b8-9c07-4267-aaa0-ceac1d775c2c-config-data\") pod 
\"barbican-keystone-listener-54895cb446-cqmz8\" (UID: \"f63f14b8-9c07-4267-aaa0-ceac1d775c2c\") " pod="openstack/barbican-keystone-listener-54895cb446-cqmz8" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.303520 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-79d9bc7db7-xzxqf"] Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.304099 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rxfx\" (UniqueName: \"kubernetes.io/projected/f63f14b8-9c07-4267-aaa0-ceac1d775c2c-kube-api-access-8rxfx\") pod \"barbican-keystone-listener-54895cb446-cqmz8\" (UID: \"f63f14b8-9c07-4267-aaa0-ceac1d775c2c\") " pod="openstack/barbican-keystone-listener-54895cb446-cqmz8" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.304825 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.307443 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.307588 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.307971 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-79d9bc7db7-xzxqf"] Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.308675 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.308786 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.308947 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xt29v" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.309129 5043 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.351845 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6b74d9cbc5-zq8tk" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.388515 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-dns-svc\") pod \"dnsmasq-dns-7d548b9b8f-hjpgv\" (UID: \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\") " pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.388562 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cddd01e7-479d-4917-a3e7-914cf051fcd0-config-data\") pod \"barbican-api-774d658888-zs7d4\" (UID: \"cddd01e7-479d-4917-a3e7-914cf051fcd0\") " pod="openstack/barbican-api-774d658888-zs7d4" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.388598 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8934111-2c35-4f5a-8b87-182b3fe54fdb-public-tls-certs\") pod \"keystone-79d9bc7db7-xzxqf\" (UID: \"e8934111-2c35-4f5a-8b87-182b3fe54fdb\") " pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.388704 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8934111-2c35-4f5a-8b87-182b3fe54fdb-internal-tls-certs\") pod \"keystone-79d9bc7db7-xzxqf\" (UID: \"e8934111-2c35-4f5a-8b87-182b3fe54fdb\") " pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.388725 5043 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e8934111-2c35-4f5a-8b87-182b3fe54fdb-credential-keys\") pod \"keystone-79d9bc7db7-xzxqf\" (UID: \"e8934111-2c35-4f5a-8b87-182b3fe54fdb\") " pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.388758 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8n9g\" (UniqueName: \"kubernetes.io/projected/e8934111-2c35-4f5a-8b87-182b3fe54fdb-kube-api-access-x8n9g\") pod \"keystone-79d9bc7db7-xzxqf\" (UID: \"e8934111-2c35-4f5a-8b87-182b3fe54fdb\") " pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.388776 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8934111-2c35-4f5a-8b87-182b3fe54fdb-fernet-keys\") pod \"keystone-79d9bc7db7-xzxqf\" (UID: \"e8934111-2c35-4f5a-8b87-182b3fe54fdb\") " pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.388810 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh9f7\" (UniqueName: \"kubernetes.io/projected/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-kube-api-access-jh9f7\") pod \"dnsmasq-dns-7d548b9b8f-hjpgv\" (UID: \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\") " pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.388856 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cddd01e7-479d-4917-a3e7-914cf051fcd0-config-data-custom\") pod \"barbican-api-774d658888-zs7d4\" (UID: \"cddd01e7-479d-4917-a3e7-914cf051fcd0\") " pod="openstack/barbican-api-774d658888-zs7d4" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.388879 
5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvb8w\" (UniqueName: \"kubernetes.io/projected/cddd01e7-479d-4917-a3e7-914cf051fcd0-kube-api-access-hvb8w\") pod \"barbican-api-774d658888-zs7d4\" (UID: \"cddd01e7-479d-4917-a3e7-914cf051fcd0\") " pod="openstack/barbican-api-774d658888-zs7d4" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.388929 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-config\") pod \"dnsmasq-dns-7d548b9b8f-hjpgv\" (UID: \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\") " pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.388965 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8934111-2c35-4f5a-8b87-182b3fe54fdb-scripts\") pod \"keystone-79d9bc7db7-xzxqf\" (UID: \"e8934111-2c35-4f5a-8b87-182b3fe54fdb\") " pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.389030 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-ovsdbserver-sb\") pod \"dnsmasq-dns-7d548b9b8f-hjpgv\" (UID: \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\") " pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.389079 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cddd01e7-479d-4917-a3e7-914cf051fcd0-combined-ca-bundle\") pod \"barbican-api-774d658888-zs7d4\" (UID: \"cddd01e7-479d-4917-a3e7-914cf051fcd0\") " pod="openstack/barbican-api-774d658888-zs7d4" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.389133 5043 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8934111-2c35-4f5a-8b87-182b3fe54fdb-config-data\") pod \"keystone-79d9bc7db7-xzxqf\" (UID: \"e8934111-2c35-4f5a-8b87-182b3fe54fdb\") " pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.389185 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-ovsdbserver-nb\") pod \"dnsmasq-dns-7d548b9b8f-hjpgv\" (UID: \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\") " pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.389214 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8934111-2c35-4f5a-8b87-182b3fe54fdb-combined-ca-bundle\") pod \"keystone-79d9bc7db7-xzxqf\" (UID: \"e8934111-2c35-4f5a-8b87-182b3fe54fdb\") " pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.389264 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cddd01e7-479d-4917-a3e7-914cf051fcd0-logs\") pod \"barbican-api-774d658888-zs7d4\" (UID: \"cddd01e7-479d-4917-a3e7-914cf051fcd0\") " pod="openstack/barbican-api-774d658888-zs7d4" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.390652 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-dns-svc\") pod \"dnsmasq-dns-7d548b9b8f-hjpgv\" (UID: \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\") " pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.391405 5043 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-ovsdbserver-sb\") pod \"dnsmasq-dns-7d548b9b8f-hjpgv\" (UID: \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\") " pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.391943 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-ovsdbserver-nb\") pod \"dnsmasq-dns-7d548b9b8f-hjpgv\" (UID: \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\") " pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.392477 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-config\") pod \"dnsmasq-dns-7d548b9b8f-hjpgv\" (UID: \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\") " pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.411284 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh9f7\" (UniqueName: \"kubernetes.io/projected/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-kube-api-access-jh9f7\") pod \"dnsmasq-dns-7d548b9b8f-hjpgv\" (UID: \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\") " pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.490972 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cddd01e7-479d-4917-a3e7-914cf051fcd0-config-data-custom\") pod \"barbican-api-774d658888-zs7d4\" (UID: \"cddd01e7-479d-4917-a3e7-914cf051fcd0\") " pod="openstack/barbican-api-774d658888-zs7d4" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.491036 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvb8w\" (UniqueName: 
\"kubernetes.io/projected/cddd01e7-479d-4917-a3e7-914cf051fcd0-kube-api-access-hvb8w\") pod \"barbican-api-774d658888-zs7d4\" (UID: \"cddd01e7-479d-4917-a3e7-914cf051fcd0\") " pod="openstack/barbican-api-774d658888-zs7d4" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.491067 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8934111-2c35-4f5a-8b87-182b3fe54fdb-scripts\") pod \"keystone-79d9bc7db7-xzxqf\" (UID: \"e8934111-2c35-4f5a-8b87-182b3fe54fdb\") " pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.491138 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cddd01e7-479d-4917-a3e7-914cf051fcd0-combined-ca-bundle\") pod \"barbican-api-774d658888-zs7d4\" (UID: \"cddd01e7-479d-4917-a3e7-914cf051fcd0\") " pod="openstack/barbican-api-774d658888-zs7d4" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.491190 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8934111-2c35-4f5a-8b87-182b3fe54fdb-config-data\") pod \"keystone-79d9bc7db7-xzxqf\" (UID: \"e8934111-2c35-4f5a-8b87-182b3fe54fdb\") " pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.491212 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8934111-2c35-4f5a-8b87-182b3fe54fdb-combined-ca-bundle\") pod \"keystone-79d9bc7db7-xzxqf\" (UID: \"e8934111-2c35-4f5a-8b87-182b3fe54fdb\") " pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.491231 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cddd01e7-479d-4917-a3e7-914cf051fcd0-logs\") pod 
\"barbican-api-774d658888-zs7d4\" (UID: \"cddd01e7-479d-4917-a3e7-914cf051fcd0\") " pod="openstack/barbican-api-774d658888-zs7d4" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.491285 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cddd01e7-479d-4917-a3e7-914cf051fcd0-config-data\") pod \"barbican-api-774d658888-zs7d4\" (UID: \"cddd01e7-479d-4917-a3e7-914cf051fcd0\") " pod="openstack/barbican-api-774d658888-zs7d4" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.491300 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8934111-2c35-4f5a-8b87-182b3fe54fdb-public-tls-certs\") pod \"keystone-79d9bc7db7-xzxqf\" (UID: \"e8934111-2c35-4f5a-8b87-182b3fe54fdb\") " pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.491338 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8934111-2c35-4f5a-8b87-182b3fe54fdb-internal-tls-certs\") pod \"keystone-79d9bc7db7-xzxqf\" (UID: \"e8934111-2c35-4f5a-8b87-182b3fe54fdb\") " pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.491358 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e8934111-2c35-4f5a-8b87-182b3fe54fdb-credential-keys\") pod \"keystone-79d9bc7db7-xzxqf\" (UID: \"e8934111-2c35-4f5a-8b87-182b3fe54fdb\") " pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.491373 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8n9g\" (UniqueName: \"kubernetes.io/projected/e8934111-2c35-4f5a-8b87-182b3fe54fdb-kube-api-access-x8n9g\") pod \"keystone-79d9bc7db7-xzxqf\" (UID: 
\"e8934111-2c35-4f5a-8b87-182b3fe54fdb\") " pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.491409 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8934111-2c35-4f5a-8b87-182b3fe54fdb-fernet-keys\") pod \"keystone-79d9bc7db7-xzxqf\" (UID: \"e8934111-2c35-4f5a-8b87-182b3fe54fdb\") " pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.496623 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8934111-2c35-4f5a-8b87-182b3fe54fdb-scripts\") pod \"keystone-79d9bc7db7-xzxqf\" (UID: \"e8934111-2c35-4f5a-8b87-182b3fe54fdb\") " pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.496638 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cddd01e7-479d-4917-a3e7-914cf051fcd0-logs\") pod \"barbican-api-774d658888-zs7d4\" (UID: \"cddd01e7-479d-4917-a3e7-914cf051fcd0\") " pod="openstack/barbican-api-774d658888-zs7d4" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.496833 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8934111-2c35-4f5a-8b87-182b3fe54fdb-internal-tls-certs\") pod \"keystone-79d9bc7db7-xzxqf\" (UID: \"e8934111-2c35-4f5a-8b87-182b3fe54fdb\") " pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.497179 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8934111-2c35-4f5a-8b87-182b3fe54fdb-combined-ca-bundle\") pod \"keystone-79d9bc7db7-xzxqf\" (UID: \"e8934111-2c35-4f5a-8b87-182b3fe54fdb\") " pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 
07:34:02.497394 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8934111-2c35-4f5a-8b87-182b3fe54fdb-fernet-keys\") pod \"keystone-79d9bc7db7-xzxqf\" (UID: \"e8934111-2c35-4f5a-8b87-182b3fe54fdb\") " pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.503062 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cddd01e7-479d-4917-a3e7-914cf051fcd0-config-data-custom\") pod \"barbican-api-774d658888-zs7d4\" (UID: \"cddd01e7-479d-4917-a3e7-914cf051fcd0\") " pod="openstack/barbican-api-774d658888-zs7d4" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.503959 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8934111-2c35-4f5a-8b87-182b3fe54fdb-config-data\") pod \"keystone-79d9bc7db7-xzxqf\" (UID: \"e8934111-2c35-4f5a-8b87-182b3fe54fdb\") " pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.504223 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cddd01e7-479d-4917-a3e7-914cf051fcd0-combined-ca-bundle\") pod \"barbican-api-774d658888-zs7d4\" (UID: \"cddd01e7-479d-4917-a3e7-914cf051fcd0\") " pod="openstack/barbican-api-774d658888-zs7d4" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.506213 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8934111-2c35-4f5a-8b87-182b3fe54fdb-public-tls-certs\") pod \"keystone-79d9bc7db7-xzxqf\" (UID: \"e8934111-2c35-4f5a-8b87-182b3fe54fdb\") " pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.506264 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/e8934111-2c35-4f5a-8b87-182b3fe54fdb-credential-keys\") pod \"keystone-79d9bc7db7-xzxqf\" (UID: \"e8934111-2c35-4f5a-8b87-182b3fe54fdb\") " pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.506517 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cddd01e7-479d-4917-a3e7-914cf051fcd0-config-data\") pod \"barbican-api-774d658888-zs7d4\" (UID: \"cddd01e7-479d-4917-a3e7-914cf051fcd0\") " pod="openstack/barbican-api-774d658888-zs7d4" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.511075 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvb8w\" (UniqueName: \"kubernetes.io/projected/cddd01e7-479d-4917-a3e7-914cf051fcd0-kube-api-access-hvb8w\") pod \"barbican-api-774d658888-zs7d4\" (UID: \"cddd01e7-479d-4917-a3e7-914cf051fcd0\") " pod="openstack/barbican-api-774d658888-zs7d4" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.518212 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8n9g\" (UniqueName: \"kubernetes.io/projected/e8934111-2c35-4f5a-8b87-182b3fe54fdb-kube-api-access-x8n9g\") pod \"keystone-79d9bc7db7-xzxqf\" (UID: \"e8934111-2c35-4f5a-8b87-182b3fe54fdb\") " pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.551079 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-54895cb446-cqmz8" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.571980 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.580324 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-774d658888-zs7d4" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.622897 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.727203 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78c76fd9c4-8nvkz"] Nov 25 07:34:02 crc kubenswrapper[5043]: I1125 07:34:02.861010 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6b74d9cbc5-zq8tk"] Nov 25 07:34:02 crc kubenswrapper[5043]: W1125 07:34:02.909350 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode706e93d_1fc1_4969_b8ba_5ff803545131.slice/crio-3956a125578fc249b79973c95bed8a361fe73b7b1427790f309fe173f4bd8ed7 WatchSource:0}: Error finding container 3956a125578fc249b79973c95bed8a361fe73b7b1427790f309fe173f4bd8ed7: Status 404 returned error can't find the container with id 3956a125578fc249b79973c95bed8a361fe73b7b1427790f309fe173f4bd8ed7 Nov 25 07:34:03 crc kubenswrapper[5043]: I1125 07:34:03.065166 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-54895cb446-cqmz8"] Nov 25 07:34:03 crc kubenswrapper[5043]: W1125 07:34:03.091111 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf63f14b8_9c07_4267_aaa0_ceac1d775c2c.slice/crio-e8a6efab3ecb4f289a1c48bb4578ffa2fe05e8feb070877d5bb14b050e4a4c48 WatchSource:0}: Error finding container e8a6efab3ecb4f289a1c48bb4578ffa2fe05e8feb070877d5bb14b050e4a4c48: Status 404 returned error can't find the container with id e8a6efab3ecb4f289a1c48bb4578ffa2fe05e8feb070877d5bb14b050e4a4c48 Nov 25 07:34:03 crc kubenswrapper[5043]: I1125 07:34:03.251051 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-79d9bc7db7-xzxqf"] Nov 25 07:34:03 crc kubenswrapper[5043]: I1125 07:34:03.258947 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d548b9b8f-hjpgv"] Nov 25 07:34:03 crc kubenswrapper[5043]: W1125 07:34:03.264487 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8934111_2c35_4f5a_8b87_182b3fe54fdb.slice/crio-7d003dd4b42eabefc5ab390d525b5191af97dd52d7a54a238055be2436cdd83f WatchSource:0}: Error finding container 7d003dd4b42eabefc5ab390d525b5191af97dd52d7a54a238055be2436cdd83f: Status 404 returned error can't find the container with id 7d003dd4b42eabefc5ab390d525b5191af97dd52d7a54a238055be2436cdd83f Nov 25 07:34:03 crc kubenswrapper[5043]: W1125 07:34:03.265044 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd1a4fec_6d8d_4ff8_a015_eaf438d76965.slice/crio-80c1198b993d6f9730d793162f9219cae6eae1227489609328a9e66207e6d789 WatchSource:0}: Error finding container 80c1198b993d6f9730d793162f9219cae6eae1227489609328a9e66207e6d789: Status 404 returned error can't find the container with id 80c1198b993d6f9730d793162f9219cae6eae1227489609328a9e66207e6d789 Nov 25 07:34:03 crc kubenswrapper[5043]: I1125 07:34:03.283859 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-774d658888-zs7d4"] Nov 25 07:34:03 crc kubenswrapper[5043]: I1125 07:34:03.770338 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78c76fd9c4-8nvkz" event={"ID":"8ae8142b-9631-41b8-94ea-cad294cf0fbf","Type":"ContainerStarted","Data":"e811d6245d9eb731832562d20670cefb2e198850494b2326d90fce4556556a63"} Nov 25 07:34:03 crc kubenswrapper[5043]: I1125 07:34:03.770776 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78c76fd9c4-8nvkz" 
event={"ID":"8ae8142b-9631-41b8-94ea-cad294cf0fbf","Type":"ContainerStarted","Data":"8967c2f82c2d7775b2acc03d6009f74010bdef0d338f2c38024a5aa88200c693"} Nov 25 07:34:03 crc kubenswrapper[5043]: I1125 07:34:03.774387 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-774d658888-zs7d4" event={"ID":"cddd01e7-479d-4917-a3e7-914cf051fcd0","Type":"ContainerStarted","Data":"ff50d4f1bc85d719c316e3c8400b27faa4ca00827b870219e2e5527d73b548ec"} Nov 25 07:34:03 crc kubenswrapper[5043]: I1125 07:34:03.776264 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-79d9bc7db7-xzxqf" event={"ID":"e8934111-2c35-4f5a-8b87-182b3fe54fdb","Type":"ContainerStarted","Data":"7d003dd4b42eabefc5ab390d525b5191af97dd52d7a54a238055be2436cdd83f"} Nov 25 07:34:03 crc kubenswrapper[5043]: I1125 07:34:03.777467 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54895cb446-cqmz8" event={"ID":"f63f14b8-9c07-4267-aaa0-ceac1d775c2c","Type":"ContainerStarted","Data":"e8a6efab3ecb4f289a1c48bb4578ffa2fe05e8feb070877d5bb14b050e4a4c48"} Nov 25 07:34:03 crc kubenswrapper[5043]: I1125 07:34:03.778655 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b74d9cbc5-zq8tk" event={"ID":"e706e93d-1fc1-4969-b8ba-5ff803545131","Type":"ContainerStarted","Data":"3956a125578fc249b79973c95bed8a361fe73b7b1427790f309fe173f4bd8ed7"} Nov 25 07:34:03 crc kubenswrapper[5043]: I1125 07:34:03.779469 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" event={"ID":"bd1a4fec-6d8d-4ff8-a015-eaf438d76965","Type":"ContainerStarted","Data":"80c1198b993d6f9730d793162f9219cae6eae1227489609328a9e66207e6d789"} Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.614555 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7d74b989db-9zq82"] Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.616292 5043 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.618823 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.625701 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7d74b989db-9zq82"] Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.634750 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.738472 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6eae10-5480-4b54-8cf5-1fd717d00c0e-combined-ca-bundle\") pod \"barbican-api-7d74b989db-9zq82\" (UID: \"6c6eae10-5480-4b54-8cf5-1fd717d00c0e\") " pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.738564 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6eae10-5480-4b54-8cf5-1fd717d00c0e-config-data\") pod \"barbican-api-7d74b989db-9zq82\" (UID: \"6c6eae10-5480-4b54-8cf5-1fd717d00c0e\") " pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.738597 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6eae10-5480-4b54-8cf5-1fd717d00c0e-public-tls-certs\") pod \"barbican-api-7d74b989db-9zq82\" (UID: \"6c6eae10-5480-4b54-8cf5-1fd717d00c0e\") " pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.738681 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c6eae10-5480-4b54-8cf5-1fd717d00c0e-config-data-custom\") pod \"barbican-api-7d74b989db-9zq82\" (UID: \"6c6eae10-5480-4b54-8cf5-1fd717d00c0e\") " pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.738723 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6eae10-5480-4b54-8cf5-1fd717d00c0e-internal-tls-certs\") pod \"barbican-api-7d74b989db-9zq82\" (UID: \"6c6eae10-5480-4b54-8cf5-1fd717d00c0e\") " pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.738772 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c6eae10-5480-4b54-8cf5-1fd717d00c0e-logs\") pod \"barbican-api-7d74b989db-9zq82\" (UID: \"6c6eae10-5480-4b54-8cf5-1fd717d00c0e\") " pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.738853 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfzzv\" (UniqueName: \"kubernetes.io/projected/6c6eae10-5480-4b54-8cf5-1fd717d00c0e-kube-api-access-pfzzv\") pod \"barbican-api-7d74b989db-9zq82\" (UID: \"6c6eae10-5480-4b54-8cf5-1fd717d00c0e\") " pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.793402 5043 generic.go:334] "Generic (PLEG): container finished" podID="bd1a4fec-6d8d-4ff8-a015-eaf438d76965" containerID="d4dacfd9a229030ea71710a5e7f3d92776ec10a07d9c5e4f30afaf694d407647" exitCode=0 Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.793457 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" 
event={"ID":"bd1a4fec-6d8d-4ff8-a015-eaf438d76965","Type":"ContainerDied","Data":"d4dacfd9a229030ea71710a5e7f3d92776ec10a07d9c5e4f30afaf694d407647"} Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.821970 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78c76fd9c4-8nvkz" event={"ID":"8ae8142b-9631-41b8-94ea-cad294cf0fbf","Type":"ContainerStarted","Data":"3ba469a11a87460356a9a297fb67d6663b4a8d27730f0ec7872881d7d7703eae"} Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.822981 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.823006 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.841636 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6eae10-5480-4b54-8cf5-1fd717d00c0e-internal-tls-certs\") pod \"barbican-api-7d74b989db-9zq82\" (UID: \"6c6eae10-5480-4b54-8cf5-1fd717d00c0e\") " pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.841735 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c6eae10-5480-4b54-8cf5-1fd717d00c0e-logs\") pod \"barbican-api-7d74b989db-9zq82\" (UID: \"6c6eae10-5480-4b54-8cf5-1fd717d00c0e\") " pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.841819 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfzzv\" (UniqueName: \"kubernetes.io/projected/6c6eae10-5480-4b54-8cf5-1fd717d00c0e-kube-api-access-pfzzv\") pod \"barbican-api-7d74b989db-9zq82\" (UID: \"6c6eae10-5480-4b54-8cf5-1fd717d00c0e\") " pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:04 
crc kubenswrapper[5043]: I1125 07:34:04.841849 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6eae10-5480-4b54-8cf5-1fd717d00c0e-combined-ca-bundle\") pod \"barbican-api-7d74b989db-9zq82\" (UID: \"6c6eae10-5480-4b54-8cf5-1fd717d00c0e\") " pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.841928 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6eae10-5480-4b54-8cf5-1fd717d00c0e-config-data\") pod \"barbican-api-7d74b989db-9zq82\" (UID: \"6c6eae10-5480-4b54-8cf5-1fd717d00c0e\") " pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.841965 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6eae10-5480-4b54-8cf5-1fd717d00c0e-public-tls-certs\") pod \"barbican-api-7d74b989db-9zq82\" (UID: \"6c6eae10-5480-4b54-8cf5-1fd717d00c0e\") " pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.841998 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c6eae10-5480-4b54-8cf5-1fd717d00c0e-config-data-custom\") pod \"barbican-api-7d74b989db-9zq82\" (UID: \"6c6eae10-5480-4b54-8cf5-1fd717d00c0e\") " pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.848940 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-774d658888-zs7d4" event={"ID":"cddd01e7-479d-4917-a3e7-914cf051fcd0","Type":"ContainerStarted","Data":"b9897c40811d196acce9fa7c4386b05529caafdbe5476a14f496848e499ddc6b"} Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.849020 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-774d658888-zs7d4" event={"ID":"cddd01e7-479d-4917-a3e7-914cf051fcd0","Type":"ContainerStarted","Data":"9e4805cb391773500632e38c9d12656e8fa86efb50cd12e6d14ee1cf90011b1f"} Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.851333 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6eae10-5480-4b54-8cf5-1fd717d00c0e-internal-tls-certs\") pod \"barbican-api-7d74b989db-9zq82\" (UID: \"6c6eae10-5480-4b54-8cf5-1fd717d00c0e\") " pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.851768 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c6eae10-5480-4b54-8cf5-1fd717d00c0e-logs\") pod \"barbican-api-7d74b989db-9zq82\" (UID: \"6c6eae10-5480-4b54-8cf5-1fd717d00c0e\") " pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.851825 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-774d658888-zs7d4" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.853656 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-774d658888-zs7d4" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.859235 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c6eae10-5480-4b54-8cf5-1fd717d00c0e-config-data-custom\") pod \"barbican-api-7d74b989db-9zq82\" (UID: \"6c6eae10-5480-4b54-8cf5-1fd717d00c0e\") " pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.860253 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6eae10-5480-4b54-8cf5-1fd717d00c0e-config-data\") pod \"barbican-api-7d74b989db-9zq82\" (UID: 
\"6c6eae10-5480-4b54-8cf5-1fd717d00c0e\") " pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.862671 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-79d9bc7db7-xzxqf" event={"ID":"e8934111-2c35-4f5a-8b87-182b3fe54fdb","Type":"ContainerStarted","Data":"6745fd3ae7be342d9da80c0c407d8617168e65e7c6b6125e713f4028709f1a22"} Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.863404 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.868180 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6eae10-5480-4b54-8cf5-1fd717d00c0e-combined-ca-bundle\") pod \"barbican-api-7d74b989db-9zq82\" (UID: \"6c6eae10-5480-4b54-8cf5-1fd717d00c0e\") " pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.869058 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c6eae10-5480-4b54-8cf5-1fd717d00c0e-public-tls-certs\") pod \"barbican-api-7d74b989db-9zq82\" (UID: \"6c6eae10-5480-4b54-8cf5-1fd717d00c0e\") " pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.871413 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfzzv\" (UniqueName: \"kubernetes.io/projected/6c6eae10-5480-4b54-8cf5-1fd717d00c0e-kube-api-access-pfzzv\") pod \"barbican-api-7d74b989db-9zq82\" (UID: \"6c6eae10-5480-4b54-8cf5-1fd717d00c0e\") " pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.871874 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-78c76fd9c4-8nvkz" podStartSLOduration=3.871863525 podStartE2EDuration="3.871863525s" 
podCreationTimestamp="2025-11-25 07:34:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:34:04.849115525 +0000 UTC m=+1109.017311266" watchObservedRunningTime="2025-11-25 07:34:04.871863525 +0000 UTC m=+1109.040059246" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.888925 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-774d658888-zs7d4" podStartSLOduration=2.8889090619999998 podStartE2EDuration="2.888909062s" podCreationTimestamp="2025-11-25 07:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:34:04.888245935 +0000 UTC m=+1109.056441656" watchObservedRunningTime="2025-11-25 07:34:04.888909062 +0000 UTC m=+1109.057104783" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.933953 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-79d9bc7db7-xzxqf" podStartSLOduration=2.933933169 podStartE2EDuration="2.933933169s" podCreationTimestamp="2025-11-25 07:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:34:04.908445166 +0000 UTC m=+1109.076640897" watchObservedRunningTime="2025-11-25 07:34:04.933933169 +0000 UTC m=+1109.102128890" Nov 25 07:34:04 crc kubenswrapper[5043]: I1125 07:34:04.936998 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:05 crc kubenswrapper[5043]: I1125 07:34:05.882070 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" event={"ID":"bd1a4fec-6d8d-4ff8-a015-eaf438d76965","Type":"ContainerStarted","Data":"0fc86210600a5ee4c5aa24a4c1a56c2c65771be667f47e6abcf32f6dc5b17553"} Nov 25 07:34:05 crc kubenswrapper[5043]: I1125 07:34:05.884539 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" Nov 25 07:34:05 crc kubenswrapper[5043]: I1125 07:34:05.885029 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:34:05 crc kubenswrapper[5043]: I1125 07:34:05.886827 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:34:05 crc kubenswrapper[5043]: I1125 07:34:05.888379 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-b5bc7cfb-2sfts" podUID="9955ab7e-1d74-461a-a9b2-73e9f82d48fe" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Nov 25 07:34:05 crc kubenswrapper[5043]: I1125 07:34:05.915970 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" podStartSLOduration=3.915948604 podStartE2EDuration="3.915948604s" podCreationTimestamp="2025-11-25 07:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:34:05.908920666 +0000 UTC m=+1110.077116387" watchObservedRunningTime="2025-11-25 07:34:05.915948604 +0000 UTC m=+1110.084144325" Nov 25 07:34:06 crc kubenswrapper[5043]: I1125 07:34:06.003057 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:34:06 crc kubenswrapper[5043]: I1125 07:34:06.003115 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:34:06 crc kubenswrapper[5043]: I1125 07:34:06.004507 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5f67c4b5d4-f96jj" podUID="13e8a8ee-bfe8-415b-b76f-89d7d7296659" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Nov 25 07:34:06 crc kubenswrapper[5043]: I1125 07:34:06.022344 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7d74b989db-9zq82"] Nov 25 07:34:06 crc kubenswrapper[5043]: W1125 07:34:06.201447 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c6eae10_5480_4b54_8cf5_1fd717d00c0e.slice/crio-f83ebb87838714cd100c39ac272de06e9b99cb3a522b81ded2fe5609ac4f8675 WatchSource:0}: Error finding container f83ebb87838714cd100c39ac272de06e9b99cb3a522b81ded2fe5609ac4f8675: Status 404 returned error can't find the container with id f83ebb87838714cd100c39ac272de06e9b99cb3a522b81ded2fe5609ac4f8675 Nov 25 07:34:06 crc kubenswrapper[5043]: I1125 07:34:06.912123 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d74b989db-9zq82" event={"ID":"6c6eae10-5480-4b54-8cf5-1fd717d00c0e","Type":"ContainerStarted","Data":"94c8feef939d3109806d2429a03f4d57374171c12dd6fe9eb60ac160f8ac4593"} Nov 25 07:34:06 crc kubenswrapper[5043]: I1125 07:34:06.912926 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d74b989db-9zq82" event={"ID":"6c6eae10-5480-4b54-8cf5-1fd717d00c0e","Type":"ContainerStarted","Data":"2168ba474b60ce9b2704b99d949adea0280b3a8fe80a6027d1dda81c39e065e1"} Nov 25 07:34:06 crc kubenswrapper[5043]: 
I1125 07:34:06.912945 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d74b989db-9zq82" event={"ID":"6c6eae10-5480-4b54-8cf5-1fd717d00c0e","Type":"ContainerStarted","Data":"f83ebb87838714cd100c39ac272de06e9b99cb3a522b81ded2fe5609ac4f8675"} Nov 25 07:34:06 crc kubenswrapper[5043]: I1125 07:34:06.913005 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:06 crc kubenswrapper[5043]: I1125 07:34:06.913026 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:06 crc kubenswrapper[5043]: I1125 07:34:06.919811 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54895cb446-cqmz8" event={"ID":"f63f14b8-9c07-4267-aaa0-ceac1d775c2c","Type":"ContainerStarted","Data":"facd9d005ef4cbd624fa52eaf15c308e764fe8de51d2cbab80acc3c43e3d7b58"} Nov 25 07:34:06 crc kubenswrapper[5043]: I1125 07:34:06.919842 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54895cb446-cqmz8" event={"ID":"f63f14b8-9c07-4267-aaa0-ceac1d775c2c","Type":"ContainerStarted","Data":"f0abf9143abd9ec97d8a88baf26d67ce6e971369cb1e4c4b547646fc9f7a21f9"} Nov 25 07:34:06 crc kubenswrapper[5043]: I1125 07:34:06.930814 5043 generic.go:334] "Generic (PLEG): container finished" podID="0360da29-fc4a-44ea-9d0e-e446d69037bc" containerID="49a8893f812fa5273ec4b7368978d5da18eacfb331f5a18927d9d709a7ebc952" exitCode=0 Nov 25 07:34:06 crc kubenswrapper[5043]: I1125 07:34:06.930892 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tnm6z" event={"ID":"0360da29-fc4a-44ea-9d0e-e446d69037bc","Type":"ContainerDied","Data":"49a8893f812fa5273ec4b7368978d5da18eacfb331f5a18927d9d709a7ebc952"} Nov 25 07:34:06 crc kubenswrapper[5043]: I1125 07:34:06.937666 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-api-7d74b989db-9zq82" podStartSLOduration=2.936904753 podStartE2EDuration="2.936904753s" podCreationTimestamp="2025-11-25 07:34:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:34:06.935096365 +0000 UTC m=+1111.103292086" watchObservedRunningTime="2025-11-25 07:34:06.936904753 +0000 UTC m=+1111.105100484" Nov 25 07:34:06 crc kubenswrapper[5043]: I1125 07:34:06.969191 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b74d9cbc5-zq8tk" event={"ID":"e706e93d-1fc1-4969-b8ba-5ff803545131","Type":"ContainerStarted","Data":"56a9c13e1cfb0c3248ba9cf22244e21756e5f14131cb36ff33c7ca7e2ee31169"} Nov 25 07:34:06 crc kubenswrapper[5043]: I1125 07:34:06.969300 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6b74d9cbc5-zq8tk" event={"ID":"e706e93d-1fc1-4969-b8ba-5ff803545131","Type":"ContainerStarted","Data":"80b91af3a9e7014077c0d2f9d3c979b7f3cec46aa6072fe8ce2e3c46542c5618"} Nov 25 07:34:07 crc kubenswrapper[5043]: I1125 07:34:07.007731 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-54895cb446-cqmz8" podStartSLOduration=2.844819923 podStartE2EDuration="6.007702461s" podCreationTimestamp="2025-11-25 07:34:01 +0000 UTC" firstStartedPulling="2025-11-25 07:34:03.095115029 +0000 UTC m=+1107.263310760" lastFinishedPulling="2025-11-25 07:34:06.257997577 +0000 UTC m=+1110.426193298" observedRunningTime="2025-11-25 07:34:06.974940833 +0000 UTC m=+1111.143136554" watchObservedRunningTime="2025-11-25 07:34:07.007702461 +0000 UTC m=+1111.175898182" Nov 25 07:34:07 crc kubenswrapper[5043]: I1125 07:34:07.036337 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6b74d9cbc5-zq8tk" podStartSLOduration=2.69669462 podStartE2EDuration="6.036310549s" podCreationTimestamp="2025-11-25 
07:34:01 +0000 UTC" firstStartedPulling="2025-11-25 07:34:02.912397138 +0000 UTC m=+1107.080592859" lastFinishedPulling="2025-11-25 07:34:06.252013027 +0000 UTC m=+1110.420208788" observedRunningTime="2025-11-25 07:34:06.998389642 +0000 UTC m=+1111.166585363" watchObservedRunningTime="2025-11-25 07:34:07.036310549 +0000 UTC m=+1111.204506270" Nov 25 07:34:10 crc kubenswrapper[5043]: I1125 07:34:10.519448 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tnm6z" Nov 25 07:34:10 crc kubenswrapper[5043]: I1125 07:34:10.668866 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0360da29-fc4a-44ea-9d0e-e446d69037bc-config\") pod \"0360da29-fc4a-44ea-9d0e-e446d69037bc\" (UID: \"0360da29-fc4a-44ea-9d0e-e446d69037bc\") " Nov 25 07:34:10 crc kubenswrapper[5043]: I1125 07:34:10.669020 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv7sx\" (UniqueName: \"kubernetes.io/projected/0360da29-fc4a-44ea-9d0e-e446d69037bc-kube-api-access-dv7sx\") pod \"0360da29-fc4a-44ea-9d0e-e446d69037bc\" (UID: \"0360da29-fc4a-44ea-9d0e-e446d69037bc\") " Nov 25 07:34:10 crc kubenswrapper[5043]: I1125 07:34:10.669164 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0360da29-fc4a-44ea-9d0e-e446d69037bc-combined-ca-bundle\") pod \"0360da29-fc4a-44ea-9d0e-e446d69037bc\" (UID: \"0360da29-fc4a-44ea-9d0e-e446d69037bc\") " Nov 25 07:34:10 crc kubenswrapper[5043]: I1125 07:34:10.688218 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0360da29-fc4a-44ea-9d0e-e446d69037bc-kube-api-access-dv7sx" (OuterVolumeSpecName: "kube-api-access-dv7sx") pod "0360da29-fc4a-44ea-9d0e-e446d69037bc" (UID: "0360da29-fc4a-44ea-9d0e-e446d69037bc"). InnerVolumeSpecName "kube-api-access-dv7sx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:34:10 crc kubenswrapper[5043]: I1125 07:34:10.702112 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0360da29-fc4a-44ea-9d0e-e446d69037bc-config" (OuterVolumeSpecName: "config") pod "0360da29-fc4a-44ea-9d0e-e446d69037bc" (UID: "0360da29-fc4a-44ea-9d0e-e446d69037bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:10 crc kubenswrapper[5043]: I1125 07:34:10.710017 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0360da29-fc4a-44ea-9d0e-e446d69037bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0360da29-fc4a-44ea-9d0e-e446d69037bc" (UID: "0360da29-fc4a-44ea-9d0e-e446d69037bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:10 crc kubenswrapper[5043]: I1125 07:34:10.770827 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0360da29-fc4a-44ea-9d0e-e446d69037bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:10 crc kubenswrapper[5043]: I1125 07:34:10.770864 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0360da29-fc4a-44ea-9d0e-e446d69037bc-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:10 crc kubenswrapper[5043]: I1125 07:34:10.770874 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv7sx\" (UniqueName: \"kubernetes.io/projected/0360da29-fc4a-44ea-9d0e-e446d69037bc-kube-api-access-dv7sx\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:11 crc kubenswrapper[5043]: I1125 07:34:11.014365 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tnm6z" 
event={"ID":"0360da29-fc4a-44ea-9d0e-e446d69037bc","Type":"ContainerDied","Data":"a1aa78d4b891551a71753a3471a9d9c18e3716d5bd69fbf69f4dfc402fff4761"} Nov 25 07:34:11 crc kubenswrapper[5043]: I1125 07:34:11.014411 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1aa78d4b891551a71753a3471a9d9c18e3716d5bd69fbf69f4dfc402fff4761" Nov 25 07:34:11 crc kubenswrapper[5043]: I1125 07:34:11.014459 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tnm6z" Nov 25 07:34:11 crc kubenswrapper[5043]: I1125 07:34:11.731392 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d548b9b8f-hjpgv"] Nov 25 07:34:11 crc kubenswrapper[5043]: I1125 07:34:11.731573 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" podUID="bd1a4fec-6d8d-4ff8-a015-eaf438d76965" containerName="dnsmasq-dns" containerID="cri-o://0fc86210600a5ee4c5aa24a4c1a56c2c65771be667f47e6abcf32f6dc5b17553" gracePeriod=10 Nov 25 07:34:11 crc kubenswrapper[5043]: I1125 07:34:11.733785 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" Nov 25 07:34:11 crc kubenswrapper[5043]: I1125 07:34:11.780101 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7776d59f89-pzprs"] Nov 25 07:34:11 crc kubenswrapper[5043]: E1125 07:34:11.780461 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0360da29-fc4a-44ea-9d0e-e446d69037bc" containerName="neutron-db-sync" Nov 25 07:34:11 crc kubenswrapper[5043]: I1125 07:34:11.780477 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="0360da29-fc4a-44ea-9d0e-e446d69037bc" containerName="neutron-db-sync" Nov 25 07:34:11 crc kubenswrapper[5043]: I1125 07:34:11.780703 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="0360da29-fc4a-44ea-9d0e-e446d69037bc" 
containerName="neutron-db-sync" Nov 25 07:34:11 crc kubenswrapper[5043]: I1125 07:34:11.781757 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7776d59f89-pzprs" Nov 25 07:34:11 crc kubenswrapper[5043]: I1125 07:34:11.798872 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7776d59f89-pzprs"] Nov 25 07:34:11 crc kubenswrapper[5043]: I1125 07:34:11.890840 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5985cc949b-rw6ms"] Nov 25 07:34:11 crc kubenswrapper[5043]: I1125 07:34:11.892873 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5985cc949b-rw6ms" Nov 25 07:34:11 crc kubenswrapper[5043]: I1125 07:34:11.896865 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 25 07:34:11 crc kubenswrapper[5043]: I1125 07:34:11.897051 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 25 07:34:11 crc kubenswrapper[5043]: I1125 07:34:11.897098 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-5htmq" Nov 25 07:34:11 crc kubenswrapper[5043]: I1125 07:34:11.897263 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 25 07:34:11 crc kubenswrapper[5043]: I1125 07:34:11.899347 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d44a94a-e868-48c4-91ce-58b2290badc9-config\") pod \"dnsmasq-dns-7776d59f89-pzprs\" (UID: \"9d44a94a-e868-48c4-91ce-58b2290badc9\") " pod="openstack/dnsmasq-dns-7776d59f89-pzprs" Nov 25 07:34:11 crc kubenswrapper[5043]: I1125 07:34:11.899399 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/9d44a94a-e868-48c4-91ce-58b2290badc9-ovsdbserver-sb\") pod \"dnsmasq-dns-7776d59f89-pzprs\" (UID: \"9d44a94a-e868-48c4-91ce-58b2290badc9\") " pod="openstack/dnsmasq-dns-7776d59f89-pzprs" Nov 25 07:34:11 crc kubenswrapper[5043]: I1125 07:34:11.899433 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d44a94a-e868-48c4-91ce-58b2290badc9-ovsdbserver-nb\") pod \"dnsmasq-dns-7776d59f89-pzprs\" (UID: \"9d44a94a-e868-48c4-91ce-58b2290badc9\") " pod="openstack/dnsmasq-dns-7776d59f89-pzprs" Nov 25 07:34:11 crc kubenswrapper[5043]: I1125 07:34:11.899506 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkpt2\" (UniqueName: \"kubernetes.io/projected/9d44a94a-e868-48c4-91ce-58b2290badc9-kube-api-access-jkpt2\") pod \"dnsmasq-dns-7776d59f89-pzprs\" (UID: \"9d44a94a-e868-48c4-91ce-58b2290badc9\") " pod="openstack/dnsmasq-dns-7776d59f89-pzprs" Nov 25 07:34:11 crc kubenswrapper[5043]: I1125 07:34:11.899529 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d44a94a-e868-48c4-91ce-58b2290badc9-dns-svc\") pod \"dnsmasq-dns-7776d59f89-pzprs\" (UID: \"9d44a94a-e868-48c4-91ce-58b2290badc9\") " pod="openstack/dnsmasq-dns-7776d59f89-pzprs" Nov 25 07:34:11 crc kubenswrapper[5043]: I1125 07:34:11.904693 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5985cc949b-rw6ms"] Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.001012 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d44a94a-e868-48c4-91ce-58b2290badc9-ovsdbserver-sb\") pod \"dnsmasq-dns-7776d59f89-pzprs\" (UID: \"9d44a94a-e868-48c4-91ce-58b2290badc9\") " pod="openstack/dnsmasq-dns-7776d59f89-pzprs" Nov 25 
07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.001102 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/292b02ee-fd80-4582-a1fa-ef9aa27c941c-ovndb-tls-certs\") pod \"neutron-5985cc949b-rw6ms\" (UID: \"292b02ee-fd80-4582-a1fa-ef9aa27c941c\") " pod="openstack/neutron-5985cc949b-rw6ms" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.001161 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d44a94a-e868-48c4-91ce-58b2290badc9-ovsdbserver-nb\") pod \"dnsmasq-dns-7776d59f89-pzprs\" (UID: \"9d44a94a-e868-48c4-91ce-58b2290badc9\") " pod="openstack/dnsmasq-dns-7776d59f89-pzprs" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.001198 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/292b02ee-fd80-4582-a1fa-ef9aa27c941c-config\") pod \"neutron-5985cc949b-rw6ms\" (UID: \"292b02ee-fd80-4582-a1fa-ef9aa27c941c\") " pod="openstack/neutron-5985cc949b-rw6ms" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.001252 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/292b02ee-fd80-4582-a1fa-ef9aa27c941c-httpd-config\") pod \"neutron-5985cc949b-rw6ms\" (UID: \"292b02ee-fd80-4582-a1fa-ef9aa27c941c\") " pod="openstack/neutron-5985cc949b-rw6ms" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.001310 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p42v5\" (UniqueName: \"kubernetes.io/projected/292b02ee-fd80-4582-a1fa-ef9aa27c941c-kube-api-access-p42v5\") pod \"neutron-5985cc949b-rw6ms\" (UID: \"292b02ee-fd80-4582-a1fa-ef9aa27c941c\") " pod="openstack/neutron-5985cc949b-rw6ms" Nov 25 07:34:12 crc 
kubenswrapper[5043]: I1125 07:34:12.001362 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkpt2\" (UniqueName: \"kubernetes.io/projected/9d44a94a-e868-48c4-91ce-58b2290badc9-kube-api-access-jkpt2\") pod \"dnsmasq-dns-7776d59f89-pzprs\" (UID: \"9d44a94a-e868-48c4-91ce-58b2290badc9\") " pod="openstack/dnsmasq-dns-7776d59f89-pzprs" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.001384 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d44a94a-e868-48c4-91ce-58b2290badc9-dns-svc\") pod \"dnsmasq-dns-7776d59f89-pzprs\" (UID: \"9d44a94a-e868-48c4-91ce-58b2290badc9\") " pod="openstack/dnsmasq-dns-7776d59f89-pzprs" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.001476 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/292b02ee-fd80-4582-a1fa-ef9aa27c941c-combined-ca-bundle\") pod \"neutron-5985cc949b-rw6ms\" (UID: \"292b02ee-fd80-4582-a1fa-ef9aa27c941c\") " pod="openstack/neutron-5985cc949b-rw6ms" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.001522 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d44a94a-e868-48c4-91ce-58b2290badc9-config\") pod \"dnsmasq-dns-7776d59f89-pzprs\" (UID: \"9d44a94a-e868-48c4-91ce-58b2290badc9\") " pod="openstack/dnsmasq-dns-7776d59f89-pzprs" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.002680 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d44a94a-e868-48c4-91ce-58b2290badc9-config\") pod \"dnsmasq-dns-7776d59f89-pzprs\" (UID: \"9d44a94a-e868-48c4-91ce-58b2290badc9\") " pod="openstack/dnsmasq-dns-7776d59f89-pzprs" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.003462 5043 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d44a94a-e868-48c4-91ce-58b2290badc9-ovsdbserver-sb\") pod \"dnsmasq-dns-7776d59f89-pzprs\" (UID: \"9d44a94a-e868-48c4-91ce-58b2290badc9\") " pod="openstack/dnsmasq-dns-7776d59f89-pzprs" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.004158 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d44a94a-e868-48c4-91ce-58b2290badc9-ovsdbserver-nb\") pod \"dnsmasq-dns-7776d59f89-pzprs\" (UID: \"9d44a94a-e868-48c4-91ce-58b2290badc9\") " pod="openstack/dnsmasq-dns-7776d59f89-pzprs" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.005833 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d44a94a-e868-48c4-91ce-58b2290badc9-dns-svc\") pod \"dnsmasq-dns-7776d59f89-pzprs\" (UID: \"9d44a94a-e868-48c4-91ce-58b2290badc9\") " pod="openstack/dnsmasq-dns-7776d59f89-pzprs" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.029376 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkpt2\" (UniqueName: \"kubernetes.io/projected/9d44a94a-e868-48c4-91ce-58b2290badc9-kube-api-access-jkpt2\") pod \"dnsmasq-dns-7776d59f89-pzprs\" (UID: \"9d44a94a-e868-48c4-91ce-58b2290badc9\") " pod="openstack/dnsmasq-dns-7776d59f89-pzprs" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.034412 5043 generic.go:334] "Generic (PLEG): container finished" podID="bd1a4fec-6d8d-4ff8-a015-eaf438d76965" containerID="0fc86210600a5ee4c5aa24a4c1a56c2c65771be667f47e6abcf32f6dc5b17553" exitCode=0 Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.034456 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" event={"ID":"bd1a4fec-6d8d-4ff8-a015-eaf438d76965","Type":"ContainerDied","Data":"0fc86210600a5ee4c5aa24a4c1a56c2c65771be667f47e6abcf32f6dc5b17553"} Nov 
25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.103310 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p42v5\" (UniqueName: \"kubernetes.io/projected/292b02ee-fd80-4582-a1fa-ef9aa27c941c-kube-api-access-p42v5\") pod \"neutron-5985cc949b-rw6ms\" (UID: \"292b02ee-fd80-4582-a1fa-ef9aa27c941c\") " pod="openstack/neutron-5985cc949b-rw6ms" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.103481 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/292b02ee-fd80-4582-a1fa-ef9aa27c941c-combined-ca-bundle\") pod \"neutron-5985cc949b-rw6ms\" (UID: \"292b02ee-fd80-4582-a1fa-ef9aa27c941c\") " pod="openstack/neutron-5985cc949b-rw6ms" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.103554 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/292b02ee-fd80-4582-a1fa-ef9aa27c941c-ovndb-tls-certs\") pod \"neutron-5985cc949b-rw6ms\" (UID: \"292b02ee-fd80-4582-a1fa-ef9aa27c941c\") " pod="openstack/neutron-5985cc949b-rw6ms" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.103584 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/292b02ee-fd80-4582-a1fa-ef9aa27c941c-config\") pod \"neutron-5985cc949b-rw6ms\" (UID: \"292b02ee-fd80-4582-a1fa-ef9aa27c941c\") " pod="openstack/neutron-5985cc949b-rw6ms" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.103614 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/292b02ee-fd80-4582-a1fa-ef9aa27c941c-httpd-config\") pod \"neutron-5985cc949b-rw6ms\" (UID: \"292b02ee-fd80-4582-a1fa-ef9aa27c941c\") " pod="openstack/neutron-5985cc949b-rw6ms" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.109463 5043 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/292b02ee-fd80-4582-a1fa-ef9aa27c941c-ovndb-tls-certs\") pod \"neutron-5985cc949b-rw6ms\" (UID: \"292b02ee-fd80-4582-a1fa-ef9aa27c941c\") " pod="openstack/neutron-5985cc949b-rw6ms" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.112806 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/292b02ee-fd80-4582-a1fa-ef9aa27c941c-config\") pod \"neutron-5985cc949b-rw6ms\" (UID: \"292b02ee-fd80-4582-a1fa-ef9aa27c941c\") " pod="openstack/neutron-5985cc949b-rw6ms" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.126925 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p42v5\" (UniqueName: \"kubernetes.io/projected/292b02ee-fd80-4582-a1fa-ef9aa27c941c-kube-api-access-p42v5\") pod \"neutron-5985cc949b-rw6ms\" (UID: \"292b02ee-fd80-4582-a1fa-ef9aa27c941c\") " pod="openstack/neutron-5985cc949b-rw6ms" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.128376 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/292b02ee-fd80-4582-a1fa-ef9aa27c941c-combined-ca-bundle\") pod \"neutron-5985cc949b-rw6ms\" (UID: \"292b02ee-fd80-4582-a1fa-ef9aa27c941c\") " pod="openstack/neutron-5985cc949b-rw6ms" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.132032 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7776d59f89-pzprs" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.132613 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/292b02ee-fd80-4582-a1fa-ef9aa27c941c-httpd-config\") pod \"neutron-5985cc949b-rw6ms\" (UID: \"292b02ee-fd80-4582-a1fa-ef9aa27c941c\") " pod="openstack/neutron-5985cc949b-rw6ms" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.250320 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5985cc949b-rw6ms" Nov 25 07:34:12 crc kubenswrapper[5043]: I1125 07:34:12.573183 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" podUID="bd1a4fec-6d8d-4ff8-a015-eaf438d76965" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.148:5353: connect: connection refused" Nov 25 07:34:14 crc kubenswrapper[5043]: I1125 07:34:14.188945 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-774d658888-zs7d4" Nov 25 07:34:14 crc kubenswrapper[5043]: I1125 07:34:14.208009 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-586d64c99c-q5jk2"] Nov 25 07:34:14 crc kubenswrapper[5043]: I1125 07:34:14.209386 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:14 crc kubenswrapper[5043]: W1125 07:34:14.211422 5043 reflector.go:561] object-"openstack"/"cert-neutron-internal-svc": failed to list *v1.Secret: secrets "cert-neutron-internal-svc" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Nov 25 07:34:14 crc kubenswrapper[5043]: E1125 07:34:14.211466 5043 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-neutron-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-neutron-internal-svc\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 07:34:14 crc kubenswrapper[5043]: W1125 07:34:14.211628 5043 reflector.go:561] object-"openstack"/"cert-neutron-public-svc": failed to list *v1.Secret: secrets "cert-neutron-public-svc" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Nov 25 07:34:14 crc kubenswrapper[5043]: E1125 07:34:14.211648 5043 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-neutron-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-neutron-public-svc\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 07:34:14 crc kubenswrapper[5043]: I1125 07:34:14.239855 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-586d64c99c-q5jk2"] Nov 25 07:34:14 crc kubenswrapper[5043]: I1125 07:34:14.354695 5043 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6e087724-2bb8-47c4-9687-cd1e82fb5a1f-httpd-config\") pod \"neutron-586d64c99c-q5jk2\" (UID: \"6e087724-2bb8-47c4-9687-cd1e82fb5a1f\") " pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:14 crc kubenswrapper[5043]: I1125 07:34:14.354811 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcdch\" (UniqueName: \"kubernetes.io/projected/6e087724-2bb8-47c4-9687-cd1e82fb5a1f-kube-api-access-vcdch\") pod \"neutron-586d64c99c-q5jk2\" (UID: \"6e087724-2bb8-47c4-9687-cd1e82fb5a1f\") " pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:14 crc kubenswrapper[5043]: I1125 07:34:14.354856 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e087724-2bb8-47c4-9687-cd1e82fb5a1f-combined-ca-bundle\") pod \"neutron-586d64c99c-q5jk2\" (UID: \"6e087724-2bb8-47c4-9687-cd1e82fb5a1f\") " pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:14 crc kubenswrapper[5043]: I1125 07:34:14.354930 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e087724-2bb8-47c4-9687-cd1e82fb5a1f-internal-tls-certs\") pod \"neutron-586d64c99c-q5jk2\" (UID: \"6e087724-2bb8-47c4-9687-cd1e82fb5a1f\") " pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:14 crc kubenswrapper[5043]: I1125 07:34:14.354952 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e087724-2bb8-47c4-9687-cd1e82fb5a1f-public-tls-certs\") pod \"neutron-586d64c99c-q5jk2\" (UID: \"6e087724-2bb8-47c4-9687-cd1e82fb5a1f\") " pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:14 crc kubenswrapper[5043]: I1125 07:34:14.354971 5043 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6e087724-2bb8-47c4-9687-cd1e82fb5a1f-config\") pod \"neutron-586d64c99c-q5jk2\" (UID: \"6e087724-2bb8-47c4-9687-cd1e82fb5a1f\") " pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:14 crc kubenswrapper[5043]: I1125 07:34:14.355016 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e087724-2bb8-47c4-9687-cd1e82fb5a1f-ovndb-tls-certs\") pod \"neutron-586d64c99c-q5jk2\" (UID: \"6e087724-2bb8-47c4-9687-cd1e82fb5a1f\") " pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:14 crc kubenswrapper[5043]: I1125 07:34:14.430358 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-774d658888-zs7d4" Nov 25 07:34:14 crc kubenswrapper[5043]: I1125 07:34:14.455968 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcdch\" (UniqueName: \"kubernetes.io/projected/6e087724-2bb8-47c4-9687-cd1e82fb5a1f-kube-api-access-vcdch\") pod \"neutron-586d64c99c-q5jk2\" (UID: \"6e087724-2bb8-47c4-9687-cd1e82fb5a1f\") " pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:14 crc kubenswrapper[5043]: I1125 07:34:14.456017 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e087724-2bb8-47c4-9687-cd1e82fb5a1f-combined-ca-bundle\") pod \"neutron-586d64c99c-q5jk2\" (UID: \"6e087724-2bb8-47c4-9687-cd1e82fb5a1f\") " pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:14 crc kubenswrapper[5043]: I1125 07:34:14.456066 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e087724-2bb8-47c4-9687-cd1e82fb5a1f-internal-tls-certs\") pod \"neutron-586d64c99c-q5jk2\" (UID: 
\"6e087724-2bb8-47c4-9687-cd1e82fb5a1f\") " pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:14 crc kubenswrapper[5043]: I1125 07:34:14.456085 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e087724-2bb8-47c4-9687-cd1e82fb5a1f-public-tls-certs\") pod \"neutron-586d64c99c-q5jk2\" (UID: \"6e087724-2bb8-47c4-9687-cd1e82fb5a1f\") " pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:14 crc kubenswrapper[5043]: I1125 07:34:14.456107 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6e087724-2bb8-47c4-9687-cd1e82fb5a1f-config\") pod \"neutron-586d64c99c-q5jk2\" (UID: \"6e087724-2bb8-47c4-9687-cd1e82fb5a1f\") " pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:14 crc kubenswrapper[5043]: I1125 07:34:14.456131 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e087724-2bb8-47c4-9687-cd1e82fb5a1f-ovndb-tls-certs\") pod \"neutron-586d64c99c-q5jk2\" (UID: \"6e087724-2bb8-47c4-9687-cd1e82fb5a1f\") " pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:14 crc kubenswrapper[5043]: I1125 07:34:14.456172 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6e087724-2bb8-47c4-9687-cd1e82fb5a1f-httpd-config\") pod \"neutron-586d64c99c-q5jk2\" (UID: \"6e087724-2bb8-47c4-9687-cd1e82fb5a1f\") " pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:14 crc kubenswrapper[5043]: I1125 07:34:14.486292 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e087724-2bb8-47c4-9687-cd1e82fb5a1f-combined-ca-bundle\") pod \"neutron-586d64c99c-q5jk2\" (UID: \"6e087724-2bb8-47c4-9687-cd1e82fb5a1f\") " pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:14 crc 
kubenswrapper[5043]: I1125 07:34:14.494336 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcdch\" (UniqueName: \"kubernetes.io/projected/6e087724-2bb8-47c4-9687-cd1e82fb5a1f-kube-api-access-vcdch\") pod \"neutron-586d64c99c-q5jk2\" (UID: \"6e087724-2bb8-47c4-9687-cd1e82fb5a1f\") " pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:14 crc kubenswrapper[5043]: I1125 07:34:14.495378 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e087724-2bb8-47c4-9687-cd1e82fb5a1f-ovndb-tls-certs\") pod \"neutron-586d64c99c-q5jk2\" (UID: \"6e087724-2bb8-47c4-9687-cd1e82fb5a1f\") " pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:14 crc kubenswrapper[5043]: I1125 07:34:14.518033 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6e087724-2bb8-47c4-9687-cd1e82fb5a1f-httpd-config\") pod \"neutron-586d64c99c-q5jk2\" (UID: \"6e087724-2bb8-47c4-9687-cd1e82fb5a1f\") " pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:14 crc kubenswrapper[5043]: I1125 07:34:14.518084 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6e087724-2bb8-47c4-9687-cd1e82fb5a1f-config\") pod \"neutron-586d64c99c-q5jk2\" (UID: \"6e087724-2bb8-47c4-9687-cd1e82fb5a1f\") " pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.066685 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.072994 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e087724-2bb8-47c4-9687-cd1e82fb5a1f-public-tls-certs\") pod \"neutron-586d64c99c-q5jk2\" (UID: \"6e087724-2bb8-47c4-9687-cd1e82fb5a1f\") " 
pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.118946 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" event={"ID":"bd1a4fec-6d8d-4ff8-a015-eaf438d76965","Type":"ContainerDied","Data":"80c1198b993d6f9730d793162f9219cae6eae1227489609328a9e66207e6d789"} Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.118997 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80c1198b993d6f9730d793162f9219cae6eae1227489609328a9e66207e6d789" Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.135917 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.215653 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.225804 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7776d59f89-pzprs"] Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.230759 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e087724-2bb8-47c4-9687-cd1e82fb5a1f-internal-tls-certs\") pod \"neutron-586d64c99c-q5jk2\" (UID: \"6e087724-2bb8-47c4-9687-cd1e82fb5a1f\") " pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.295136 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-ovsdbserver-sb\") pod \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\" (UID: \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\") " Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.295578 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-ovsdbserver-nb\") pod \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\" (UID: \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\") " Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.295697 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh9f7\" (UniqueName: \"kubernetes.io/projected/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-kube-api-access-jh9f7\") pod \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\" (UID: \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\") " Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.295748 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-config\") pod \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\" (UID: \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\") " Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.295860 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-dns-svc\") pod \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\" (UID: \"bd1a4fec-6d8d-4ff8-a015-eaf438d76965\") " Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.385030 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-kube-api-access-jh9f7" (OuterVolumeSpecName: "kube-api-access-jh9f7") pod "bd1a4fec-6d8d-4ff8-a015-eaf438d76965" (UID: "bd1a4fec-6d8d-4ff8-a015-eaf438d76965"). InnerVolumeSpecName "kube-api-access-jh9f7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.399403 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jh9f7\" (UniqueName: \"kubernetes.io/projected/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-kube-api-access-jh9f7\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.432800 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.603077 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bd1a4fec-6d8d-4ff8-a015-eaf438d76965" (UID: "bd1a4fec-6d8d-4ff8-a015-eaf438d76965"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.604437 5043 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.607091 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bd1a4fec-6d8d-4ff8-a015-eaf438d76965" (UID: "bd1a4fec-6d8d-4ff8-a015-eaf438d76965"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.611825 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-config" (OuterVolumeSpecName: "config") pod "bd1a4fec-6d8d-4ff8-a015-eaf438d76965" (UID: "bd1a4fec-6d8d-4ff8-a015-eaf438d76965"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.626665 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd1a4fec-6d8d-4ff8-a015-eaf438d76965" (UID: "bd1a4fec-6d8d-4ff8-a015-eaf438d76965"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.707473 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.707714 5043 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.707724 5043 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd1a4fec-6d8d-4ff8-a015-eaf438d76965-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:15 crc kubenswrapper[5043]: I1125 07:34:15.904674 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-b5bc7cfb-2sfts" podUID="9955ab7e-1d74-461a-a9b2-73e9f82d48fe" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": 
dial tcp 10.217.0.142:8443: connect: connection refused" Nov 25 07:34:16 crc kubenswrapper[5043]: I1125 07:34:16.006248 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5f67c4b5d4-f96jj" podUID="13e8a8ee-bfe8-415b-b76f-89d7d7296659" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Nov 25 07:34:16 crc kubenswrapper[5043]: I1125 07:34:16.149977 5043 generic.go:334] "Generic (PLEG): container finished" podID="9d44a94a-e868-48c4-91ce-58b2290badc9" containerID="22b0efd3b2128f3d16ab0acaa995f9745adaea0ed4ffb55b75e53f4233dd72fe" exitCode=0 Nov 25 07:34:16 crc kubenswrapper[5043]: I1125 07:34:16.150056 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7776d59f89-pzprs" event={"ID":"9d44a94a-e868-48c4-91ce-58b2290badc9","Type":"ContainerDied","Data":"22b0efd3b2128f3d16ab0acaa995f9745adaea0ed4ffb55b75e53f4233dd72fe"} Nov 25 07:34:16 crc kubenswrapper[5043]: I1125 07:34:16.150334 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7776d59f89-pzprs" event={"ID":"9d44a94a-e868-48c4-91ce-58b2290badc9","Type":"ContainerStarted","Data":"12c50f78a5a95888fa48ebc6285d7d58917ee63da488d54b4902bcd64467ecef"} Nov 25 07:34:16 crc kubenswrapper[5043]: I1125 07:34:16.181073 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d548b9b8f-hjpgv" Nov 25 07:34:16 crc kubenswrapper[5043]: I1125 07:34:16.181327 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" containerName="ceilometer-central-agent" containerID="cri-o://454fdb2f0f73c6597e7069e12b806804c5e1271ca9ad8bd486816d448f18de78" gracePeriod=30 Nov 25 07:34:16 crc kubenswrapper[5043]: I1125 07:34:16.181363 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" containerName="ceilometer-notification-agent" containerID="cri-o://5835f75e81ebc5781555280ba2b8c5a0b9c0b4cda0beaeced5e4309980a2fe70" gracePeriod=30 Nov 25 07:34:16 crc kubenswrapper[5043]: I1125 07:34:16.181366 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" containerName="proxy-httpd" containerID="cri-o://5fd33eaef3475f820343a971afd2a4ed7c6f63d82ac0506cfd71df353ea58c67" gracePeriod=30 Nov 25 07:34:16 crc kubenswrapper[5043]: I1125 07:34:16.181335 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" containerName="sg-core" containerID="cri-o://ca615206e43a29a00ef8cce621d97a07234623d0a1504cd9aa6d6d6c99e8c563" gracePeriod=30 Nov 25 07:34:16 crc kubenswrapper[5043]: I1125 07:34:16.181508 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9","Type":"ContainerStarted","Data":"5fd33eaef3475f820343a971afd2a4ed7c6f63d82ac0506cfd71df353ea58c67"} Nov 25 07:34:16 crc kubenswrapper[5043]: I1125 07:34:16.181893 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 07:34:16 crc kubenswrapper[5043]: I1125 07:34:16.187576 5043 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-586d64c99c-q5jk2"] Nov 25 07:34:16 crc kubenswrapper[5043]: I1125 07:34:16.234654 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.970709822 podStartE2EDuration="1m0.234633777s" podCreationTimestamp="2025-11-25 07:33:16 +0000 UTC" firstStartedPulling="2025-11-25 07:33:17.695980729 +0000 UTC m=+1061.864176450" lastFinishedPulling="2025-11-25 07:34:14.959904684 +0000 UTC m=+1119.128100405" observedRunningTime="2025-11-25 07:34:16.217844757 +0000 UTC m=+1120.386040498" watchObservedRunningTime="2025-11-25 07:34:16.234633777 +0000 UTC m=+1120.402829498" Nov 25 07:34:16 crc kubenswrapper[5043]: I1125 07:34:16.420435 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d548b9b8f-hjpgv"] Nov 25 07:34:16 crc kubenswrapper[5043]: I1125 07:34:16.435439 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d548b9b8f-hjpgv"] Nov 25 07:34:16 crc kubenswrapper[5043]: I1125 07:34:16.755046 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5985cc949b-rw6ms"] Nov 25 07:34:16 crc kubenswrapper[5043]: I1125 07:34:16.918432 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:16 crc kubenswrapper[5043]: I1125 07:34:16.979572 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd1a4fec-6d8d-4ff8-a015-eaf438d76965" path="/var/lib/kubelet/pods/bd1a4fec-6d8d-4ff8-a015-eaf438d76965/volumes" Nov 25 07:34:17 crc kubenswrapper[5043]: I1125 07:34:17.081131 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7d74b989db-9zq82" Nov 25 07:34:17 crc kubenswrapper[5043]: I1125 07:34:17.141083 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-774d658888-zs7d4"] Nov 25 07:34:17 crc 
kubenswrapper[5043]: I1125 07:34:17.141571 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-774d658888-zs7d4" podUID="cddd01e7-479d-4917-a3e7-914cf051fcd0" containerName="barbican-api-log" containerID="cri-o://9e4805cb391773500632e38c9d12656e8fa86efb50cd12e6d14ee1cf90011b1f" gracePeriod=30 Nov 25 07:34:17 crc kubenswrapper[5043]: I1125 07:34:17.142102 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-774d658888-zs7d4" podUID="cddd01e7-479d-4917-a3e7-914cf051fcd0" containerName="barbican-api" containerID="cri-o://b9897c40811d196acce9fa7c4386b05529caafdbe5476a14f496848e499ddc6b" gracePeriod=30 Nov 25 07:34:17 crc kubenswrapper[5043]: I1125 07:34:17.165868 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-774d658888-zs7d4" podUID="cddd01e7-479d-4917-a3e7-914cf051fcd0" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.149:9311/healthcheck\": EOF" Nov 25 07:34:17 crc kubenswrapper[5043]: I1125 07:34:17.207293 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jzr78" event={"ID":"2de64291-b46f-4ba3-bdec-a3bad5873881","Type":"ContainerStarted","Data":"75f2592db3c5a8441facc75e0de58769459996f5d7f711900a2a1957128adaa9"} Nov 25 07:34:17 crc kubenswrapper[5043]: I1125 07:34:17.209185 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5985cc949b-rw6ms" event={"ID":"292b02ee-fd80-4582-a1fa-ef9aa27c941c","Type":"ContainerStarted","Data":"8d4e5c2f293d4882994ed680c131bacb731037b791a43f350d297b37567ca96b"} Nov 25 07:34:17 crc kubenswrapper[5043]: I1125 07:34:17.209275 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5985cc949b-rw6ms" event={"ID":"292b02ee-fd80-4582-a1fa-ef9aa27c941c","Type":"ContainerStarted","Data":"65f7d180d450adbd6e76bb8c60190d28bf35f3790a8c0dc8f89ff373f1f330a7"} Nov 25 07:34:17 crc kubenswrapper[5043]: 
I1125 07:34:17.214512 5043 generic.go:334] "Generic (PLEG): container finished" podID="7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" containerID="5fd33eaef3475f820343a971afd2a4ed7c6f63d82ac0506cfd71df353ea58c67" exitCode=0 Nov 25 07:34:17 crc kubenswrapper[5043]: I1125 07:34:17.215497 5043 generic.go:334] "Generic (PLEG): container finished" podID="7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" containerID="ca615206e43a29a00ef8cce621d97a07234623d0a1504cd9aa6d6d6c99e8c563" exitCode=2 Nov 25 07:34:17 crc kubenswrapper[5043]: I1125 07:34:17.215580 5043 generic.go:334] "Generic (PLEG): container finished" podID="7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" containerID="454fdb2f0f73c6597e7069e12b806804c5e1271ca9ad8bd486816d448f18de78" exitCode=0 Nov 25 07:34:17 crc kubenswrapper[5043]: I1125 07:34:17.214675 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9","Type":"ContainerDied","Data":"5fd33eaef3475f820343a971afd2a4ed7c6f63d82ac0506cfd71df353ea58c67"} Nov 25 07:34:17 crc kubenswrapper[5043]: I1125 07:34:17.216722 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9","Type":"ContainerDied","Data":"ca615206e43a29a00ef8cce621d97a07234623d0a1504cd9aa6d6d6c99e8c563"} Nov 25 07:34:17 crc kubenswrapper[5043]: I1125 07:34:17.216836 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9","Type":"ContainerDied","Data":"454fdb2f0f73c6597e7069e12b806804c5e1271ca9ad8bd486816d448f18de78"} Nov 25 07:34:17 crc kubenswrapper[5043]: I1125 07:34:17.219778 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-586d64c99c-q5jk2" event={"ID":"6e087724-2bb8-47c4-9687-cd1e82fb5a1f","Type":"ContainerStarted","Data":"37693fa6b0ea8b4b73a63abccf659c77aa62f666576850ed70a54d2d8a5c6eed"} Nov 25 07:34:17 crc kubenswrapper[5043]: I1125 07:34:17.219852 5043 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-586d64c99c-q5jk2" event={"ID":"6e087724-2bb8-47c4-9687-cd1e82fb5a1f","Type":"ContainerStarted","Data":"4b1f22b723b97c0d29d665cdda9da6020672f503918c52ef5723f9f12f31212c"} Nov 25 07:34:17 crc kubenswrapper[5043]: I1125 07:34:17.219868 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-586d64c99c-q5jk2" event={"ID":"6e087724-2bb8-47c4-9687-cd1e82fb5a1f","Type":"ContainerStarted","Data":"12598e1721b9ab567c25821c8121eaab1167421186f1036d0e5c30d17db560a6"} Nov 25 07:34:17 crc kubenswrapper[5043]: I1125 07:34:17.240585 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-jzr78" podStartSLOduration=3.8755944319999998 podStartE2EDuration="1m1.240567324s" podCreationTimestamp="2025-11-25 07:33:16 +0000 UTC" firstStartedPulling="2025-11-25 07:33:17.594836009 +0000 UTC m=+1061.763031730" lastFinishedPulling="2025-11-25 07:34:14.959808901 +0000 UTC m=+1119.128004622" observedRunningTime="2025-11-25 07:34:17.231531341 +0000 UTC m=+1121.399727062" watchObservedRunningTime="2025-11-25 07:34:17.240567324 +0000 UTC m=+1121.408763045" Nov 25 07:34:17 crc kubenswrapper[5043]: I1125 07:34:17.249098 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-586d64c99c-q5jk2" podStartSLOduration=3.249076722 podStartE2EDuration="3.249076722s" podCreationTimestamp="2025-11-25 07:34:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:34:17.248193288 +0000 UTC m=+1121.416389009" watchObservedRunningTime="2025-11-25 07:34:17.249076722 +0000 UTC m=+1121.417272443" Nov 25 07:34:17 crc kubenswrapper[5043]: I1125 07:34:17.310200 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:34:17 crc kubenswrapper[5043]: I1125 07:34:17.310589 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.158333 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.229792 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7776d59f89-pzprs" event={"ID":"9d44a94a-e868-48c4-91ce-58b2290badc9","Type":"ContainerStarted","Data":"b039e5141642fb12c28d1d89aecd0646837df085d8a700f1518cb758ef80713c"} Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.230138 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7776d59f89-pzprs" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.234840 5043 generic.go:334] "Generic (PLEG): container finished" podID="cddd01e7-479d-4917-a3e7-914cf051fcd0" containerID="9e4805cb391773500632e38c9d12656e8fa86efb50cd12e6d14ee1cf90011b1f" exitCode=143 Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.234897 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-774d658888-zs7d4" event={"ID":"cddd01e7-479d-4917-a3e7-914cf051fcd0","Type":"ContainerDied","Data":"9e4805cb391773500632e38c9d12656e8fa86efb50cd12e6d14ee1cf90011b1f"} Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.238207 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5985cc949b-rw6ms" 
event={"ID":"292b02ee-fd80-4582-a1fa-ef9aa27c941c","Type":"ContainerStarted","Data":"66e337fda13ba9e0a42874c1e4e4fe0fefefa6d6826e67bc20cc96b283c06eb0"} Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.239197 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5985cc949b-rw6ms" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.239276 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-config-data\") pod \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.239318 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-sg-core-conf-yaml\") pod \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.239395 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-log-httpd\") pod \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.239458 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-combined-ca-bundle\") pod \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.239482 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-scripts\") pod 
\"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.239507 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-run-httpd\") pod \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.239536 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8nhj\" (UniqueName: \"kubernetes.io/projected/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-kube-api-access-v8nhj\") pod \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\" (UID: \"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9\") " Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.239947 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" (UID: "7e7f3fb9-2f36-43e8-913f-ece0a0934cf9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.240716 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" (UID: "7e7f3fb9-2f36-43e8-913f-ece0a0934cf9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.246751 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-scripts" (OuterVolumeSpecName: "scripts") pod "7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" (UID: "7e7f3fb9-2f36-43e8-913f-ece0a0934cf9"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.246777 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-kube-api-access-v8nhj" (OuterVolumeSpecName: "kube-api-access-v8nhj") pod "7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" (UID: "7e7f3fb9-2f36-43e8-913f-ece0a0934cf9"). InnerVolumeSpecName "kube-api-access-v8nhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.249391 5043 generic.go:334] "Generic (PLEG): container finished" podID="7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" containerID="5835f75e81ebc5781555280ba2b8c5a0b9c0b4cda0beaeced5e4309980a2fe70" exitCode=0 Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.250290 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.250311 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9","Type":"ContainerDied","Data":"5835f75e81ebc5781555280ba2b8c5a0b9c0b4cda0beaeced5e4309980a2fe70"} Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.250338 5043 scope.go:117] "RemoveContainer" containerID="5fd33eaef3475f820343a971afd2a4ed7c6f63d82ac0506cfd71df353ea58c67" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.250900 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.250935 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e7f3fb9-2f36-43e8-913f-ece0a0934cf9","Type":"ContainerDied","Data":"e9cf9522bccdec51db944ac1244e0013d95c86ec39e90b11686af5cd9096ff79"} Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.252726 5043 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7776d59f89-pzprs" podStartSLOduration=7.252708036 podStartE2EDuration="7.252708036s" podCreationTimestamp="2025-11-25 07:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:34:18.250206899 +0000 UTC m=+1122.418402620" watchObservedRunningTime="2025-11-25 07:34:18.252708036 +0000 UTC m=+1122.420903747" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.274402 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5985cc949b-rw6ms" podStartSLOduration=7.274386047 podStartE2EDuration="7.274386047s" podCreationTimestamp="2025-11-25 07:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:34:18.27300126 +0000 UTC m=+1122.441196991" watchObservedRunningTime="2025-11-25 07:34:18.274386047 +0000 UTC m=+1122.442581768" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.320529 5043 scope.go:117] "RemoveContainer" containerID="ca615206e43a29a00ef8cce621d97a07234623d0a1504cd9aa6d6d6c99e8c563" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.336136 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" (UID: "7e7f3fb9-2f36-43e8-913f-ece0a0934cf9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.345948 5043 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.345990 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.346003 5043 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.346015 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8nhj\" (UniqueName: \"kubernetes.io/projected/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-kube-api-access-v8nhj\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.346028 5043 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.353725 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" (UID: "7e7f3fb9-2f36-43e8-913f-ece0a0934cf9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.355552 5043 scope.go:117] "RemoveContainer" containerID="5835f75e81ebc5781555280ba2b8c5a0b9c0b4cda0beaeced5e4309980a2fe70" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.391193 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-config-data" (OuterVolumeSpecName: "config-data") pod "7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" (UID: "7e7f3fb9-2f36-43e8-913f-ece0a0934cf9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.398139 5043 scope.go:117] "RemoveContainer" containerID="454fdb2f0f73c6597e7069e12b806804c5e1271ca9ad8bd486816d448f18de78" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.416114 5043 scope.go:117] "RemoveContainer" containerID="5fd33eaef3475f820343a971afd2a4ed7c6f63d82ac0506cfd71df353ea58c67" Nov 25 07:34:18 crc kubenswrapper[5043]: E1125 07:34:18.416584 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fd33eaef3475f820343a971afd2a4ed7c6f63d82ac0506cfd71df353ea58c67\": container with ID starting with 5fd33eaef3475f820343a971afd2a4ed7c6f63d82ac0506cfd71df353ea58c67 not found: ID does not exist" containerID="5fd33eaef3475f820343a971afd2a4ed7c6f63d82ac0506cfd71df353ea58c67" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.416630 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fd33eaef3475f820343a971afd2a4ed7c6f63d82ac0506cfd71df353ea58c67"} err="failed to get container status \"5fd33eaef3475f820343a971afd2a4ed7c6f63d82ac0506cfd71df353ea58c67\": rpc error: code = NotFound desc = could not find container \"5fd33eaef3475f820343a971afd2a4ed7c6f63d82ac0506cfd71df353ea58c67\": container with ID starting with 
5fd33eaef3475f820343a971afd2a4ed7c6f63d82ac0506cfd71df353ea58c67 not found: ID does not exist" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.416650 5043 scope.go:117] "RemoveContainer" containerID="ca615206e43a29a00ef8cce621d97a07234623d0a1504cd9aa6d6d6c99e8c563" Nov 25 07:34:18 crc kubenswrapper[5043]: E1125 07:34:18.416913 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca615206e43a29a00ef8cce621d97a07234623d0a1504cd9aa6d6d6c99e8c563\": container with ID starting with ca615206e43a29a00ef8cce621d97a07234623d0a1504cd9aa6d6d6c99e8c563 not found: ID does not exist" containerID="ca615206e43a29a00ef8cce621d97a07234623d0a1504cd9aa6d6d6c99e8c563" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.416932 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca615206e43a29a00ef8cce621d97a07234623d0a1504cd9aa6d6d6c99e8c563"} err="failed to get container status \"ca615206e43a29a00ef8cce621d97a07234623d0a1504cd9aa6d6d6c99e8c563\": rpc error: code = NotFound desc = could not find container \"ca615206e43a29a00ef8cce621d97a07234623d0a1504cd9aa6d6d6c99e8c563\": container with ID starting with ca615206e43a29a00ef8cce621d97a07234623d0a1504cd9aa6d6d6c99e8c563 not found: ID does not exist" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.416945 5043 scope.go:117] "RemoveContainer" containerID="5835f75e81ebc5781555280ba2b8c5a0b9c0b4cda0beaeced5e4309980a2fe70" Nov 25 07:34:18 crc kubenswrapper[5043]: E1125 07:34:18.417336 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5835f75e81ebc5781555280ba2b8c5a0b9c0b4cda0beaeced5e4309980a2fe70\": container with ID starting with 5835f75e81ebc5781555280ba2b8c5a0b9c0b4cda0beaeced5e4309980a2fe70 not found: ID does not exist" containerID="5835f75e81ebc5781555280ba2b8c5a0b9c0b4cda0beaeced5e4309980a2fe70" Nov 25 07:34:18 crc 
kubenswrapper[5043]: I1125 07:34:18.417357 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5835f75e81ebc5781555280ba2b8c5a0b9c0b4cda0beaeced5e4309980a2fe70"} err="failed to get container status \"5835f75e81ebc5781555280ba2b8c5a0b9c0b4cda0beaeced5e4309980a2fe70\": rpc error: code = NotFound desc = could not find container \"5835f75e81ebc5781555280ba2b8c5a0b9c0b4cda0beaeced5e4309980a2fe70\": container with ID starting with 5835f75e81ebc5781555280ba2b8c5a0b9c0b4cda0beaeced5e4309980a2fe70 not found: ID does not exist" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.417370 5043 scope.go:117] "RemoveContainer" containerID="454fdb2f0f73c6597e7069e12b806804c5e1271ca9ad8bd486816d448f18de78" Nov 25 07:34:18 crc kubenswrapper[5043]: E1125 07:34:18.417568 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"454fdb2f0f73c6597e7069e12b806804c5e1271ca9ad8bd486816d448f18de78\": container with ID starting with 454fdb2f0f73c6597e7069e12b806804c5e1271ca9ad8bd486816d448f18de78 not found: ID does not exist" containerID="454fdb2f0f73c6597e7069e12b806804c5e1271ca9ad8bd486816d448f18de78" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.417585 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"454fdb2f0f73c6597e7069e12b806804c5e1271ca9ad8bd486816d448f18de78"} err="failed to get container status \"454fdb2f0f73c6597e7069e12b806804c5e1271ca9ad8bd486816d448f18de78\": rpc error: code = NotFound desc = could not find container \"454fdb2f0f73c6597e7069e12b806804c5e1271ca9ad8bd486816d448f18de78\": container with ID starting with 454fdb2f0f73c6597e7069e12b806804c5e1271ca9ad8bd486816d448f18de78 not found: ID does not exist" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.447449 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.447478 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.606634 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.615388 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.637098 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:34:18 crc kubenswrapper[5043]: E1125 07:34:18.637521 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" containerName="proxy-httpd" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.637542 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" containerName="proxy-httpd" Nov 25 07:34:18 crc kubenswrapper[5043]: E1125 07:34:18.637561 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" containerName="ceilometer-central-agent" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.637569 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" containerName="ceilometer-central-agent" Nov 25 07:34:18 crc kubenswrapper[5043]: E1125 07:34:18.637585 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" containerName="sg-core" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.637598 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" containerName="sg-core" 
Nov 25 07:34:18 crc kubenswrapper[5043]: E1125 07:34:18.637642 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" containerName="ceilometer-notification-agent" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.637653 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" containerName="ceilometer-notification-agent" Nov 25 07:34:18 crc kubenswrapper[5043]: E1125 07:34:18.637673 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd1a4fec-6d8d-4ff8-a015-eaf438d76965" containerName="dnsmasq-dns" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.637681 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd1a4fec-6d8d-4ff8-a015-eaf438d76965" containerName="dnsmasq-dns" Nov 25 07:34:18 crc kubenswrapper[5043]: E1125 07:34:18.637693 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd1a4fec-6d8d-4ff8-a015-eaf438d76965" containerName="init" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.637701 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd1a4fec-6d8d-4ff8-a015-eaf438d76965" containerName="init" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.637904 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" containerName="ceilometer-notification-agent" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.637931 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd1a4fec-6d8d-4ff8-a015-eaf438d76965" containerName="dnsmasq-dns" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.637954 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" containerName="ceilometer-central-agent" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.637964 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" 
containerName="proxy-httpd" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.637974 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" containerName="sg-core" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.640791 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.653263 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.653750 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.655816 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.754733 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56r22\" (UniqueName: \"kubernetes.io/projected/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-kube-api-access-56r22\") pod \"ceilometer-0\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " pod="openstack/ceilometer-0" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.754787 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " pod="openstack/ceilometer-0" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.754824 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-scripts\") pod \"ceilometer-0\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " pod="openstack/ceilometer-0" 
Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.754857 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " pod="openstack/ceilometer-0" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.754881 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-config-data\") pod \"ceilometer-0\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " pod="openstack/ceilometer-0" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.754920 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-log-httpd\") pod \"ceilometer-0\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " pod="openstack/ceilometer-0" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.754959 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-run-httpd\") pod \"ceilometer-0\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " pod="openstack/ceilometer-0" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.856680 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56r22\" (UniqueName: \"kubernetes.io/projected/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-kube-api-access-56r22\") pod \"ceilometer-0\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " pod="openstack/ceilometer-0" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.856735 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " pod="openstack/ceilometer-0" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.856770 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-scripts\") pod \"ceilometer-0\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " pod="openstack/ceilometer-0" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.856809 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " pod="openstack/ceilometer-0" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.856831 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-config-data\") pod \"ceilometer-0\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " pod="openstack/ceilometer-0" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.856871 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-log-httpd\") pod \"ceilometer-0\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " pod="openstack/ceilometer-0" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.856900 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-run-httpd\") pod \"ceilometer-0\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " pod="openstack/ceilometer-0" Nov 25 07:34:18 crc 
kubenswrapper[5043]: I1125 07:34:18.857409 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-run-httpd\") pod \"ceilometer-0\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " pod="openstack/ceilometer-0" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.858709 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-log-httpd\") pod \"ceilometer-0\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " pod="openstack/ceilometer-0" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.861127 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " pod="openstack/ceilometer-0" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.861937 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-scripts\") pod \"ceilometer-0\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " pod="openstack/ceilometer-0" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.872185 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-config-data\") pod \"ceilometer-0\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " pod="openstack/ceilometer-0" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.872581 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " pod="openstack/ceilometer-0" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.875068 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56r22\" (UniqueName: \"kubernetes.io/projected/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-kube-api-access-56r22\") pod \"ceilometer-0\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " pod="openstack/ceilometer-0" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.972379 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:34:18 crc kubenswrapper[5043]: I1125 07:34:18.974507 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e7f3fb9-2f36-43e8-913f-ece0a0934cf9" path="/var/lib/kubelet/pods/7e7f3fb9-2f36-43e8-913f-ece0a0934cf9/volumes" Nov 25 07:34:19 crc kubenswrapper[5043]: I1125 07:34:19.448218 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:34:19 crc kubenswrapper[5043]: W1125 07:34:19.453748 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f092b8e_ae0d_4b52_ac0d_323473fceb5e.slice/crio-705b8aa36e01ad10c17ad9615b1cd274605ed598a739a22dd9ee2a3db312c888 WatchSource:0}: Error finding container 705b8aa36e01ad10c17ad9615b1cd274605ed598a739a22dd9ee2a3db312c888: Status 404 returned error can't find the container with id 705b8aa36e01ad10c17ad9615b1cd274605ed598a739a22dd9ee2a3db312c888 Nov 25 07:34:20 crc kubenswrapper[5043]: I1125 07:34:20.298440 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f092b8e-ae0d-4b52-ac0d-323473fceb5e","Type":"ContainerStarted","Data":"6872a52a7564973bb862d9ea40d093259c4d2680eb90edcbd4bc429e0f2e248a"} Nov 25 07:34:20 crc kubenswrapper[5043]: I1125 07:34:20.298858 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3f092b8e-ae0d-4b52-ac0d-323473fceb5e","Type":"ContainerStarted","Data":"705b8aa36e01ad10c17ad9615b1cd274605ed598a739a22dd9ee2a3db312c888"} Nov 25 07:34:20 crc kubenswrapper[5043]: I1125 07:34:20.324272 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-774d658888-zs7d4" podUID="cddd01e7-479d-4917-a3e7-914cf051fcd0" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.149:9311/healthcheck\": read tcp 10.217.0.2:53804->10.217.0.149:9311: read: connection reset by peer" Nov 25 07:34:20 crc kubenswrapper[5043]: I1125 07:34:20.324925 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-774d658888-zs7d4" podUID="cddd01e7-479d-4917-a3e7-914cf051fcd0" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.149:9311/healthcheck\": read tcp 10.217.0.2:53810->10.217.0.149:9311: read: connection reset by peer" Nov 25 07:34:20 crc kubenswrapper[5043]: I1125 07:34:20.837089 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-774d658888-zs7d4" Nov 25 07:34:20 crc kubenswrapper[5043]: I1125 07:34:20.895595 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cddd01e7-479d-4917-a3e7-914cf051fcd0-combined-ca-bundle\") pod \"cddd01e7-479d-4917-a3e7-914cf051fcd0\" (UID: \"cddd01e7-479d-4917-a3e7-914cf051fcd0\") " Nov 25 07:34:20 crc kubenswrapper[5043]: I1125 07:34:20.895699 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cddd01e7-479d-4917-a3e7-914cf051fcd0-logs\") pod \"cddd01e7-479d-4917-a3e7-914cf051fcd0\" (UID: \"cddd01e7-479d-4917-a3e7-914cf051fcd0\") " Nov 25 07:34:20 crc kubenswrapper[5043]: I1125 07:34:20.895730 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvb8w\" (UniqueName: \"kubernetes.io/projected/cddd01e7-479d-4917-a3e7-914cf051fcd0-kube-api-access-hvb8w\") pod \"cddd01e7-479d-4917-a3e7-914cf051fcd0\" (UID: \"cddd01e7-479d-4917-a3e7-914cf051fcd0\") " Nov 25 07:34:20 crc kubenswrapper[5043]: I1125 07:34:20.895850 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cddd01e7-479d-4917-a3e7-914cf051fcd0-config-data-custom\") pod \"cddd01e7-479d-4917-a3e7-914cf051fcd0\" (UID: \"cddd01e7-479d-4917-a3e7-914cf051fcd0\") " Nov 25 07:34:20 crc kubenswrapper[5043]: I1125 07:34:20.895879 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cddd01e7-479d-4917-a3e7-914cf051fcd0-config-data\") pod \"cddd01e7-479d-4917-a3e7-914cf051fcd0\" (UID: \"cddd01e7-479d-4917-a3e7-914cf051fcd0\") " Nov 25 07:34:20 crc kubenswrapper[5043]: I1125 07:34:20.896929 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/cddd01e7-479d-4917-a3e7-914cf051fcd0-logs" (OuterVolumeSpecName: "logs") pod "cddd01e7-479d-4917-a3e7-914cf051fcd0" (UID: "cddd01e7-479d-4917-a3e7-914cf051fcd0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:34:20 crc kubenswrapper[5043]: I1125 07:34:20.902999 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cddd01e7-479d-4917-a3e7-914cf051fcd0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cddd01e7-479d-4917-a3e7-914cf051fcd0" (UID: "cddd01e7-479d-4917-a3e7-914cf051fcd0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:20 crc kubenswrapper[5043]: I1125 07:34:20.906107 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cddd01e7-479d-4917-a3e7-914cf051fcd0-kube-api-access-hvb8w" (OuterVolumeSpecName: "kube-api-access-hvb8w") pod "cddd01e7-479d-4917-a3e7-914cf051fcd0" (UID: "cddd01e7-479d-4917-a3e7-914cf051fcd0"). InnerVolumeSpecName "kube-api-access-hvb8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:34:20 crc kubenswrapper[5043]: I1125 07:34:20.972796 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cddd01e7-479d-4917-a3e7-914cf051fcd0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cddd01e7-479d-4917-a3e7-914cf051fcd0" (UID: "cddd01e7-479d-4917-a3e7-914cf051fcd0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:20 crc kubenswrapper[5043]: I1125 07:34:20.997994 5043 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cddd01e7-479d-4917-a3e7-914cf051fcd0-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:20 crc kubenswrapper[5043]: I1125 07:34:20.998027 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cddd01e7-479d-4917-a3e7-914cf051fcd0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:20 crc kubenswrapper[5043]: I1125 07:34:20.998043 5043 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cddd01e7-479d-4917-a3e7-914cf051fcd0-logs\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:20 crc kubenswrapper[5043]: I1125 07:34:20.998055 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvb8w\" (UniqueName: \"kubernetes.io/projected/cddd01e7-479d-4917-a3e7-914cf051fcd0-kube-api-access-hvb8w\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:20 crc kubenswrapper[5043]: I1125 07:34:20.998705 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cddd01e7-479d-4917-a3e7-914cf051fcd0-config-data" (OuterVolumeSpecName: "config-data") pod "cddd01e7-479d-4917-a3e7-914cf051fcd0" (UID: "cddd01e7-479d-4917-a3e7-914cf051fcd0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:21 crc kubenswrapper[5043]: I1125 07:34:21.099573 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cddd01e7-479d-4917-a3e7-914cf051fcd0-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:21 crc kubenswrapper[5043]: I1125 07:34:21.309633 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f092b8e-ae0d-4b52-ac0d-323473fceb5e","Type":"ContainerStarted","Data":"4319d22e85ba6a8f91458a4d12bbb6ca13bf354322efa800a1ae699390708d5d"} Nov 25 07:34:21 crc kubenswrapper[5043]: I1125 07:34:21.311399 5043 generic.go:334] "Generic (PLEG): container finished" podID="cddd01e7-479d-4917-a3e7-914cf051fcd0" containerID="b9897c40811d196acce9fa7c4386b05529caafdbe5476a14f496848e499ddc6b" exitCode=0 Nov 25 07:34:21 crc kubenswrapper[5043]: I1125 07:34:21.311442 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-774d658888-zs7d4" event={"ID":"cddd01e7-479d-4917-a3e7-914cf051fcd0","Type":"ContainerDied","Data":"b9897c40811d196acce9fa7c4386b05529caafdbe5476a14f496848e499ddc6b"} Nov 25 07:34:21 crc kubenswrapper[5043]: I1125 07:34:21.311490 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-774d658888-zs7d4" event={"ID":"cddd01e7-479d-4917-a3e7-914cf051fcd0","Type":"ContainerDied","Data":"ff50d4f1bc85d719c316e3c8400b27faa4ca00827b870219e2e5527d73b548ec"} Nov 25 07:34:21 crc kubenswrapper[5043]: I1125 07:34:21.311509 5043 scope.go:117] "RemoveContainer" containerID="b9897c40811d196acce9fa7c4386b05529caafdbe5476a14f496848e499ddc6b" Nov 25 07:34:21 crc kubenswrapper[5043]: I1125 07:34:21.311460 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-774d658888-zs7d4" Nov 25 07:34:21 crc kubenswrapper[5043]: I1125 07:34:21.342954 5043 scope.go:117] "RemoveContainer" containerID="9e4805cb391773500632e38c9d12656e8fa86efb50cd12e6d14ee1cf90011b1f" Nov 25 07:34:21 crc kubenswrapper[5043]: I1125 07:34:21.420964 5043 scope.go:117] "RemoveContainer" containerID="b9897c40811d196acce9fa7c4386b05529caafdbe5476a14f496848e499ddc6b" Nov 25 07:34:21 crc kubenswrapper[5043]: E1125 07:34:21.421383 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9897c40811d196acce9fa7c4386b05529caafdbe5476a14f496848e499ddc6b\": container with ID starting with b9897c40811d196acce9fa7c4386b05529caafdbe5476a14f496848e499ddc6b not found: ID does not exist" containerID="b9897c40811d196acce9fa7c4386b05529caafdbe5476a14f496848e499ddc6b" Nov 25 07:34:21 crc kubenswrapper[5043]: I1125 07:34:21.421420 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9897c40811d196acce9fa7c4386b05529caafdbe5476a14f496848e499ddc6b"} err="failed to get container status \"b9897c40811d196acce9fa7c4386b05529caafdbe5476a14f496848e499ddc6b\": rpc error: code = NotFound desc = could not find container \"b9897c40811d196acce9fa7c4386b05529caafdbe5476a14f496848e499ddc6b\": container with ID starting with b9897c40811d196acce9fa7c4386b05529caafdbe5476a14f496848e499ddc6b not found: ID does not exist" Nov 25 07:34:21 crc kubenswrapper[5043]: I1125 07:34:21.421451 5043 scope.go:117] "RemoveContainer" containerID="9e4805cb391773500632e38c9d12656e8fa86efb50cd12e6d14ee1cf90011b1f" Nov 25 07:34:21 crc kubenswrapper[5043]: E1125 07:34:21.421814 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e4805cb391773500632e38c9d12656e8fa86efb50cd12e6d14ee1cf90011b1f\": container with ID starting with 
9e4805cb391773500632e38c9d12656e8fa86efb50cd12e6d14ee1cf90011b1f not found: ID does not exist" containerID="9e4805cb391773500632e38c9d12656e8fa86efb50cd12e6d14ee1cf90011b1f" Nov 25 07:34:21 crc kubenswrapper[5043]: I1125 07:34:21.421861 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e4805cb391773500632e38c9d12656e8fa86efb50cd12e6d14ee1cf90011b1f"} err="failed to get container status \"9e4805cb391773500632e38c9d12656e8fa86efb50cd12e6d14ee1cf90011b1f\": rpc error: code = NotFound desc = could not find container \"9e4805cb391773500632e38c9d12656e8fa86efb50cd12e6d14ee1cf90011b1f\": container with ID starting with 9e4805cb391773500632e38c9d12656e8fa86efb50cd12e6d14ee1cf90011b1f not found: ID does not exist" Nov 25 07:34:21 crc kubenswrapper[5043]: I1125 07:34:21.422206 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-774d658888-zs7d4"] Nov 25 07:34:21 crc kubenswrapper[5043]: I1125 07:34:21.429495 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-774d658888-zs7d4"] Nov 25 07:34:22 crc kubenswrapper[5043]: I1125 07:34:22.133783 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7776d59f89-pzprs" Nov 25 07:34:22 crc kubenswrapper[5043]: I1125 07:34:22.207686 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f8f5cc67-bj7r7"] Nov 25 07:34:22 crc kubenswrapper[5043]: I1125 07:34:22.211685 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" podUID="0f3fb8fb-1900-4301-92da-6665f0007d7e" containerName="dnsmasq-dns" containerID="cri-o://8a8b9353ff6d88c1bf6e611406bf15d298c57793c1f460022c75c6b95af9a995" gracePeriod=10 Nov 25 07:34:22 crc kubenswrapper[5043]: I1125 07:34:22.323344 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3f092b8e-ae0d-4b52-ac0d-323473fceb5e","Type":"ContainerStarted","Data":"466a080090b48a3fefe6fb076d14921ca4061e98889272038841578991ec2c7f"} Nov 25 07:34:22 crc kubenswrapper[5043]: I1125 07:34:22.325659 5043 generic.go:334] "Generic (PLEG): container finished" podID="2de64291-b46f-4ba3-bdec-a3bad5873881" containerID="75f2592db3c5a8441facc75e0de58769459996f5d7f711900a2a1957128adaa9" exitCode=0 Nov 25 07:34:22 crc kubenswrapper[5043]: I1125 07:34:22.325758 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jzr78" event={"ID":"2de64291-b46f-4ba3-bdec-a3bad5873881","Type":"ContainerDied","Data":"75f2592db3c5a8441facc75e0de58769459996f5d7f711900a2a1957128adaa9"} Nov 25 07:34:22 crc kubenswrapper[5043]: I1125 07:34:22.729879 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" Nov 25 07:34:22 crc kubenswrapper[5043]: I1125 07:34:22.846804 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f3fb8fb-1900-4301-92da-6665f0007d7e-ovsdbserver-sb\") pod \"0f3fb8fb-1900-4301-92da-6665f0007d7e\" (UID: \"0f3fb8fb-1900-4301-92da-6665f0007d7e\") " Nov 25 07:34:22 crc kubenswrapper[5043]: I1125 07:34:22.847186 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f3fb8fb-1900-4301-92da-6665f0007d7e-config\") pod \"0f3fb8fb-1900-4301-92da-6665f0007d7e\" (UID: \"0f3fb8fb-1900-4301-92da-6665f0007d7e\") " Nov 25 07:34:22 crc kubenswrapper[5043]: I1125 07:34:22.847261 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f3fb8fb-1900-4301-92da-6665f0007d7e-ovsdbserver-nb\") pod \"0f3fb8fb-1900-4301-92da-6665f0007d7e\" (UID: \"0f3fb8fb-1900-4301-92da-6665f0007d7e\") " Nov 25 07:34:22 crc kubenswrapper[5043]: I1125 
07:34:22.847308 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f3fb8fb-1900-4301-92da-6665f0007d7e-dns-svc\") pod \"0f3fb8fb-1900-4301-92da-6665f0007d7e\" (UID: \"0f3fb8fb-1900-4301-92da-6665f0007d7e\") " Nov 25 07:34:22 crc kubenswrapper[5043]: I1125 07:34:22.847388 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czlcf\" (UniqueName: \"kubernetes.io/projected/0f3fb8fb-1900-4301-92da-6665f0007d7e-kube-api-access-czlcf\") pod \"0f3fb8fb-1900-4301-92da-6665f0007d7e\" (UID: \"0f3fb8fb-1900-4301-92da-6665f0007d7e\") " Nov 25 07:34:22 crc kubenswrapper[5043]: I1125 07:34:22.867938 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f3fb8fb-1900-4301-92da-6665f0007d7e-kube-api-access-czlcf" (OuterVolumeSpecName: "kube-api-access-czlcf") pod "0f3fb8fb-1900-4301-92da-6665f0007d7e" (UID: "0f3fb8fb-1900-4301-92da-6665f0007d7e"). InnerVolumeSpecName "kube-api-access-czlcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:34:22 crc kubenswrapper[5043]: I1125 07:34:22.894475 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f3fb8fb-1900-4301-92da-6665f0007d7e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0f3fb8fb-1900-4301-92da-6665f0007d7e" (UID: "0f3fb8fb-1900-4301-92da-6665f0007d7e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:34:22 crc kubenswrapper[5043]: I1125 07:34:22.904109 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f3fb8fb-1900-4301-92da-6665f0007d7e-config" (OuterVolumeSpecName: "config") pod "0f3fb8fb-1900-4301-92da-6665f0007d7e" (UID: "0f3fb8fb-1900-4301-92da-6665f0007d7e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:34:22 crc kubenswrapper[5043]: I1125 07:34:22.917173 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f3fb8fb-1900-4301-92da-6665f0007d7e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0f3fb8fb-1900-4301-92da-6665f0007d7e" (UID: "0f3fb8fb-1900-4301-92da-6665f0007d7e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:34:22 crc kubenswrapper[5043]: I1125 07:34:22.919120 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f3fb8fb-1900-4301-92da-6665f0007d7e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0f3fb8fb-1900-4301-92da-6665f0007d7e" (UID: "0f3fb8fb-1900-4301-92da-6665f0007d7e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:34:22 crc kubenswrapper[5043]: I1125 07:34:22.949545 5043 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f3fb8fb-1900-4301-92da-6665f0007d7e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:22 crc kubenswrapper[5043]: I1125 07:34:22.949582 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czlcf\" (UniqueName: \"kubernetes.io/projected/0f3fb8fb-1900-4301-92da-6665f0007d7e-kube-api-access-czlcf\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:22 crc kubenswrapper[5043]: I1125 07:34:22.949597 5043 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f3fb8fb-1900-4301-92da-6665f0007d7e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:22 crc kubenswrapper[5043]: I1125 07:34:22.949633 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f3fb8fb-1900-4301-92da-6665f0007d7e-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:22 crc 
kubenswrapper[5043]: I1125 07:34:22.949645 5043 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f3fb8fb-1900-4301-92da-6665f0007d7e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:22 crc kubenswrapper[5043]: I1125 07:34:22.992103 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cddd01e7-479d-4917-a3e7-914cf051fcd0" path="/var/lib/kubelet/pods/cddd01e7-479d-4917-a3e7-914cf051fcd0/volumes" Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.336783 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f092b8e-ae0d-4b52-ac0d-323473fceb5e","Type":"ContainerStarted","Data":"4c754e9d66c57aae5fd89388556b160c4afc152c85dc3d6800e687d4e890e4ec"} Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.337962 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.340294 5043 generic.go:334] "Generic (PLEG): container finished" podID="0f3fb8fb-1900-4301-92da-6665f0007d7e" containerID="8a8b9353ff6d88c1bf6e611406bf15d298c57793c1f460022c75c6b95af9a995" exitCode=0 Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.340490 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.341142 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" event={"ID":"0f3fb8fb-1900-4301-92da-6665f0007d7e","Type":"ContainerDied","Data":"8a8b9353ff6d88c1bf6e611406bf15d298c57793c1f460022c75c6b95af9a995"} Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.341186 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" event={"ID":"0f3fb8fb-1900-4301-92da-6665f0007d7e","Type":"ContainerDied","Data":"c5be10fa51c182f2c08a9ade8330a3735d177dd1e5bb134c10cc6b8226e315a6"} Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.341203 5043 scope.go:117] "RemoveContainer" containerID="8a8b9353ff6d88c1bf6e611406bf15d298c57793c1f460022c75c6b95af9a995" Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.373532 5043 scope.go:117] "RemoveContainer" containerID="6054d4e4c2bebd11facdccac435b38e7313dcca13e32a605a69c649d4040e271" Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.374150 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.302282189 podStartE2EDuration="5.374133505s" podCreationTimestamp="2025-11-25 07:34:18 +0000 UTC" firstStartedPulling="2025-11-25 07:34:19.456647732 +0000 UTC m=+1123.624843473" lastFinishedPulling="2025-11-25 07:34:22.528499068 +0000 UTC m=+1126.696694789" observedRunningTime="2025-11-25 07:34:23.363308086 +0000 UTC m=+1127.531503817" watchObservedRunningTime="2025-11-25 07:34:23.374133505 +0000 UTC m=+1127.542329226" Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.397376 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f8f5cc67-bj7r7"] Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.404480 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f8f5cc67-bj7r7"] Nov 25 
07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.404564 5043 scope.go:117] "RemoveContainer" containerID="8a8b9353ff6d88c1bf6e611406bf15d298c57793c1f460022c75c6b95af9a995" Nov 25 07:34:23 crc kubenswrapper[5043]: E1125 07:34:23.405098 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a8b9353ff6d88c1bf6e611406bf15d298c57793c1f460022c75c6b95af9a995\": container with ID starting with 8a8b9353ff6d88c1bf6e611406bf15d298c57793c1f460022c75c6b95af9a995 not found: ID does not exist" containerID="8a8b9353ff6d88c1bf6e611406bf15d298c57793c1f460022c75c6b95af9a995" Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.405149 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a8b9353ff6d88c1bf6e611406bf15d298c57793c1f460022c75c6b95af9a995"} err="failed to get container status \"8a8b9353ff6d88c1bf6e611406bf15d298c57793c1f460022c75c6b95af9a995\": rpc error: code = NotFound desc = could not find container \"8a8b9353ff6d88c1bf6e611406bf15d298c57793c1f460022c75c6b95af9a995\": container with ID starting with 8a8b9353ff6d88c1bf6e611406bf15d298c57793c1f460022c75c6b95af9a995 not found: ID does not exist" Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.405177 5043 scope.go:117] "RemoveContainer" containerID="6054d4e4c2bebd11facdccac435b38e7313dcca13e32a605a69c649d4040e271" Nov 25 07:34:23 crc kubenswrapper[5043]: E1125 07:34:23.405459 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6054d4e4c2bebd11facdccac435b38e7313dcca13e32a605a69c649d4040e271\": container with ID starting with 6054d4e4c2bebd11facdccac435b38e7313dcca13e32a605a69c649d4040e271 not found: ID does not exist" containerID="6054d4e4c2bebd11facdccac435b38e7313dcca13e32a605a69c649d4040e271" Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.405497 5043 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"6054d4e4c2bebd11facdccac435b38e7313dcca13e32a605a69c649d4040e271"} err="failed to get container status \"6054d4e4c2bebd11facdccac435b38e7313dcca13e32a605a69c649d4040e271\": rpc error: code = NotFound desc = could not find container \"6054d4e4c2bebd11facdccac435b38e7313dcca13e32a605a69c649d4040e271\": container with ID starting with 6054d4e4c2bebd11facdccac435b38e7313dcca13e32a605a69c649d4040e271 not found: ID does not exist" Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.632049 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jzr78" Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.764904 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de64291-b46f-4ba3-bdec-a3bad5873881-combined-ca-bundle\") pod \"2de64291-b46f-4ba3-bdec-a3bad5873881\" (UID: \"2de64291-b46f-4ba3-bdec-a3bad5873881\") " Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.764950 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2de64291-b46f-4ba3-bdec-a3bad5873881-scripts\") pod \"2de64291-b46f-4ba3-bdec-a3bad5873881\" (UID: \"2de64291-b46f-4ba3-bdec-a3bad5873881\") " Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.765000 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2de64291-b46f-4ba3-bdec-a3bad5873881-db-sync-config-data\") pod \"2de64291-b46f-4ba3-bdec-a3bad5873881\" (UID: \"2de64291-b46f-4ba3-bdec-a3bad5873881\") " Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.765047 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2de64291-b46f-4ba3-bdec-a3bad5873881-config-data\") pod 
\"2de64291-b46f-4ba3-bdec-a3bad5873881\" (UID: \"2de64291-b46f-4ba3-bdec-a3bad5873881\") " Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.765063 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2de64291-b46f-4ba3-bdec-a3bad5873881-etc-machine-id\") pod \"2de64291-b46f-4ba3-bdec-a3bad5873881\" (UID: \"2de64291-b46f-4ba3-bdec-a3bad5873881\") " Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.765308 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lkch\" (UniqueName: \"kubernetes.io/projected/2de64291-b46f-4ba3-bdec-a3bad5873881-kube-api-access-7lkch\") pod \"2de64291-b46f-4ba3-bdec-a3bad5873881\" (UID: \"2de64291-b46f-4ba3-bdec-a3bad5873881\") " Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.765752 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de64291-b46f-4ba3-bdec-a3bad5873881-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2de64291-b46f-4ba3-bdec-a3bad5873881" (UID: "2de64291-b46f-4ba3-bdec-a3bad5873881"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.766302 5043 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2de64291-b46f-4ba3-bdec-a3bad5873881-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.771003 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2de64291-b46f-4ba3-bdec-a3bad5873881-scripts" (OuterVolumeSpecName: "scripts") pod "2de64291-b46f-4ba3-bdec-a3bad5873881" (UID: "2de64291-b46f-4ba3-bdec-a3bad5873881"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.787560 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2de64291-b46f-4ba3-bdec-a3bad5873881-kube-api-access-7lkch" (OuterVolumeSpecName: "kube-api-access-7lkch") pod "2de64291-b46f-4ba3-bdec-a3bad5873881" (UID: "2de64291-b46f-4ba3-bdec-a3bad5873881"). InnerVolumeSpecName "kube-api-access-7lkch". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.787834 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2de64291-b46f-4ba3-bdec-a3bad5873881-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2de64291-b46f-4ba3-bdec-a3bad5873881" (UID: "2de64291-b46f-4ba3-bdec-a3bad5873881"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.808999 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2de64291-b46f-4ba3-bdec-a3bad5873881-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2de64291-b46f-4ba3-bdec-a3bad5873881" (UID: "2de64291-b46f-4ba3-bdec-a3bad5873881"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.817115 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2de64291-b46f-4ba3-bdec-a3bad5873881-config-data" (OuterVolumeSpecName: "config-data") pod "2de64291-b46f-4ba3-bdec-a3bad5873881" (UID: "2de64291-b46f-4ba3-bdec-a3bad5873881"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.868574 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2de64291-b46f-4ba3-bdec-a3bad5873881-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.868665 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2de64291-b46f-4ba3-bdec-a3bad5873881-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.868678 5043 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2de64291-b46f-4ba3-bdec-a3bad5873881-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.868690 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2de64291-b46f-4ba3-bdec-a3bad5873881-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:23 crc kubenswrapper[5043]: I1125 07:34:23.868735 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lkch\" (UniqueName: \"kubernetes.io/projected/2de64291-b46f-4ba3-bdec-a3bad5873881-kube-api-access-7lkch\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.354637 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-jzr78" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.354669 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jzr78" event={"ID":"2de64291-b46f-4ba3-bdec-a3bad5873881","Type":"ContainerDied","Data":"e04cac920939a082c45c511754431868f16232d2938bfd58addf0aab4cc417c8"} Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.356211 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e04cac920939a082c45c511754431868f16232d2938bfd58addf0aab4cc417c8" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.629821 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 07:34:24 crc kubenswrapper[5043]: E1125 07:34:24.630178 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cddd01e7-479d-4917-a3e7-914cf051fcd0" containerName="barbican-api-log" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.630195 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="cddd01e7-479d-4917-a3e7-914cf051fcd0" containerName="barbican-api-log" Nov 25 07:34:24 crc kubenswrapper[5043]: E1125 07:34:24.630207 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2de64291-b46f-4ba3-bdec-a3bad5873881" containerName="cinder-db-sync" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.630213 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="2de64291-b46f-4ba3-bdec-a3bad5873881" containerName="cinder-db-sync" Nov 25 07:34:24 crc kubenswrapper[5043]: E1125 07:34:24.630223 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cddd01e7-479d-4917-a3e7-914cf051fcd0" containerName="barbican-api" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.630229 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="cddd01e7-479d-4917-a3e7-914cf051fcd0" containerName="barbican-api" Nov 25 07:34:24 crc kubenswrapper[5043]: E1125 07:34:24.630243 5043 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f3fb8fb-1900-4301-92da-6665f0007d7e" containerName="init" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.630249 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f3fb8fb-1900-4301-92da-6665f0007d7e" containerName="init" Nov 25 07:34:24 crc kubenswrapper[5043]: E1125 07:34:24.630263 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f3fb8fb-1900-4301-92da-6665f0007d7e" containerName="dnsmasq-dns" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.630269 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f3fb8fb-1900-4301-92da-6665f0007d7e" containerName="dnsmasq-dns" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.630411 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="cddd01e7-479d-4917-a3e7-914cf051fcd0" containerName="barbican-api" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.630423 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="2de64291-b46f-4ba3-bdec-a3bad5873881" containerName="cinder-db-sync" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.630445 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="cddd01e7-479d-4917-a3e7-914cf051fcd0" containerName="barbican-api-log" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.630455 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f3fb8fb-1900-4301-92da-6665f0007d7e" containerName="dnsmasq-dns" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.631287 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.634108 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ctkqr" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.634178 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.634321 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.634373 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.667118 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bc89f58d7-rsnk9"] Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.668622 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.697633 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.730667 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bc89f58d7-rsnk9"] Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.785553 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04826333-7613-4f2c-b240-4864828ddcc0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"04826333-7613-4f2c-b240-4864828ddcc0\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.785631 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64e4603e-878b-49db-9ed3-c4980a27a768-config\") pod \"dnsmasq-dns-7bc89f58d7-rsnk9\" (UID: \"64e4603e-878b-49db-9ed3-c4980a27a768\") " pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.785677 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04826333-7613-4f2c-b240-4864828ddcc0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"04826333-7613-4f2c-b240-4864828ddcc0\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.785697 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04826333-7613-4f2c-b240-4864828ddcc0-scripts\") pod \"cinder-scheduler-0\" (UID: \"04826333-7613-4f2c-b240-4864828ddcc0\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.785796 5043 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64e4603e-878b-49db-9ed3-c4980a27a768-ovsdbserver-sb\") pod \"dnsmasq-dns-7bc89f58d7-rsnk9\" (UID: \"64e4603e-878b-49db-9ed3-c4980a27a768\") " pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.785830 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsfxw\" (UniqueName: \"kubernetes.io/projected/64e4603e-878b-49db-9ed3-c4980a27a768-kube-api-access-tsfxw\") pod \"dnsmasq-dns-7bc89f58d7-rsnk9\" (UID: \"64e4603e-878b-49db-9ed3-c4980a27a768\") " pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.785883 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58x84\" (UniqueName: \"kubernetes.io/projected/04826333-7613-4f2c-b240-4864828ddcc0-kube-api-access-58x84\") pod \"cinder-scheduler-0\" (UID: \"04826333-7613-4f2c-b240-4864828ddcc0\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.785908 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04826333-7613-4f2c-b240-4864828ddcc0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"04826333-7613-4f2c-b240-4864828ddcc0\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.785970 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64e4603e-878b-49db-9ed3-c4980a27a768-dns-svc\") pod \"dnsmasq-dns-7bc89f58d7-rsnk9\" (UID: \"64e4603e-878b-49db-9ed3-c4980a27a768\") " pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 
07:34:24.786007 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64e4603e-878b-49db-9ed3-c4980a27a768-ovsdbserver-nb\") pod \"dnsmasq-dns-7bc89f58d7-rsnk9\" (UID: \"64e4603e-878b-49db-9ed3-c4980a27a768\") " pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.786028 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04826333-7613-4f2c-b240-4864828ddcc0-config-data\") pod \"cinder-scheduler-0\" (UID: \"04826333-7613-4f2c-b240-4864828ddcc0\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.827748 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.829119 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.831560 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.882514 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.886975 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04826333-7613-4f2c-b240-4864828ddcc0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"04826333-7613-4f2c-b240-4864828ddcc0\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.887012 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64e4603e-878b-49db-9ed3-c4980a27a768-config\") pod 
\"dnsmasq-dns-7bc89f58d7-rsnk9\" (UID: \"64e4603e-878b-49db-9ed3-c4980a27a768\") " pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.887081 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04826333-7613-4f2c-b240-4864828ddcc0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"04826333-7613-4f2c-b240-4864828ddcc0\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.887097 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04826333-7613-4f2c-b240-4864828ddcc0-scripts\") pod \"cinder-scheduler-0\" (UID: \"04826333-7613-4f2c-b240-4864828ddcc0\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.887137 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64e4603e-878b-49db-9ed3-c4980a27a768-ovsdbserver-sb\") pod \"dnsmasq-dns-7bc89f58d7-rsnk9\" (UID: \"64e4603e-878b-49db-9ed3-c4980a27a768\") " pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.887156 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsfxw\" (UniqueName: \"kubernetes.io/projected/64e4603e-878b-49db-9ed3-c4980a27a768-kube-api-access-tsfxw\") pod \"dnsmasq-dns-7bc89f58d7-rsnk9\" (UID: \"64e4603e-878b-49db-9ed3-c4980a27a768\") " pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.887182 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58x84\" (UniqueName: \"kubernetes.io/projected/04826333-7613-4f2c-b240-4864828ddcc0-kube-api-access-58x84\") pod \"cinder-scheduler-0\" (UID: \"04826333-7613-4f2c-b240-4864828ddcc0\") " 
pod="openstack/cinder-scheduler-0" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.887200 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04826333-7613-4f2c-b240-4864828ddcc0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"04826333-7613-4f2c-b240-4864828ddcc0\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.887233 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04826333-7613-4f2c-b240-4864828ddcc0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"04826333-7613-4f2c-b240-4864828ddcc0\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.887245 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64e4603e-878b-49db-9ed3-c4980a27a768-dns-svc\") pod \"dnsmasq-dns-7bc89f58d7-rsnk9\" (UID: \"64e4603e-878b-49db-9ed3-c4980a27a768\") " pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.887582 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04826333-7613-4f2c-b240-4864828ddcc0-config-data\") pod \"cinder-scheduler-0\" (UID: \"04826333-7613-4f2c-b240-4864828ddcc0\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.887634 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64e4603e-878b-49db-9ed3-c4980a27a768-ovsdbserver-nb\") pod \"dnsmasq-dns-7bc89f58d7-rsnk9\" (UID: \"64e4603e-878b-49db-9ed3-c4980a27a768\") " pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.887892 5043 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64e4603e-878b-49db-9ed3-c4980a27a768-config\") pod \"dnsmasq-dns-7bc89f58d7-rsnk9\" (UID: \"64e4603e-878b-49db-9ed3-c4980a27a768\") " pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.889077 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64e4603e-878b-49db-9ed3-c4980a27a768-ovsdbserver-nb\") pod \"dnsmasq-dns-7bc89f58d7-rsnk9\" (UID: \"64e4603e-878b-49db-9ed3-c4980a27a768\") " pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.889191 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64e4603e-878b-49db-9ed3-c4980a27a768-ovsdbserver-sb\") pod \"dnsmasq-dns-7bc89f58d7-rsnk9\" (UID: \"64e4603e-878b-49db-9ed3-c4980a27a768\") " pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.889551 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64e4603e-878b-49db-9ed3-c4980a27a768-dns-svc\") pod \"dnsmasq-dns-7bc89f58d7-rsnk9\" (UID: \"64e4603e-878b-49db-9ed3-c4980a27a768\") " pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.893696 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04826333-7613-4f2c-b240-4864828ddcc0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"04826333-7613-4f2c-b240-4864828ddcc0\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.894685 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/04826333-7613-4f2c-b240-4864828ddcc0-config-data\") pod \"cinder-scheduler-0\" (UID: \"04826333-7613-4f2c-b240-4864828ddcc0\") " pod="openstack/cinder-scheduler-0"
Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.906246 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04826333-7613-4f2c-b240-4864828ddcc0-scripts\") pod \"cinder-scheduler-0\" (UID: \"04826333-7613-4f2c-b240-4864828ddcc0\") " pod="openstack/cinder-scheduler-0"
Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.907160 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04826333-7613-4f2c-b240-4864828ddcc0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"04826333-7613-4f2c-b240-4864828ddcc0\") " pod="openstack/cinder-scheduler-0"
Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.911549 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsfxw\" (UniqueName: \"kubernetes.io/projected/64e4603e-878b-49db-9ed3-c4980a27a768-kube-api-access-tsfxw\") pod \"dnsmasq-dns-7bc89f58d7-rsnk9\" (UID: \"64e4603e-878b-49db-9ed3-c4980a27a768\") " pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9"
Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.916300 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58x84\" (UniqueName: \"kubernetes.io/projected/04826333-7613-4f2c-b240-4864828ddcc0-kube-api-access-58x84\") pod \"cinder-scheduler-0\" (UID: \"04826333-7613-4f2c-b240-4864828ddcc0\") " pod="openstack/cinder-scheduler-0"
Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.961985 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.989420 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-config-data\") pod \"cinder-api-0\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") " pod="openstack/cinder-api-0"
Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.989661 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bh5b\" (UniqueName: \"kubernetes.io/projected/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-kube-api-access-9bh5b\") pod \"cinder-api-0\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") " pod="openstack/cinder-api-0"
Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.989691 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") " pod="openstack/cinder-api-0"
Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.989723 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-logs\") pod \"cinder-api-0\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") " pod="openstack/cinder-api-0"
Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.989756 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-config-data-custom\") pod \"cinder-api-0\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") " pod="openstack/cinder-api-0"
Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.989803 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-scripts\") pod \"cinder-api-0\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") " pod="openstack/cinder-api-0"
Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.989830 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-etc-machine-id\") pod \"cinder-api-0\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") " pod="openstack/cinder-api-0"
Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.995030 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f3fb8fb-1900-4301-92da-6665f0007d7e" path="/var/lib/kubelet/pods/0f3fb8fb-1900-4301-92da-6665f0007d7e/volumes"
Nov 25 07:34:24 crc kubenswrapper[5043]: I1125 07:34:24.996940 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9"
Nov 25 07:34:25 crc kubenswrapper[5043]: I1125 07:34:25.092336 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-config-data\") pod \"cinder-api-0\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") " pod="openstack/cinder-api-0"
Nov 25 07:34:25 crc kubenswrapper[5043]: I1125 07:34:25.092547 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bh5b\" (UniqueName: \"kubernetes.io/projected/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-kube-api-access-9bh5b\") pod \"cinder-api-0\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") " pod="openstack/cinder-api-0"
Nov 25 07:34:25 crc kubenswrapper[5043]: I1125 07:34:25.092781 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") " pod="openstack/cinder-api-0"
Nov 25 07:34:25 crc kubenswrapper[5043]: I1125 07:34:25.092857 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-logs\") pod \"cinder-api-0\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") " pod="openstack/cinder-api-0"
Nov 25 07:34:25 crc kubenswrapper[5043]: I1125 07:34:25.093042 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-config-data-custom\") pod \"cinder-api-0\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") " pod="openstack/cinder-api-0"
Nov 25 07:34:25 crc kubenswrapper[5043]: I1125 07:34:25.093160 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-scripts\") pod \"cinder-api-0\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") " pod="openstack/cinder-api-0"
Nov 25 07:34:25 crc kubenswrapper[5043]: I1125 07:34:25.093250 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-etc-machine-id\") pod \"cinder-api-0\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") " pod="openstack/cinder-api-0"
Nov 25 07:34:25 crc kubenswrapper[5043]: I1125 07:34:25.093483 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-etc-machine-id\") pod \"cinder-api-0\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") " pod="openstack/cinder-api-0"
Nov 25 07:34:25 crc kubenswrapper[5043]: I1125 07:34:25.096417 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-logs\") pod \"cinder-api-0\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") " pod="openstack/cinder-api-0"
Nov 25 07:34:25 crc kubenswrapper[5043]: I1125 07:34:25.102128 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") " pod="openstack/cinder-api-0"
Nov 25 07:34:25 crc kubenswrapper[5043]: I1125 07:34:25.111721 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-config-data\") pod \"cinder-api-0\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") " pod="openstack/cinder-api-0"
Nov 25 07:34:25 crc kubenswrapper[5043]: I1125 07:34:25.164893 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-scripts\") pod \"cinder-api-0\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") " pod="openstack/cinder-api-0"
Nov 25 07:34:25 crc kubenswrapper[5043]: I1125 07:34:25.170202 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bh5b\" (UniqueName: \"kubernetes.io/projected/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-kube-api-access-9bh5b\") pod \"cinder-api-0\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") " pod="openstack/cinder-api-0"
Nov 25 07:34:25 crc kubenswrapper[5043]: I1125 07:34:25.170225 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-config-data-custom\") pod \"cinder-api-0\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") " pod="openstack/cinder-api-0"
Nov 25 07:34:25 crc kubenswrapper[5043]: I1125 07:34:25.189499 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 25 07:34:25 crc kubenswrapper[5043]: I1125 07:34:25.484974 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bc89f58d7-rsnk9"]
Nov 25 07:34:25 crc kubenswrapper[5043]: W1125 07:34:25.499420 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64e4603e_878b_49db_9ed3_c4980a27a768.slice/crio-7db53cd15158629e49681b9c7877ae0ef0bd15d0a25a8262bea395d6d68bd47d WatchSource:0}: Error finding container 7db53cd15158629e49681b9c7877ae0ef0bd15d0a25a8262bea395d6d68bd47d: Status 404 returned error can't find the container with id 7db53cd15158629e49681b9c7877ae0ef0bd15d0a25a8262bea395d6d68bd47d
Nov 25 07:34:25 crc kubenswrapper[5043]: I1125 07:34:25.512037 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 25 07:34:25 crc kubenswrapper[5043]: W1125 07:34:25.523230 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04826333_7613_4f2c_b240_4864828ddcc0.slice/crio-e04ce6ecf97989c27298d5c07bb7286a68739b4805fec38476ffe685964e9c21 WatchSource:0}: Error finding container e04ce6ecf97989c27298d5c07bb7286a68739b4805fec38476ffe685964e9c21: Status 404 returned error can't find the container with id e04ce6ecf97989c27298d5c07bb7286a68739b4805fec38476ffe685964e9c21
Nov 25 07:34:25 crc kubenswrapper[5043]: I1125 07:34:25.708574 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Nov 25 07:34:25 crc kubenswrapper[5043]: W1125 07:34:25.711778 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73c0ffb1_a41b_4c6c_b5e4_ca32149c5277.slice/crio-4060b4e5ed585ce17d6f3f4e4ee0e5c894b995d08db847934cb30325cbcdd398 WatchSource:0}: Error finding container 4060b4e5ed585ce17d6f3f4e4ee0e5c894b995d08db847934cb30325cbcdd398: Status 404 returned error can't find the container with id 4060b4e5ed585ce17d6f3f4e4ee0e5c894b995d08db847934cb30325cbcdd398
Nov 25 07:34:26 crc kubenswrapper[5043]: I1125 07:34:26.419731 5043 generic.go:334] "Generic (PLEG): container finished" podID="64e4603e-878b-49db-9ed3-c4980a27a768" containerID="530903458d682719b1fb1bbcb60b98d3742740971db9c65339e64fcbee76d52d" exitCode=0
Nov 25 07:34:26 crc kubenswrapper[5043]: I1125 07:34:26.420088 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9" event={"ID":"64e4603e-878b-49db-9ed3-c4980a27a768","Type":"ContainerDied","Data":"530903458d682719b1fb1bbcb60b98d3742740971db9c65339e64fcbee76d52d"}
Nov 25 07:34:26 crc kubenswrapper[5043]: I1125 07:34:26.420120 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9" event={"ID":"64e4603e-878b-49db-9ed3-c4980a27a768","Type":"ContainerStarted","Data":"7db53cd15158629e49681b9c7877ae0ef0bd15d0a25a8262bea395d6d68bd47d"}
Nov 25 07:34:26 crc kubenswrapper[5043]: I1125 07:34:26.426377 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277","Type":"ContainerStarted","Data":"4060b4e5ed585ce17d6f3f4e4ee0e5c894b995d08db847934cb30325cbcdd398"}
Nov 25 07:34:26 crc kubenswrapper[5043]: I1125 07:34:26.430143 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"04826333-7613-4f2c-b240-4864828ddcc0","Type":"ContainerStarted","Data":"e04ce6ecf97989c27298d5c07bb7286a68739b4805fec38476ffe685964e9c21"}
Nov 25 07:34:26 crc kubenswrapper[5043]: I1125 07:34:26.916613 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Nov 25 07:34:27 crc kubenswrapper[5043]: I1125 07:34:27.442887 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"04826333-7613-4f2c-b240-4864828ddcc0","Type":"ContainerStarted","Data":"f673ba9a874f3c7289ec37505a0f9c4c6362732c0a72ee7a33e084ef94c9f5dc"}
Nov 25 07:34:27 crc kubenswrapper[5043]: I1125 07:34:27.445028 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277","Type":"ContainerStarted","Data":"743cd5264c0a1ab4985d63769d4bb982ca83c55c8660b99919027029dbd7dfb2"}
Nov 25 07:34:27 crc kubenswrapper[5043]: I1125 07:34:27.446727 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7f8f5cc67-bj7r7" podUID="0f3fb8fb-1900-4301-92da-6665f0007d7e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout"
Nov 25 07:34:27 crc kubenswrapper[5043]: I1125 07:34:27.448229 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9" event={"ID":"64e4603e-878b-49db-9ed3-c4980a27a768","Type":"ContainerStarted","Data":"e6b6b0755f5b1b89458a8c6c97874bccc41b54b5ba3aecb45bb8b35bff6ad801"}
Nov 25 07:34:27 crc kubenswrapper[5043]: I1125 07:34:27.449159 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9"
Nov 25 07:34:27 crc kubenswrapper[5043]: I1125 07:34:27.470050 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9" podStartSLOduration=3.470030195 podStartE2EDuration="3.470030195s" podCreationTimestamp="2025-11-25 07:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:34:27.469209442 +0000 UTC m=+1131.637405173" watchObservedRunningTime="2025-11-25 07:34:27.470030195 +0000 UTC m=+1131.638225916"
Nov 25 07:34:28 crc kubenswrapper[5043]: I1125 07:34:28.133279 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-b5bc7cfb-2sfts"
Nov 25 07:34:28 crc kubenswrapper[5043]: I1125 07:34:28.300919 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5f67c4b5d4-f96jj"
Nov 25 07:34:28 crc kubenswrapper[5043]: I1125 07:34:28.459488 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"04826333-7613-4f2c-b240-4864828ddcc0","Type":"ContainerStarted","Data":"084c32711fec1c59945c375767b896d0dec1af29e34c96b2dab7ec944e4ac19b"}
Nov 25 07:34:28 crc kubenswrapper[5043]: I1125 07:34:28.463727 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="73c0ffb1-a41b-4c6c-b5e4-ca32149c5277" containerName="cinder-api-log" containerID="cri-o://743cd5264c0a1ab4985d63769d4bb982ca83c55c8660b99919027029dbd7dfb2" gracePeriod=30
Nov 25 07:34:28 crc kubenswrapper[5043]: I1125 07:34:28.464079 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277","Type":"ContainerStarted","Data":"22d54bd1f1cc7475fbd67ff8a0c9c1a84ca6665cda22092c691a74fdbb6dd982"}
Nov 25 07:34:28 crc kubenswrapper[5043]: I1125 07:34:28.464167 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Nov 25 07:34:28 crc kubenswrapper[5043]: I1125 07:34:28.464223 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="73c0ffb1-a41b-4c6c-b5e4-ca32149c5277" containerName="cinder-api" containerID="cri-o://22d54bd1f1cc7475fbd67ff8a0c9c1a84ca6665cda22092c691a74fdbb6dd982" gracePeriod=30
Nov 25 07:34:28 crc kubenswrapper[5043]: I1125 07:34:28.499500 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.466084729 podStartE2EDuration="4.499481801s" podCreationTimestamp="2025-11-25 07:34:24 +0000 UTC" firstStartedPulling="2025-11-25 07:34:25.525990301 +0000 UTC m=+1129.694186022" lastFinishedPulling="2025-11-25 07:34:26.559387373 +0000 UTC m=+1130.727583094" observedRunningTime="2025-11-25 07:34:28.49384322 +0000 UTC m=+1132.662038951" watchObservedRunningTime="2025-11-25 07:34:28.499481801 +0000 UTC m=+1132.667677522"
Nov 25 07:34:28 crc kubenswrapper[5043]: I1125 07:34:28.520542 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.520523445 podStartE2EDuration="4.520523445s" podCreationTimestamp="2025-11-25 07:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:34:28.51961927 +0000 UTC m=+1132.687814991" watchObservedRunningTime="2025-11-25 07:34:28.520523445 +0000 UTC m=+1132.688719166"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.115850 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.181568 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-etc-machine-id\") pod \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") "
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.181692 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-combined-ca-bundle\") pod \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") "
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.181827 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-scripts\") pod \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") "
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.181869 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-config-data\") pod \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") "
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.181891 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-config-data-custom\") pod \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") "
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.181918 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-logs\") pod \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") "
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.182005 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bh5b\" (UniqueName: \"kubernetes.io/projected/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-kube-api-access-9bh5b\") pod \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\" (UID: \"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277\") "
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.183384 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "73c0ffb1-a41b-4c6c-b5e4-ca32149c5277" (UID: "73c0ffb1-a41b-4c6c-b5e4-ca32149c5277"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.184911 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-logs" (OuterVolumeSpecName: "logs") pod "73c0ffb1-a41b-4c6c-b5e4-ca32149c5277" (UID: "73c0ffb1-a41b-4c6c-b5e4-ca32149c5277"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.189877 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-scripts" (OuterVolumeSpecName: "scripts") pod "73c0ffb1-a41b-4c6c-b5e4-ca32149c5277" (UID: "73c0ffb1-a41b-4c6c-b5e4-ca32149c5277"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.190150 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "73c0ffb1-a41b-4c6c-b5e4-ca32149c5277" (UID: "73c0ffb1-a41b-4c6c-b5e4-ca32149c5277"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.207779 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-kube-api-access-9bh5b" (OuterVolumeSpecName: "kube-api-access-9bh5b") pod "73c0ffb1-a41b-4c6c-b5e4-ca32149c5277" (UID: "73c0ffb1-a41b-4c6c-b5e4-ca32149c5277"). InnerVolumeSpecName "kube-api-access-9bh5b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.222816 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73c0ffb1-a41b-4c6c-b5e4-ca32149c5277" (UID: "73c0ffb1-a41b-4c6c-b5e4-ca32149c5277"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.237696 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-config-data" (OuterVolumeSpecName: "config-data") pod "73c0ffb1-a41b-4c6c-b5e4-ca32149c5277" (UID: "73c0ffb1-a41b-4c6c-b5e4-ca32149c5277"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.286319 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.286416 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.286469 5043 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.286530 5043 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-logs\") on node \"crc\" DevicePath \"\""
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.286579 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bh5b\" (UniqueName: \"kubernetes.io/projected/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-kube-api-access-9bh5b\") on node \"crc\" DevicePath \"\""
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.286648 5043 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-etc-machine-id\") on node \"crc\" DevicePath \"\""
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.286698 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.472984 5043 generic.go:334] "Generic (PLEG): container finished" podID="73c0ffb1-a41b-4c6c-b5e4-ca32149c5277" containerID="22d54bd1f1cc7475fbd67ff8a0c9c1a84ca6665cda22092c691a74fdbb6dd982" exitCode=0
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.473019 5043 generic.go:334] "Generic (PLEG): container finished" podID="73c0ffb1-a41b-4c6c-b5e4-ca32149c5277" containerID="743cd5264c0a1ab4985d63769d4bb982ca83c55c8660b99919027029dbd7dfb2" exitCode=143
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.473034 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.473059 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277","Type":"ContainerDied","Data":"22d54bd1f1cc7475fbd67ff8a0c9c1a84ca6665cda22092c691a74fdbb6dd982"}
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.473097 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277","Type":"ContainerDied","Data":"743cd5264c0a1ab4985d63769d4bb982ca83c55c8660b99919027029dbd7dfb2"}
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.473110 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"73c0ffb1-a41b-4c6c-b5e4-ca32149c5277","Type":"ContainerDied","Data":"4060b4e5ed585ce17d6f3f4e4ee0e5c894b995d08db847934cb30325cbcdd398"}
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.473128 5043 scope.go:117] "RemoveContainer" containerID="22d54bd1f1cc7475fbd67ff8a0c9c1a84ca6665cda22092c691a74fdbb6dd982"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.504200 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.523510 5043 scope.go:117] "RemoveContainer" containerID="743cd5264c0a1ab4985d63769d4bb982ca83c55c8660b99919027029dbd7dfb2"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.525502 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.537768 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Nov 25 07:34:29 crc kubenswrapper[5043]: E1125 07:34:29.538184 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73c0ffb1-a41b-4c6c-b5e4-ca32149c5277" containerName="cinder-api-log"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.538199 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="73c0ffb1-a41b-4c6c-b5e4-ca32149c5277" containerName="cinder-api-log"
Nov 25 07:34:29 crc kubenswrapper[5043]: E1125 07:34:29.538240 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73c0ffb1-a41b-4c6c-b5e4-ca32149c5277" containerName="cinder-api"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.538250 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="73c0ffb1-a41b-4c6c-b5e4-ca32149c5277" containerName="cinder-api"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.538469 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="73c0ffb1-a41b-4c6c-b5e4-ca32149c5277" containerName="cinder-api"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.538481 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="73c0ffb1-a41b-4c6c-b5e4-ca32149c5277" containerName="cinder-api-log"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.539650 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.542654 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.542831 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.547490 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.547643 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.548055 5043 scope.go:117] "RemoveContainer" containerID="22d54bd1f1cc7475fbd67ff8a0c9c1a84ca6665cda22092c691a74fdbb6dd982"
Nov 25 07:34:29 crc kubenswrapper[5043]: E1125 07:34:29.548460 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22d54bd1f1cc7475fbd67ff8a0c9c1a84ca6665cda22092c691a74fdbb6dd982\": container with ID starting with 22d54bd1f1cc7475fbd67ff8a0c9c1a84ca6665cda22092c691a74fdbb6dd982 not found: ID does not exist" containerID="22d54bd1f1cc7475fbd67ff8a0c9c1a84ca6665cda22092c691a74fdbb6dd982"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.548495 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22d54bd1f1cc7475fbd67ff8a0c9c1a84ca6665cda22092c691a74fdbb6dd982"} err="failed to get container status \"22d54bd1f1cc7475fbd67ff8a0c9c1a84ca6665cda22092c691a74fdbb6dd982\": rpc error: code = NotFound desc = could not find container \"22d54bd1f1cc7475fbd67ff8a0c9c1a84ca6665cda22092c691a74fdbb6dd982\": container with ID starting with 22d54bd1f1cc7475fbd67ff8a0c9c1a84ca6665cda22092c691a74fdbb6dd982 not found: ID does not exist"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.548524 5043 scope.go:117] "RemoveContainer" containerID="743cd5264c0a1ab4985d63769d4bb982ca83c55c8660b99919027029dbd7dfb2"
Nov 25 07:34:29 crc kubenswrapper[5043]: E1125 07:34:29.548919 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"743cd5264c0a1ab4985d63769d4bb982ca83c55c8660b99919027029dbd7dfb2\": container with ID starting with 743cd5264c0a1ab4985d63769d4bb982ca83c55c8660b99919027029dbd7dfb2 not found: ID does not exist" containerID="743cd5264c0a1ab4985d63769d4bb982ca83c55c8660b99919027029dbd7dfb2"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.548945 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"743cd5264c0a1ab4985d63769d4bb982ca83c55c8660b99919027029dbd7dfb2"} err="failed to get container status \"743cd5264c0a1ab4985d63769d4bb982ca83c55c8660b99919027029dbd7dfb2\": rpc error: code = NotFound desc = could not find container \"743cd5264c0a1ab4985d63769d4bb982ca83c55c8660b99919027029dbd7dfb2\": container with ID starting with 743cd5264c0a1ab4985d63769d4bb982ca83c55c8660b99919027029dbd7dfb2 not found: ID does not exist"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.548959 5043 scope.go:117] "RemoveContainer" containerID="22d54bd1f1cc7475fbd67ff8a0c9c1a84ca6665cda22092c691a74fdbb6dd982"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.549217 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22d54bd1f1cc7475fbd67ff8a0c9c1a84ca6665cda22092c691a74fdbb6dd982"} err="failed to get container status \"22d54bd1f1cc7475fbd67ff8a0c9c1a84ca6665cda22092c691a74fdbb6dd982\": rpc error: code = NotFound desc = could not find container \"22d54bd1f1cc7475fbd67ff8a0c9c1a84ca6665cda22092c691a74fdbb6dd982\": container with ID starting with 22d54bd1f1cc7475fbd67ff8a0c9c1a84ca6665cda22092c691a74fdbb6dd982 not found: ID does not exist"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.549243 5043 scope.go:117] "RemoveContainer" containerID="743cd5264c0a1ab4985d63769d4bb982ca83c55c8660b99919027029dbd7dfb2"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.549527 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"743cd5264c0a1ab4985d63769d4bb982ca83c55c8660b99919027029dbd7dfb2"} err="failed to get container status \"743cd5264c0a1ab4985d63769d4bb982ca83c55c8660b99919027029dbd7dfb2\": rpc error: code = NotFound desc = could not find container \"743cd5264c0a1ab4985d63769d4bb982ca83c55c8660b99919027029dbd7dfb2\": container with ID starting with 743cd5264c0a1ab4985d63769d4bb982ca83c55c8660b99919027029dbd7dfb2 not found: ID does not exist"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.692756 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9afb63aa-ce0f-4365-a4cb-4fd593537095-logs\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.692799 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9afb63aa-ce0f-4365-a4cb-4fd593537095-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.692830 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9afb63aa-ce0f-4365-a4cb-4fd593537095-config-data-custom\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.692998 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh9lq\" (UniqueName: \"kubernetes.io/projected/9afb63aa-ce0f-4365-a4cb-4fd593537095-kube-api-access-hh9lq\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.693072 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9afb63aa-ce0f-4365-a4cb-4fd593537095-config-data\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.693117 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9afb63aa-ce0f-4365-a4cb-4fd593537095-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.693176 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9afb63aa-ce0f-4365-a4cb-4fd593537095-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.693273 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9afb63aa-ce0f-4365-a4cb-4fd593537095-scripts\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.693390 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9afb63aa-ce0f-4365-a4cb-4fd593537095-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.794867 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9afb63aa-ce0f-4365-a4cb-4fd593537095-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.794971 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9afb63aa-ce0f-4365-a4cb-4fd593537095-logs\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.795004 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9afb63aa-ce0f-4365-a4cb-4fd593537095-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.795038 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9afb63aa-ce0f-4365-a4cb-4fd593537095-config-data-custom\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0"
Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.795069 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh9lq\" (UniqueName: \"kubernetes.io/projected/9afb63aa-ce0f-4365-a4cb-4fd593537095-kube-api-access-hh9lq\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0"
Nov 25 07:34:29 crc
kubenswrapper[5043]: I1125 07:34:29.795094 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9afb63aa-ce0f-4365-a4cb-4fd593537095-config-data\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0" Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.795115 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9afb63aa-ce0f-4365-a4cb-4fd593537095-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0" Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.795148 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9afb63aa-ce0f-4365-a4cb-4fd593537095-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0" Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.795187 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9afb63aa-ce0f-4365-a4cb-4fd593537095-scripts\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0" Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.795204 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9afb63aa-ce0f-4365-a4cb-4fd593537095-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0" Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.795617 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9afb63aa-ce0f-4365-a4cb-4fd593537095-logs\") pod \"cinder-api-0\" (UID: 
\"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0" Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.799070 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9afb63aa-ce0f-4365-a4cb-4fd593537095-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0" Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.799161 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9afb63aa-ce0f-4365-a4cb-4fd593537095-config-data-custom\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0" Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.799888 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9afb63aa-ce0f-4365-a4cb-4fd593537095-scripts\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0" Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.800174 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9afb63aa-ce0f-4365-a4cb-4fd593537095-config-data\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0" Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.800295 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9afb63aa-ce0f-4365-a4cb-4fd593537095-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0" Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.801101 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9afb63aa-ce0f-4365-a4cb-4fd593537095-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0" Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.811260 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh9lq\" (UniqueName: \"kubernetes.io/projected/9afb63aa-ce0f-4365-a4cb-4fd593537095-kube-api-access-hh9lq\") pod \"cinder-api-0\" (UID: \"9afb63aa-ce0f-4365-a4cb-4fd593537095\") " pod="openstack/cinder-api-0" Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.942633 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.962411 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 25 07:34:29 crc kubenswrapper[5043]: I1125 07:34:29.987787 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:34:30 crc kubenswrapper[5043]: I1125 07:34:30.207595 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5f67c4b5d4-f96jj" Nov 25 07:34:30 crc kubenswrapper[5043]: I1125 07:34:30.268278 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-b5bc7cfb-2sfts"] Nov 25 07:34:30 crc kubenswrapper[5043]: W1125 07:34:30.430760 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9afb63aa_ce0f_4365_a4cb_4fd593537095.slice/crio-427d2a9b1233898c68dbf9c58a415993735360c6457929be366d1a0455719001 WatchSource:0}: Error finding container 427d2a9b1233898c68dbf9c58a415993735360c6457929be366d1a0455719001: Status 404 returned error can't find the container with id 427d2a9b1233898c68dbf9c58a415993735360c6457929be366d1a0455719001 Nov 25 07:34:30 crc kubenswrapper[5043]: I1125 
07:34:30.433202 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 07:34:30 crc kubenswrapper[5043]: I1125 07:34:30.488731 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9afb63aa-ce0f-4365-a4cb-4fd593537095","Type":"ContainerStarted","Data":"427d2a9b1233898c68dbf9c58a415993735360c6457929be366d1a0455719001"} Nov 25 07:34:30 crc kubenswrapper[5043]: I1125 07:34:30.491176 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-b5bc7cfb-2sfts" podUID="9955ab7e-1d74-461a-a9b2-73e9f82d48fe" containerName="horizon-log" containerID="cri-o://7c5e903c6193703b2095fdf682cae9d61dc7437177c9fae106df613eaa3dfb94" gracePeriod=30 Nov 25 07:34:30 crc kubenswrapper[5043]: I1125 07:34:30.491487 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-b5bc7cfb-2sfts" podUID="9955ab7e-1d74-461a-a9b2-73e9f82d48fe" containerName="horizon" containerID="cri-o://97259a18989a2bb9b137543868fb1784b4ec1f8d5e0c1ed6ac4074b3fb57c7c5" gracePeriod=30 Nov 25 07:34:30 crc kubenswrapper[5043]: I1125 07:34:30.980474 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73c0ffb1-a41b-4c6c-b5e4-ca32149c5277" path="/var/lib/kubelet/pods/73c0ffb1-a41b-4c6c-b5e4-ca32149c5277/volumes" Nov 25 07:34:31 crc kubenswrapper[5043]: I1125 07:34:31.506407 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9afb63aa-ce0f-4365-a4cb-4fd593537095","Type":"ContainerStarted","Data":"65544f0debe6a534178db8f1d1c9a69cabe42bb30e171eee1cee7a0087045857"} Nov 25 07:34:32 crc kubenswrapper[5043]: I1125 07:34:32.521557 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9afb63aa-ce0f-4365-a4cb-4fd593537095","Type":"ContainerStarted","Data":"9f64960f23210bb6c701adc7bc3ed7b50be9c60b8b2f202174b46f4e17dd0637"} Nov 25 07:34:32 crc kubenswrapper[5043]: I1125 
07:34:32.521992 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 25 07:34:32 crc kubenswrapper[5043]: I1125 07:34:32.554434 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.554412821 podStartE2EDuration="3.554412821s" podCreationTimestamp="2025-11-25 07:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:34:32.546473017 +0000 UTC m=+1136.714668758" watchObservedRunningTime="2025-11-25 07:34:32.554412821 +0000 UTC m=+1136.722608542" Nov 25 07:34:33 crc kubenswrapper[5043]: I1125 07:34:33.125957 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:33 crc kubenswrapper[5043]: I1125 07:34:33.127492 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-78c76fd9c4-8nvkz" Nov 25 07:34:33 crc kubenswrapper[5043]: E1125 07:34:33.703307 5043 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9955ab7e_1d74_461a_a9b2_73e9f82d48fe.slice/crio-97259a18989a2bb9b137543868fb1784b4ec1f8d5e0c1ed6ac4074b3fb57c7c5.scope\": RecentStats: unable to find data in memory cache]" Nov 25 07:34:34 crc kubenswrapper[5043]: I1125 07:34:34.259445 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-79d9bc7db7-xzxqf" Nov 25 07:34:34 crc kubenswrapper[5043]: I1125 07:34:34.540007 5043 generic.go:334] "Generic (PLEG): container finished" podID="9955ab7e-1d74-461a-a9b2-73e9f82d48fe" containerID="97259a18989a2bb9b137543868fb1784b4ec1f8d5e0c1ed6ac4074b3fb57c7c5" exitCode=0 Nov 25 07:34:34 crc kubenswrapper[5043]: I1125 07:34:34.540230 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-b5bc7cfb-2sfts" event={"ID":"9955ab7e-1d74-461a-a9b2-73e9f82d48fe","Type":"ContainerDied","Data":"97259a18989a2bb9b137543868fb1784b4ec1f8d5e0c1ed6ac4074b3fb57c7c5"} Nov 25 07:34:34 crc kubenswrapper[5043]: I1125 07:34:34.998778 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.078957 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7776d59f89-pzprs"] Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.079237 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7776d59f89-pzprs" podUID="9d44a94a-e868-48c4-91ce-58b2290badc9" containerName="dnsmasq-dns" containerID="cri-o://b039e5141642fb12c28d1d89aecd0646837df085d8a700f1518cb758ef80713c" gracePeriod=10 Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.247831 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.287392 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.371676 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.373292 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.375549 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-hqx5n" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.376044 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.376259 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.384119 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.503773 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9bcf9848-5bdf-4760-829f-a92e4015ab70-openstack-config\") pod \"openstackclient\" (UID: \"9bcf9848-5bdf-4760-829f-a92e4015ab70\") " pod="openstack/openstackclient" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.503840 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bcf9848-5bdf-4760-829f-a92e4015ab70-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9bcf9848-5bdf-4760-829f-a92e4015ab70\") " pod="openstack/openstackclient" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.503875 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9bcf9848-5bdf-4760-829f-a92e4015ab70-openstack-config-secret\") pod \"openstackclient\" (UID: \"9bcf9848-5bdf-4760-829f-a92e4015ab70\") " pod="openstack/openstackclient" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.503969 5043 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s58tn\" (UniqueName: \"kubernetes.io/projected/9bcf9848-5bdf-4760-829f-a92e4015ab70-kube-api-access-s58tn\") pod \"openstackclient\" (UID: \"9bcf9848-5bdf-4760-829f-a92e4015ab70\") " pod="openstack/openstackclient" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.551297 5043 generic.go:334] "Generic (PLEG): container finished" podID="9d44a94a-e868-48c4-91ce-58b2290badc9" containerID="b039e5141642fb12c28d1d89aecd0646837df085d8a700f1518cb758ef80713c" exitCode=0 Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.551569 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="04826333-7613-4f2c-b240-4864828ddcc0" containerName="cinder-scheduler" containerID="cri-o://f673ba9a874f3c7289ec37505a0f9c4c6362732c0a72ee7a33e084ef94c9f5dc" gracePeriod=30 Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.551884 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7776d59f89-pzprs" event={"ID":"9d44a94a-e868-48c4-91ce-58b2290badc9","Type":"ContainerDied","Data":"b039e5141642fb12c28d1d89aecd0646837df085d8a700f1518cb758ef80713c"} Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.552184 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="04826333-7613-4f2c-b240-4864828ddcc0" containerName="probe" containerID="cri-o://084c32711fec1c59945c375767b896d0dec1af29e34c96b2dab7ec944e4ac19b" gracePeriod=30 Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.605739 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s58tn\" (UniqueName: \"kubernetes.io/projected/9bcf9848-5bdf-4760-829f-a92e4015ab70-kube-api-access-s58tn\") pod \"openstackclient\" (UID: \"9bcf9848-5bdf-4760-829f-a92e4015ab70\") " pod="openstack/openstackclient" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 
07:34:35.606151 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9bcf9848-5bdf-4760-829f-a92e4015ab70-openstack-config\") pod \"openstackclient\" (UID: \"9bcf9848-5bdf-4760-829f-a92e4015ab70\") " pod="openstack/openstackclient" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.606196 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bcf9848-5bdf-4760-829f-a92e4015ab70-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9bcf9848-5bdf-4760-829f-a92e4015ab70\") " pod="openstack/openstackclient" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.606236 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9bcf9848-5bdf-4760-829f-a92e4015ab70-openstack-config-secret\") pod \"openstackclient\" (UID: \"9bcf9848-5bdf-4760-829f-a92e4015ab70\") " pod="openstack/openstackclient" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.607118 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9bcf9848-5bdf-4760-829f-a92e4015ab70-openstack-config\") pod \"openstackclient\" (UID: \"9bcf9848-5bdf-4760-829f-a92e4015ab70\") " pod="openstack/openstackclient" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.615219 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9bcf9848-5bdf-4760-829f-a92e4015ab70-openstack-config-secret\") pod \"openstackclient\" (UID: \"9bcf9848-5bdf-4760-829f-a92e4015ab70\") " pod="openstack/openstackclient" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.615379 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9bcf9848-5bdf-4760-829f-a92e4015ab70-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9bcf9848-5bdf-4760-829f-a92e4015ab70\") " pod="openstack/openstackclient" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.631004 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s58tn\" (UniqueName: \"kubernetes.io/projected/9bcf9848-5bdf-4760-829f-a92e4015ab70-kube-api-access-s58tn\") pod \"openstackclient\" (UID: \"9bcf9848-5bdf-4760-829f-a92e4015ab70\") " pod="openstack/openstackclient" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.688246 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7776d59f89-pzprs" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.698337 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.820217 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d44a94a-e868-48c4-91ce-58b2290badc9-config\") pod \"9d44a94a-e868-48c4-91ce-58b2290badc9\" (UID: \"9d44a94a-e868-48c4-91ce-58b2290badc9\") " Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.820311 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkpt2\" (UniqueName: \"kubernetes.io/projected/9d44a94a-e868-48c4-91ce-58b2290badc9-kube-api-access-jkpt2\") pod \"9d44a94a-e868-48c4-91ce-58b2290badc9\" (UID: \"9d44a94a-e868-48c4-91ce-58b2290badc9\") " Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.820394 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d44a94a-e868-48c4-91ce-58b2290badc9-ovsdbserver-sb\") pod \"9d44a94a-e868-48c4-91ce-58b2290badc9\" (UID: \"9d44a94a-e868-48c4-91ce-58b2290badc9\") " Nov 25 07:34:35 crc 
kubenswrapper[5043]: I1125 07:34:35.820434 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d44a94a-e868-48c4-91ce-58b2290badc9-ovsdbserver-nb\") pod \"9d44a94a-e868-48c4-91ce-58b2290badc9\" (UID: \"9d44a94a-e868-48c4-91ce-58b2290badc9\") " Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.820454 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d44a94a-e868-48c4-91ce-58b2290badc9-dns-svc\") pod \"9d44a94a-e868-48c4-91ce-58b2290badc9\" (UID: \"9d44a94a-e868-48c4-91ce-58b2290badc9\") " Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.827880 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d44a94a-e868-48c4-91ce-58b2290badc9-kube-api-access-jkpt2" (OuterVolumeSpecName: "kube-api-access-jkpt2") pod "9d44a94a-e868-48c4-91ce-58b2290badc9" (UID: "9d44a94a-e868-48c4-91ce-58b2290badc9"). InnerVolumeSpecName "kube-api-access-jkpt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.880288 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d44a94a-e868-48c4-91ce-58b2290badc9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9d44a94a-e868-48c4-91ce-58b2290badc9" (UID: "9d44a94a-e868-48c4-91ce-58b2290badc9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.883202 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d44a94a-e868-48c4-91ce-58b2290badc9-config" (OuterVolumeSpecName: "config") pod "9d44a94a-e868-48c4-91ce-58b2290badc9" (UID: "9d44a94a-e868-48c4-91ce-58b2290badc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.885694 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d44a94a-e868-48c4-91ce-58b2290badc9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9d44a94a-e868-48c4-91ce-58b2290badc9" (UID: "9d44a94a-e868-48c4-91ce-58b2290badc9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.886925 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-b5bc7cfb-2sfts" podUID="9955ab7e-1d74-461a-a9b2-73e9f82d48fe" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.894978 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d44a94a-e868-48c4-91ce-58b2290badc9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d44a94a-e868-48c4-91ce-58b2290badc9" (UID: "9d44a94a-e868-48c4-91ce-58b2290badc9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.925186 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d44a94a-e868-48c4-91ce-58b2290badc9-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.925224 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkpt2\" (UniqueName: \"kubernetes.io/projected/9d44a94a-e868-48c4-91ce-58b2290badc9-kube-api-access-jkpt2\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.925238 5043 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d44a94a-e868-48c4-91ce-58b2290badc9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.925250 5043 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d44a94a-e868-48c4-91ce-58b2290badc9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:35 crc kubenswrapper[5043]: I1125 07:34:35.925263 5043 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d44a94a-e868-48c4-91ce-58b2290badc9-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:36 crc kubenswrapper[5043]: I1125 07:34:36.228661 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 07:34:36 crc kubenswrapper[5043]: W1125 07:34:36.231241 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bcf9848_5bdf_4760_829f_a92e4015ab70.slice/crio-cd20b9c6ba07091a899387355e84f9a0fbb98348f24070f4fc336698f0fbf135 WatchSource:0}: Error finding container cd20b9c6ba07091a899387355e84f9a0fbb98348f24070f4fc336698f0fbf135: Status 404 returned error can't find the container 
with id cd20b9c6ba07091a899387355e84f9a0fbb98348f24070f4fc336698f0fbf135 Nov 25 07:34:36 crc kubenswrapper[5043]: I1125 07:34:36.560758 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9bcf9848-5bdf-4760-829f-a92e4015ab70","Type":"ContainerStarted","Data":"cd20b9c6ba07091a899387355e84f9a0fbb98348f24070f4fc336698f0fbf135"} Nov 25 07:34:36 crc kubenswrapper[5043]: I1125 07:34:36.563238 5043 generic.go:334] "Generic (PLEG): container finished" podID="04826333-7613-4f2c-b240-4864828ddcc0" containerID="084c32711fec1c59945c375767b896d0dec1af29e34c96b2dab7ec944e4ac19b" exitCode=0 Nov 25 07:34:36 crc kubenswrapper[5043]: I1125 07:34:36.563314 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"04826333-7613-4f2c-b240-4864828ddcc0","Type":"ContainerDied","Data":"084c32711fec1c59945c375767b896d0dec1af29e34c96b2dab7ec944e4ac19b"} Nov 25 07:34:36 crc kubenswrapper[5043]: I1125 07:34:36.565300 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7776d59f89-pzprs" event={"ID":"9d44a94a-e868-48c4-91ce-58b2290badc9","Type":"ContainerDied","Data":"12c50f78a5a95888fa48ebc6285d7d58917ee63da488d54b4902bcd64467ecef"} Nov 25 07:34:36 crc kubenswrapper[5043]: I1125 07:34:36.565343 5043 scope.go:117] "RemoveContainer" containerID="b039e5141642fb12c28d1d89aecd0646837df085d8a700f1518cb758ef80713c" Nov 25 07:34:36 crc kubenswrapper[5043]: I1125 07:34:36.565367 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7776d59f89-pzprs" Nov 25 07:34:36 crc kubenswrapper[5043]: I1125 07:34:36.595432 5043 scope.go:117] "RemoveContainer" containerID="22b0efd3b2128f3d16ab0acaa995f9745adaea0ed4ffb55b75e53f4233dd72fe" Nov 25 07:34:36 crc kubenswrapper[5043]: I1125 07:34:36.604050 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7776d59f89-pzprs"] Nov 25 07:34:36 crc kubenswrapper[5043]: I1125 07:34:36.629656 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7776d59f89-pzprs"] Nov 25 07:34:37 crc kubenswrapper[5043]: I1125 07:34:37.009247 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d44a94a-e868-48c4-91ce-58b2290badc9" path="/var/lib/kubelet/pods/9d44a94a-e868-48c4-91ce-58b2290badc9/volumes" Nov 25 07:34:38 crc kubenswrapper[5043]: I1125 07:34:38.587516 5043 generic.go:334] "Generic (PLEG): container finished" podID="04826333-7613-4f2c-b240-4864828ddcc0" containerID="f673ba9a874f3c7289ec37505a0f9c4c6362732c0a72ee7a33e084ef94c9f5dc" exitCode=0 Nov 25 07:34:38 crc kubenswrapper[5043]: I1125 07:34:38.587675 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"04826333-7613-4f2c-b240-4864828ddcc0","Type":"ContainerDied","Data":"f673ba9a874f3c7289ec37505a0f9c4c6362732c0a72ee7a33e084ef94c9f5dc"} Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.319387 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.493185 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04826333-7613-4f2c-b240-4864828ddcc0-etc-machine-id\") pod \"04826333-7613-4f2c-b240-4864828ddcc0\" (UID: \"04826333-7613-4f2c-b240-4864828ddcc0\") " Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.493294 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04826333-7613-4f2c-b240-4864828ddcc0-config-data-custom\") pod \"04826333-7613-4f2c-b240-4864828ddcc0\" (UID: \"04826333-7613-4f2c-b240-4864828ddcc0\") " Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.493303 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04826333-7613-4f2c-b240-4864828ddcc0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "04826333-7613-4f2c-b240-4864828ddcc0" (UID: "04826333-7613-4f2c-b240-4864828ddcc0"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.493359 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04826333-7613-4f2c-b240-4864828ddcc0-scripts\") pod \"04826333-7613-4f2c-b240-4864828ddcc0\" (UID: \"04826333-7613-4f2c-b240-4864828ddcc0\") " Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.493444 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04826333-7613-4f2c-b240-4864828ddcc0-combined-ca-bundle\") pod \"04826333-7613-4f2c-b240-4864828ddcc0\" (UID: \"04826333-7613-4f2c-b240-4864828ddcc0\") " Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.493489 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04826333-7613-4f2c-b240-4864828ddcc0-config-data\") pod \"04826333-7613-4f2c-b240-4864828ddcc0\" (UID: \"04826333-7613-4f2c-b240-4864828ddcc0\") " Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.493553 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58x84\" (UniqueName: \"kubernetes.io/projected/04826333-7613-4f2c-b240-4864828ddcc0-kube-api-access-58x84\") pod \"04826333-7613-4f2c-b240-4864828ddcc0\" (UID: \"04826333-7613-4f2c-b240-4864828ddcc0\") " Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.493981 5043 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04826333-7613-4f2c-b240-4864828ddcc0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.500149 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04826333-7613-4f2c-b240-4864828ddcc0-kube-api-access-58x84" (OuterVolumeSpecName: 
"kube-api-access-58x84") pod "04826333-7613-4f2c-b240-4864828ddcc0" (UID: "04826333-7613-4f2c-b240-4864828ddcc0"). InnerVolumeSpecName "kube-api-access-58x84". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.501760 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04826333-7613-4f2c-b240-4864828ddcc0-scripts" (OuterVolumeSpecName: "scripts") pod "04826333-7613-4f2c-b240-4864828ddcc0" (UID: "04826333-7613-4f2c-b240-4864828ddcc0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.502277 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04826333-7613-4f2c-b240-4864828ddcc0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "04826333-7613-4f2c-b240-4864828ddcc0" (UID: "04826333-7613-4f2c-b240-4864828ddcc0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.566784 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04826333-7613-4f2c-b240-4864828ddcc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04826333-7613-4f2c-b240-4864828ddcc0" (UID: "04826333-7613-4f2c-b240-4864828ddcc0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.596533 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04826333-7613-4f2c-b240-4864828ddcc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.596570 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58x84\" (UniqueName: \"kubernetes.io/projected/04826333-7613-4f2c-b240-4864828ddcc0-kube-api-access-58x84\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.596589 5043 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04826333-7613-4f2c-b240-4864828ddcc0-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.596617 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04826333-7613-4f2c-b240-4864828ddcc0-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.603305 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"04826333-7613-4f2c-b240-4864828ddcc0","Type":"ContainerDied","Data":"e04ce6ecf97989c27298d5c07bb7286a68739b4805fec38476ffe685964e9c21"} Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.603348 5043 scope.go:117] "RemoveContainer" containerID="084c32711fec1c59945c375767b896d0dec1af29e34c96b2dab7ec944e4ac19b" Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.603395 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.650759 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04826333-7613-4f2c-b240-4864828ddcc0-config-data" (OuterVolumeSpecName: "config-data") pod "04826333-7613-4f2c-b240-4864828ddcc0" (UID: "04826333-7613-4f2c-b240-4864828ddcc0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.663347 5043 scope.go:117] "RemoveContainer" containerID="f673ba9a874f3c7289ec37505a0f9c4c6362732c0a72ee7a33e084ef94c9f5dc" Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.698527 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04826333-7613-4f2c-b240-4864828ddcc0-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.959438 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.980730 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.989653 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 07:34:39 crc kubenswrapper[5043]: E1125 07:34:39.990069 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d44a94a-e868-48c4-91ce-58b2290badc9" containerName="init" Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.990092 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d44a94a-e868-48c4-91ce-58b2290badc9" containerName="init" Nov 25 07:34:39 crc kubenswrapper[5043]: E1125 07:34:39.990110 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04826333-7613-4f2c-b240-4864828ddcc0" containerName="cinder-scheduler" Nov 25 07:34:39 crc 
kubenswrapper[5043]: I1125 07:34:39.990121 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="04826333-7613-4f2c-b240-4864828ddcc0" containerName="cinder-scheduler" Nov 25 07:34:39 crc kubenswrapper[5043]: E1125 07:34:39.990141 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04826333-7613-4f2c-b240-4864828ddcc0" containerName="probe" Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.990149 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="04826333-7613-4f2c-b240-4864828ddcc0" containerName="probe" Nov 25 07:34:39 crc kubenswrapper[5043]: E1125 07:34:39.990185 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d44a94a-e868-48c4-91ce-58b2290badc9" containerName="dnsmasq-dns" Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.990193 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d44a94a-e868-48c4-91ce-58b2290badc9" containerName="dnsmasq-dns" Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.990387 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="04826333-7613-4f2c-b240-4864828ddcc0" containerName="cinder-scheduler" Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.990412 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d44a94a-e868-48c4-91ce-58b2290badc9" containerName="dnsmasq-dns" Nov 25 07:34:39 crc kubenswrapper[5043]: I1125 07:34:39.990425 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="04826333-7613-4f2c-b240-4864828ddcc0" containerName="probe" Nov 25 07:34:40 crc kubenswrapper[5043]: I1125 07:34:39.998369 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 07:34:40 crc kubenswrapper[5043]: I1125 07:34:40.001213 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 07:34:40 crc kubenswrapper[5043]: I1125 07:34:40.005242 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 07:34:40 crc kubenswrapper[5043]: I1125 07:34:40.105387 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99eb43d-cf17-44a4-beeb-f5222c978039-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c99eb43d-cf17-44a4-beeb-f5222c978039\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:40 crc kubenswrapper[5043]: I1125 07:34:40.105448 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c99eb43d-cf17-44a4-beeb-f5222c978039-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c99eb43d-cf17-44a4-beeb-f5222c978039\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:40 crc kubenswrapper[5043]: I1125 07:34:40.105636 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh9qt\" (UniqueName: \"kubernetes.io/projected/c99eb43d-cf17-44a4-beeb-f5222c978039-kube-api-access-hh9qt\") pod \"cinder-scheduler-0\" (UID: \"c99eb43d-cf17-44a4-beeb-f5222c978039\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:40 crc kubenswrapper[5043]: I1125 07:34:40.105731 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c99eb43d-cf17-44a4-beeb-f5222c978039-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c99eb43d-cf17-44a4-beeb-f5222c978039\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:40 crc 
kubenswrapper[5043]: I1125 07:34:40.105771 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99eb43d-cf17-44a4-beeb-f5222c978039-config-data\") pod \"cinder-scheduler-0\" (UID: \"c99eb43d-cf17-44a4-beeb-f5222c978039\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:40 crc kubenswrapper[5043]: I1125 07:34:40.105899 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c99eb43d-cf17-44a4-beeb-f5222c978039-scripts\") pod \"cinder-scheduler-0\" (UID: \"c99eb43d-cf17-44a4-beeb-f5222c978039\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:40 crc kubenswrapper[5043]: I1125 07:34:40.207939 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c99eb43d-cf17-44a4-beeb-f5222c978039-scripts\") pod \"cinder-scheduler-0\" (UID: \"c99eb43d-cf17-44a4-beeb-f5222c978039\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:40 crc kubenswrapper[5043]: I1125 07:34:40.208047 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99eb43d-cf17-44a4-beeb-f5222c978039-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c99eb43d-cf17-44a4-beeb-f5222c978039\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:40 crc kubenswrapper[5043]: I1125 07:34:40.208080 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c99eb43d-cf17-44a4-beeb-f5222c978039-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c99eb43d-cf17-44a4-beeb-f5222c978039\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:40 crc kubenswrapper[5043]: I1125 07:34:40.208118 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh9qt\" 
(UniqueName: \"kubernetes.io/projected/c99eb43d-cf17-44a4-beeb-f5222c978039-kube-api-access-hh9qt\") pod \"cinder-scheduler-0\" (UID: \"c99eb43d-cf17-44a4-beeb-f5222c978039\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:40 crc kubenswrapper[5043]: I1125 07:34:40.208146 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c99eb43d-cf17-44a4-beeb-f5222c978039-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c99eb43d-cf17-44a4-beeb-f5222c978039\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:40 crc kubenswrapper[5043]: I1125 07:34:40.208161 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99eb43d-cf17-44a4-beeb-f5222c978039-config-data\") pod \"cinder-scheduler-0\" (UID: \"c99eb43d-cf17-44a4-beeb-f5222c978039\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:40 crc kubenswrapper[5043]: I1125 07:34:40.208286 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c99eb43d-cf17-44a4-beeb-f5222c978039-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c99eb43d-cf17-44a4-beeb-f5222c978039\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:40 crc kubenswrapper[5043]: I1125 07:34:40.236848 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c99eb43d-cf17-44a4-beeb-f5222c978039-scripts\") pod \"cinder-scheduler-0\" (UID: \"c99eb43d-cf17-44a4-beeb-f5222c978039\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:40 crc kubenswrapper[5043]: I1125 07:34:40.236880 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh9qt\" (UniqueName: \"kubernetes.io/projected/c99eb43d-cf17-44a4-beeb-f5222c978039-kube-api-access-hh9qt\") pod \"cinder-scheduler-0\" (UID: \"c99eb43d-cf17-44a4-beeb-f5222c978039\") " 
pod="openstack/cinder-scheduler-0" Nov 25 07:34:40 crc kubenswrapper[5043]: I1125 07:34:40.237023 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c99eb43d-cf17-44a4-beeb-f5222c978039-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c99eb43d-cf17-44a4-beeb-f5222c978039\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:40 crc kubenswrapper[5043]: I1125 07:34:40.238422 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99eb43d-cf17-44a4-beeb-f5222c978039-config-data\") pod \"cinder-scheduler-0\" (UID: \"c99eb43d-cf17-44a4-beeb-f5222c978039\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:40 crc kubenswrapper[5043]: I1125 07:34:40.238821 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99eb43d-cf17-44a4-beeb-f5222c978039-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c99eb43d-cf17-44a4-beeb-f5222c978039\") " pod="openstack/cinder-scheduler-0" Nov 25 07:34:40 crc kubenswrapper[5043]: I1125 07:34:40.353977 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 07:34:40 crc kubenswrapper[5043]: I1125 07:34:40.973124 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04826333-7613-4f2c-b240-4864828ddcc0" path="/var/lib/kubelet/pods/04826333-7613-4f2c-b240-4864828ddcc0/volumes" Nov 25 07:34:41 crc kubenswrapper[5043]: I1125 07:34:41.328210 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:34:41 crc kubenswrapper[5043]: I1125 07:34:41.328469 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f092b8e-ae0d-4b52-ac0d-323473fceb5e" containerName="ceilometer-central-agent" containerID="cri-o://6872a52a7564973bb862d9ea40d093259c4d2680eb90edcbd4bc429e0f2e248a" gracePeriod=30 Nov 25 07:34:41 crc kubenswrapper[5043]: I1125 07:34:41.328537 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f092b8e-ae0d-4b52-ac0d-323473fceb5e" containerName="sg-core" containerID="cri-o://466a080090b48a3fefe6fb076d14921ca4061e98889272038841578991ec2c7f" gracePeriod=30 Nov 25 07:34:41 crc kubenswrapper[5043]: I1125 07:34:41.328584 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f092b8e-ae0d-4b52-ac0d-323473fceb5e" containerName="proxy-httpd" containerID="cri-o://4c754e9d66c57aae5fd89388556b160c4afc152c85dc3d6800e687d4e890e4ec" gracePeriod=30 Nov 25 07:34:41 crc kubenswrapper[5043]: I1125 07:34:41.328537 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f092b8e-ae0d-4b52-ac0d-323473fceb5e" containerName="ceilometer-notification-agent" containerID="cri-o://4319d22e85ba6a8f91458a4d12bbb6ca13bf354322efa800a1ae699390708d5d" gracePeriod=30 Nov 25 07:34:41 crc kubenswrapper[5043]: I1125 07:34:41.337430 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ceilometer-0" Nov 25 07:34:41 crc kubenswrapper[5043]: I1125 07:34:41.668840 5043 generic.go:334] "Generic (PLEG): container finished" podID="3f092b8e-ae0d-4b52-ac0d-323473fceb5e" containerID="4c754e9d66c57aae5fd89388556b160c4afc152c85dc3d6800e687d4e890e4ec" exitCode=0 Nov 25 07:34:41 crc kubenswrapper[5043]: I1125 07:34:41.668881 5043 generic.go:334] "Generic (PLEG): container finished" podID="3f092b8e-ae0d-4b52-ac0d-323473fceb5e" containerID="466a080090b48a3fefe6fb076d14921ca4061e98889272038841578991ec2c7f" exitCode=2 Nov 25 07:34:41 crc kubenswrapper[5043]: I1125 07:34:41.668904 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f092b8e-ae0d-4b52-ac0d-323473fceb5e","Type":"ContainerDied","Data":"4c754e9d66c57aae5fd89388556b160c4afc152c85dc3d6800e687d4e890e4ec"} Nov 25 07:34:41 crc kubenswrapper[5043]: I1125 07:34:41.668958 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f092b8e-ae0d-4b52-ac0d-323473fceb5e","Type":"ContainerDied","Data":"466a080090b48a3fefe6fb076d14921ca4061e98889272038841578991ec2c7f"} Nov 25 07:34:41 crc kubenswrapper[5043]: I1125 07:34:41.809357 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-ngdf8"] Nov 25 07:34:41 crc kubenswrapper[5043]: I1125 07:34:41.810507 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ngdf8" Nov 25 07:34:41 crc kubenswrapper[5043]: I1125 07:34:41.822849 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-ngdf8"] Nov 25 07:34:41 crc kubenswrapper[5043]: I1125 07:34:41.915594 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-pf25q"] Nov 25 07:34:41 crc kubenswrapper[5043]: I1125 07:34:41.918215 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-pf25q" Nov 25 07:34:41 crc kubenswrapper[5043]: I1125 07:34:41.930465 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-df70-account-create-l928f"] Nov 25 07:34:41 crc kubenswrapper[5043]: I1125 07:34:41.931598 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-df70-account-create-l928f" Nov 25 07:34:41 crc kubenswrapper[5043]: I1125 07:34:41.935972 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 25 07:34:41 crc kubenswrapper[5043]: I1125 07:34:41.943250 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9c4413b-cd74-4ad4-978b-089edd47d7b3-operator-scripts\") pod \"nova-api-db-create-ngdf8\" (UID: \"e9c4413b-cd74-4ad4-978b-089edd47d7b3\") " pod="openstack/nova-api-db-create-ngdf8" Nov 25 07:34:41 crc kubenswrapper[5043]: I1125 07:34:41.943301 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-pf25q"] Nov 25 07:34:41 crc kubenswrapper[5043]: I1125 07:34:41.943309 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcfwb\" (UniqueName: \"kubernetes.io/projected/e9c4413b-cd74-4ad4-978b-089edd47d7b3-kube-api-access-wcfwb\") pod \"nova-api-db-create-ngdf8\" (UID: \"e9c4413b-cd74-4ad4-978b-089edd47d7b3\") " pod="openstack/nova-api-db-create-ngdf8" Nov 25 07:34:41 crc kubenswrapper[5043]: I1125 07:34:41.968745 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-df70-account-create-l928f"] Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.044720 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/93c8ab01-08e3-4b78-a215-c1382e53c98f-operator-scripts\") pod \"nova-api-df70-account-create-l928f\" (UID: \"93c8ab01-08e3-4b78-a215-c1382e53c98f\") " pod="openstack/nova-api-df70-account-create-l928f" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.044777 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9pdq\" (UniqueName: \"kubernetes.io/projected/3801a63c-ffa6-49c8-8bf9-2bafdad18466-kube-api-access-h9pdq\") pod \"nova-cell0-db-create-pf25q\" (UID: \"3801a63c-ffa6-49c8-8bf9-2bafdad18466\") " pod="openstack/nova-cell0-db-create-pf25q" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.044809 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzq5q\" (UniqueName: \"kubernetes.io/projected/93c8ab01-08e3-4b78-a215-c1382e53c98f-kube-api-access-xzq5q\") pod \"nova-api-df70-account-create-l928f\" (UID: \"93c8ab01-08e3-4b78-a215-c1382e53c98f\") " pod="openstack/nova-api-df70-account-create-l928f" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.044861 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9c4413b-cd74-4ad4-978b-089edd47d7b3-operator-scripts\") pod \"nova-api-db-create-ngdf8\" (UID: \"e9c4413b-cd74-4ad4-978b-089edd47d7b3\") " pod="openstack/nova-api-db-create-ngdf8" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.045015 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcfwb\" (UniqueName: \"kubernetes.io/projected/e9c4413b-cd74-4ad4-978b-089edd47d7b3-kube-api-access-wcfwb\") pod \"nova-api-db-create-ngdf8\" (UID: \"e9c4413b-cd74-4ad4-978b-089edd47d7b3\") " pod="openstack/nova-api-db-create-ngdf8" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.045050 5043 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3801a63c-ffa6-49c8-8bf9-2bafdad18466-operator-scripts\") pod \"nova-cell0-db-create-pf25q\" (UID: \"3801a63c-ffa6-49c8-8bf9-2bafdad18466\") " pod="openstack/nova-cell0-db-create-pf25q" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.045806 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9c4413b-cd74-4ad4-978b-089edd47d7b3-operator-scripts\") pod \"nova-api-db-create-ngdf8\" (UID: \"e9c4413b-cd74-4ad4-978b-089edd47d7b3\") " pod="openstack/nova-api-db-create-ngdf8" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.064665 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcfwb\" (UniqueName: \"kubernetes.io/projected/e9c4413b-cd74-4ad4-978b-089edd47d7b3-kube-api-access-wcfwb\") pod \"nova-api-db-create-ngdf8\" (UID: \"e9c4413b-cd74-4ad4-978b-089edd47d7b3\") " pod="openstack/nova-api-db-create-ngdf8" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.129969 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-dpcmk"] Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.131444 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dpcmk" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.136726 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-ngdf8" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.147055 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-dpcmk"] Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.149176 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3801a63c-ffa6-49c8-8bf9-2bafdad18466-operator-scripts\") pod \"nova-cell0-db-create-pf25q\" (UID: \"3801a63c-ffa6-49c8-8bf9-2bafdad18466\") " pod="openstack/nova-cell0-db-create-pf25q" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.149411 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93c8ab01-08e3-4b78-a215-c1382e53c98f-operator-scripts\") pod \"nova-api-df70-account-create-l928f\" (UID: \"93c8ab01-08e3-4b78-a215-c1382e53c98f\") " pod="openstack/nova-api-df70-account-create-l928f" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.149484 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9pdq\" (UniqueName: \"kubernetes.io/projected/3801a63c-ffa6-49c8-8bf9-2bafdad18466-kube-api-access-h9pdq\") pod \"nova-cell0-db-create-pf25q\" (UID: \"3801a63c-ffa6-49c8-8bf9-2bafdad18466\") " pod="openstack/nova-cell0-db-create-pf25q" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.149505 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzq5q\" (UniqueName: \"kubernetes.io/projected/93c8ab01-08e3-4b78-a215-c1382e53c98f-kube-api-access-xzq5q\") pod \"nova-api-df70-account-create-l928f\" (UID: \"93c8ab01-08e3-4b78-a215-c1382e53c98f\") " pod="openstack/nova-api-df70-account-create-l928f" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.150114 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/3801a63c-ffa6-49c8-8bf9-2bafdad18466-operator-scripts\") pod \"nova-cell0-db-create-pf25q\" (UID: \"3801a63c-ffa6-49c8-8bf9-2bafdad18466\") " pod="openstack/nova-cell0-db-create-pf25q" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.150714 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93c8ab01-08e3-4b78-a215-c1382e53c98f-operator-scripts\") pod \"nova-api-df70-account-create-l928f\" (UID: \"93c8ab01-08e3-4b78-a215-c1382e53c98f\") " pod="openstack/nova-api-df70-account-create-l928f" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.158249 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-9a3a-account-create-s7vqr"] Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.159674 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9a3a-account-create-s7vqr" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.162029 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.165637 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzq5q\" (UniqueName: \"kubernetes.io/projected/93c8ab01-08e3-4b78-a215-c1382e53c98f-kube-api-access-xzq5q\") pod \"nova-api-df70-account-create-l928f\" (UID: \"93c8ab01-08e3-4b78-a215-c1382e53c98f\") " pod="openstack/nova-api-df70-account-create-l928f" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.165958 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9pdq\" (UniqueName: \"kubernetes.io/projected/3801a63c-ffa6-49c8-8bf9-2bafdad18466-kube-api-access-h9pdq\") pod \"nova-cell0-db-create-pf25q\" (UID: \"3801a63c-ffa6-49c8-8bf9-2bafdad18466\") " pod="openstack/nova-cell0-db-create-pf25q" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 
07:34:42.166343 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9a3a-account-create-s7vqr"] Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.236829 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pf25q" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.251088 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4cbed283-5e56-4d9b-b749-ab8b4808834e-operator-scripts\") pod \"nova-cell1-db-create-dpcmk\" (UID: \"4cbed283-5e56-4d9b-b749-ab8b4808834e\") " pod="openstack/nova-cell1-db-create-dpcmk" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.251138 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sstrs\" (UniqueName: \"kubernetes.io/projected/b82277a9-e2eb-45db-a8ac-ae5cd7f162d3-kube-api-access-sstrs\") pod \"nova-cell0-9a3a-account-create-s7vqr\" (UID: \"b82277a9-e2eb-45db-a8ac-ae5cd7f162d3\") " pod="openstack/nova-cell0-9a3a-account-create-s7vqr" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.251197 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b82277a9-e2eb-45db-a8ac-ae5cd7f162d3-operator-scripts\") pod \"nova-cell0-9a3a-account-create-s7vqr\" (UID: \"b82277a9-e2eb-45db-a8ac-ae5cd7f162d3\") " pod="openstack/nova-cell0-9a3a-account-create-s7vqr" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.251587 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms4s5\" (UniqueName: \"kubernetes.io/projected/4cbed283-5e56-4d9b-b749-ab8b4808834e-kube-api-access-ms4s5\") pod \"nova-cell1-db-create-dpcmk\" (UID: \"4cbed283-5e56-4d9b-b749-ab8b4808834e\") " pod="openstack/nova-cell1-db-create-dpcmk" 
Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.256086 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-df70-account-create-l928f" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.265296 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5985cc949b-rw6ms" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.321516 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-dbef-account-create-mt2f4"] Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.323246 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-dbef-account-create-mt2f4" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.332030 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.339806 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-dbef-account-create-mt2f4"] Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.352649 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms4s5\" (UniqueName: \"kubernetes.io/projected/4cbed283-5e56-4d9b-b749-ab8b4808834e-kube-api-access-ms4s5\") pod \"nova-cell1-db-create-dpcmk\" (UID: \"4cbed283-5e56-4d9b-b749-ab8b4808834e\") " pod="openstack/nova-cell1-db-create-dpcmk" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.352692 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4cbed283-5e56-4d9b-b749-ab8b4808834e-operator-scripts\") pod \"nova-cell1-db-create-dpcmk\" (UID: \"4cbed283-5e56-4d9b-b749-ab8b4808834e\") " pod="openstack/nova-cell1-db-create-dpcmk" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.352721 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-sstrs\" (UniqueName: \"kubernetes.io/projected/b82277a9-e2eb-45db-a8ac-ae5cd7f162d3-kube-api-access-sstrs\") pod \"nova-cell0-9a3a-account-create-s7vqr\" (UID: \"b82277a9-e2eb-45db-a8ac-ae5cd7f162d3\") " pod="openstack/nova-cell0-9a3a-account-create-s7vqr" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.352782 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b82277a9-e2eb-45db-a8ac-ae5cd7f162d3-operator-scripts\") pod \"nova-cell0-9a3a-account-create-s7vqr\" (UID: \"b82277a9-e2eb-45db-a8ac-ae5cd7f162d3\") " pod="openstack/nova-cell0-9a3a-account-create-s7vqr" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.354127 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4cbed283-5e56-4d9b-b749-ab8b4808834e-operator-scripts\") pod \"nova-cell1-db-create-dpcmk\" (UID: \"4cbed283-5e56-4d9b-b749-ab8b4808834e\") " pod="openstack/nova-cell1-db-create-dpcmk" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.354165 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b82277a9-e2eb-45db-a8ac-ae5cd7f162d3-operator-scripts\") pod \"nova-cell0-9a3a-account-create-s7vqr\" (UID: \"b82277a9-e2eb-45db-a8ac-ae5cd7f162d3\") " pod="openstack/nova-cell0-9a3a-account-create-s7vqr" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.372991 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms4s5\" (UniqueName: \"kubernetes.io/projected/4cbed283-5e56-4d9b-b749-ab8b4808834e-kube-api-access-ms4s5\") pod \"nova-cell1-db-create-dpcmk\" (UID: \"4cbed283-5e56-4d9b-b749-ab8b4808834e\") " pod="openstack/nova-cell1-db-create-dpcmk" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.373809 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sstrs\" (UniqueName: \"kubernetes.io/projected/b82277a9-e2eb-45db-a8ac-ae5cd7f162d3-kube-api-access-sstrs\") pod \"nova-cell0-9a3a-account-create-s7vqr\" (UID: \"b82277a9-e2eb-45db-a8ac-ae5cd7f162d3\") " pod="openstack/nova-cell0-9a3a-account-create-s7vqr" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.390122 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.454090 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56abad80-cda8-4d71-a70f-4761414adb87-operator-scripts\") pod \"nova-cell1-dbef-account-create-mt2f4\" (UID: \"56abad80-cda8-4d71-a70f-4761414adb87\") " pod="openstack/nova-cell1-dbef-account-create-mt2f4" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.454234 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp92k\" (UniqueName: \"kubernetes.io/projected/56abad80-cda8-4d71-a70f-4761414adb87-kube-api-access-wp92k\") pod \"nova-cell1-dbef-account-create-mt2f4\" (UID: \"56abad80-cda8-4d71-a70f-4761414adb87\") " pod="openstack/nova-cell1-dbef-account-create-mt2f4" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.461758 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dpcmk" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.536124 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9a3a-account-create-s7vqr" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.555924 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56abad80-cda8-4d71-a70f-4761414adb87-operator-scripts\") pod \"nova-cell1-dbef-account-create-mt2f4\" (UID: \"56abad80-cda8-4d71-a70f-4761414adb87\") " pod="openstack/nova-cell1-dbef-account-create-mt2f4" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.556024 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp92k\" (UniqueName: \"kubernetes.io/projected/56abad80-cda8-4d71-a70f-4761414adb87-kube-api-access-wp92k\") pod \"nova-cell1-dbef-account-create-mt2f4\" (UID: \"56abad80-cda8-4d71-a70f-4761414adb87\") " pod="openstack/nova-cell1-dbef-account-create-mt2f4" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.556825 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56abad80-cda8-4d71-a70f-4761414adb87-operator-scripts\") pod \"nova-cell1-dbef-account-create-mt2f4\" (UID: \"56abad80-cda8-4d71-a70f-4761414adb87\") " pod="openstack/nova-cell1-dbef-account-create-mt2f4" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.574757 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp92k\" (UniqueName: \"kubernetes.io/projected/56abad80-cda8-4d71-a70f-4761414adb87-kube-api-access-wp92k\") pod \"nova-cell1-dbef-account-create-mt2f4\" (UID: \"56abad80-cda8-4d71-a70f-4761414adb87\") " pod="openstack/nova-cell1-dbef-account-create-mt2f4" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.655830 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-dbef-account-create-mt2f4" Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.684981 5043 generic.go:334] "Generic (PLEG): container finished" podID="3f092b8e-ae0d-4b52-ac0d-323473fceb5e" containerID="6872a52a7564973bb862d9ea40d093259c4d2680eb90edcbd4bc429e0f2e248a" exitCode=0 Nov 25 07:34:42 crc kubenswrapper[5043]: I1125 07:34:42.685031 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f092b8e-ae0d-4b52-ac0d-323473fceb5e","Type":"ContainerDied","Data":"6872a52a7564973bb862d9ea40d093259c4d2680eb90edcbd4bc429e0f2e248a"} Nov 25 07:34:43 crc kubenswrapper[5043]: I1125 07:34:43.695130 5043 generic.go:334] "Generic (PLEG): container finished" podID="3f092b8e-ae0d-4b52-ac0d-323473fceb5e" containerID="4319d22e85ba6a8f91458a4d12bbb6ca13bf354322efa800a1ae699390708d5d" exitCode=0 Nov 25 07:34:43 crc kubenswrapper[5043]: I1125 07:34:43.695237 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f092b8e-ae0d-4b52-ac0d-323473fceb5e","Type":"ContainerDied","Data":"4319d22e85ba6a8f91458a4d12bbb6ca13bf354322efa800a1ae699390708d5d"} Nov 25 07:34:45 crc kubenswrapper[5043]: I1125 07:34:45.457318 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-586d64c99c-q5jk2" Nov 25 07:34:45 crc kubenswrapper[5043]: I1125 07:34:45.513795 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5985cc949b-rw6ms"] Nov 25 07:34:45 crc kubenswrapper[5043]: I1125 07:34:45.514048 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5985cc949b-rw6ms" podUID="292b02ee-fd80-4582-a1fa-ef9aa27c941c" containerName="neutron-api" containerID="cri-o://8d4e5c2f293d4882994ed680c131bacb731037b791a43f350d297b37567ca96b" gracePeriod=30 Nov 25 07:34:45 crc kubenswrapper[5043]: I1125 07:34:45.514505 5043 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/neutron-5985cc949b-rw6ms" podUID="292b02ee-fd80-4582-a1fa-ef9aa27c941c" containerName="neutron-httpd" containerID="cri-o://66e337fda13ba9e0a42874c1e4e4fe0fefefa6d6826e67bc20cc96b283c06eb0" gracePeriod=30 Nov 25 07:34:45 crc kubenswrapper[5043]: I1125 07:34:45.886210 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-b5bc7cfb-2sfts" podUID="9955ab7e-1d74-461a-a9b2-73e9f82d48fe" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Nov 25 07:34:46 crc kubenswrapper[5043]: I1125 07:34:46.735575 5043 generic.go:334] "Generic (PLEG): container finished" podID="292b02ee-fd80-4582-a1fa-ef9aa27c941c" containerID="66e337fda13ba9e0a42874c1e4e4fe0fefefa6d6826e67bc20cc96b283c06eb0" exitCode=0 Nov 25 07:34:46 crc kubenswrapper[5043]: I1125 07:34:46.735633 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5985cc949b-rw6ms" event={"ID":"292b02ee-fd80-4582-a1fa-ef9aa27c941c","Type":"ContainerDied","Data":"66e337fda13ba9e0a42874c1e4e4fe0fefefa6d6826e67bc20cc96b283c06eb0"} Nov 25 07:34:47 crc kubenswrapper[5043]: I1125 07:34:47.276429 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:34:47 crc kubenswrapper[5043]: I1125 07:34:47.276487 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:34:47 crc kubenswrapper[5043]: I1125 07:34:47.313995 
5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 07:34:47 crc kubenswrapper[5043]: I1125 07:34:47.314228 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="4c5266b5-29ce-457c-b2e3-bbfab352768f" containerName="kube-state-metrics" containerID="cri-o://8771dc0e200faac46b9859b23bc4c875acc2323edddb5ffdfee09961c0d5d9fd" gracePeriod=30 Nov 25 07:34:47 crc kubenswrapper[5043]: I1125 07:34:47.783095 5043 generic.go:334] "Generic (PLEG): container finished" podID="4c5266b5-29ce-457c-b2e3-bbfab352768f" containerID="8771dc0e200faac46b9859b23bc4c875acc2323edddb5ffdfee09961c0d5d9fd" exitCode=2 Nov 25 07:34:47 crc kubenswrapper[5043]: I1125 07:34:47.783317 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4c5266b5-29ce-457c-b2e3-bbfab352768f","Type":"ContainerDied","Data":"8771dc0e200faac46b9859b23bc4c875acc2323edddb5ffdfee09961c0d5d9fd"} Nov 25 07:34:47 crc kubenswrapper[5043]: I1125 07:34:47.964960 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:34:47 crc kubenswrapper[5043]: I1125 07:34:47.984564 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.085846 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-sg-core-conf-yaml\") pod \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.086011 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-config-data\") pod \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.086098 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddgrp\" (UniqueName: \"kubernetes.io/projected/4c5266b5-29ce-457c-b2e3-bbfab352768f-kube-api-access-ddgrp\") pod \"4c5266b5-29ce-457c-b2e3-bbfab352768f\" (UID: \"4c5266b5-29ce-457c-b2e3-bbfab352768f\") " Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.086146 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-log-httpd\") pod \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.086216 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-run-httpd\") pod \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.086245 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56r22\" 
(UniqueName: \"kubernetes.io/projected/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-kube-api-access-56r22\") pod \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.086294 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-scripts\") pod \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.086340 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-combined-ca-bundle\") pod \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\" (UID: \"3f092b8e-ae0d-4b52-ac0d-323473fceb5e\") " Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.088978 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3f092b8e-ae0d-4b52-ac0d-323473fceb5e" (UID: "3f092b8e-ae0d-4b52-ac0d-323473fceb5e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.089025 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3f092b8e-ae0d-4b52-ac0d-323473fceb5e" (UID: "3f092b8e-ae0d-4b52-ac0d-323473fceb5e"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.096076 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-kube-api-access-56r22" (OuterVolumeSpecName: "kube-api-access-56r22") pod "3f092b8e-ae0d-4b52-ac0d-323473fceb5e" (UID: "3f092b8e-ae0d-4b52-ac0d-323473fceb5e"). InnerVolumeSpecName "kube-api-access-56r22". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.104638 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c5266b5-29ce-457c-b2e3-bbfab352768f-kube-api-access-ddgrp" (OuterVolumeSpecName: "kube-api-access-ddgrp") pod "4c5266b5-29ce-457c-b2e3-bbfab352768f" (UID: "4c5266b5-29ce-457c-b2e3-bbfab352768f"). InnerVolumeSpecName "kube-api-access-ddgrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.113958 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-scripts" (OuterVolumeSpecName: "scripts") pod "3f092b8e-ae0d-4b52-ac0d-323473fceb5e" (UID: "3f092b8e-ae0d-4b52-ac0d-323473fceb5e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.144554 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3f092b8e-ae0d-4b52-ac0d-323473fceb5e" (UID: "3f092b8e-ae0d-4b52-ac0d-323473fceb5e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.192689 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.193028 5043 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.193039 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddgrp\" (UniqueName: \"kubernetes.io/projected/4c5266b5-29ce-457c-b2e3-bbfab352768f-kube-api-access-ddgrp\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.193048 5043 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.193059 5043 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.193069 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56r22\" (UniqueName: \"kubernetes.io/projected/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-kube-api-access-56r22\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.200266 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f092b8e-ae0d-4b52-ac0d-323473fceb5e" (UID: 
"3f092b8e-ae0d-4b52-ac0d-323473fceb5e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.217143 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-config-data" (OuterVolumeSpecName: "config-data") pod "3f092b8e-ae0d-4b52-ac0d-323473fceb5e" (UID: "3f092b8e-ae0d-4b52-ac0d-323473fceb5e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.294836 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.294865 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f092b8e-ae0d-4b52-ac0d-323473fceb5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.518762 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-df70-account-create-l928f"] Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.546773 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.554984 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-dbef-account-create-mt2f4"] Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.661568 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-pf25q"] Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.686048 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-ngdf8"] Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.686123 5043 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9a3a-account-create-s7vqr"] Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.819150 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-dpcmk"] Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.882977 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-df70-account-create-l928f" event={"ID":"93c8ab01-08e3-4b78-a215-c1382e53c98f","Type":"ContainerStarted","Data":"95470002a54f4a3296e7fd5f22dcea4cd5adc553ac1855021cf584317f4a9c20"} Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.889687 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-dbef-account-create-mt2f4" event={"ID":"56abad80-cda8-4d71-a70f-4761414adb87","Type":"ContainerStarted","Data":"0c1df833c97c48b7f2c805fd5eb3858385e93aada08a8dcbe17ea5f44e232d48"} Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.892300 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ngdf8" event={"ID":"e9c4413b-cd74-4ad4-978b-089edd47d7b3","Type":"ContainerStarted","Data":"cf342403bc7a7733de110e4f56c07c31be0e6909958a149e12006e554a4b5d43"} Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.914961 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9bcf9848-5bdf-4760-829f-a92e4015ab70","Type":"ContainerStarted","Data":"240493ad736869157a76733fd09621aeba532fce6b2e99703267a4ff2ebab014"} Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.932491 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.53857634 podStartE2EDuration="13.932476118s" podCreationTimestamp="2025-11-25 07:34:35 +0000 UTC" firstStartedPulling="2025-11-25 07:34:36.234458777 +0000 UTC m=+1140.402654498" lastFinishedPulling="2025-11-25 07:34:47.628358555 +0000 UTC m=+1151.796554276" observedRunningTime="2025-11-25 
07:34:48.931176453 +0000 UTC m=+1153.099372184" watchObservedRunningTime="2025-11-25 07:34:48.932476118 +0000 UTC m=+1153.100671829" Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.957152 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pf25q" event={"ID":"3801a63c-ffa6-49c8-8bf9-2bafdad18466","Type":"ContainerStarted","Data":"8af0557b3d684a7f6a85e079291e019d8d1bf30b0ffb028a4bfea3fb9ae03735"} Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.959301 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4c5266b5-29ce-457c-b2e3-bbfab352768f","Type":"ContainerDied","Data":"e64269df9dc9ff35988c93971a1a113b35103f08ff45f1a16140643def4e0e2d"} Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.959335 5043 scope.go:117] "RemoveContainer" containerID="8771dc0e200faac46b9859b23bc4c875acc2323edddb5ffdfee09961c0d5d9fd" Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.959454 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.977078 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9a3a-account-create-s7vqr" event={"ID":"b82277a9-e2eb-45db-a8ac-ae5cd7f162d3","Type":"ContainerStarted","Data":"d2092c6cff849f959c12624bd667511c73c74422437db2c70e983248811b9928"} Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.994773 5043 generic.go:334] "Generic (PLEG): container finished" podID="292b02ee-fd80-4582-a1fa-ef9aa27c941c" containerID="8d4e5c2f293d4882994ed680c131bacb731037b791a43f350d297b37567ca96b" exitCode=0 Nov 25 07:34:48 crc kubenswrapper[5043]: I1125 07:34:48.994880 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5985cc949b-rw6ms" event={"ID":"292b02ee-fd80-4582-a1fa-ef9aa27c941c","Type":"ContainerDied","Data":"8d4e5c2f293d4882994ed680c131bacb731037b791a43f350d297b37567ca96b"} Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:48.998814 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.016488 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.034788 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 07:34:49 crc kubenswrapper[5043]: E1125 07:34:49.035360 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c5266b5-29ce-457c-b2e3-bbfab352768f" containerName="kube-state-metrics" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.035377 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c5266b5-29ce-457c-b2e3-bbfab352768f" containerName="kube-state-metrics" Nov 25 07:34:49 crc kubenswrapper[5043]: E1125 07:34:49.035389 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f092b8e-ae0d-4b52-ac0d-323473fceb5e" 
containerName="ceilometer-central-agent" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.035396 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f092b8e-ae0d-4b52-ac0d-323473fceb5e" containerName="ceilometer-central-agent" Nov 25 07:34:49 crc kubenswrapper[5043]: E1125 07:34:49.035415 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f092b8e-ae0d-4b52-ac0d-323473fceb5e" containerName="ceilometer-notification-agent" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.035422 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f092b8e-ae0d-4b52-ac0d-323473fceb5e" containerName="ceilometer-notification-agent" Nov 25 07:34:49 crc kubenswrapper[5043]: E1125 07:34:49.035436 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f092b8e-ae0d-4b52-ac0d-323473fceb5e" containerName="proxy-httpd" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.035442 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f092b8e-ae0d-4b52-ac0d-323473fceb5e" containerName="proxy-httpd" Nov 25 07:34:49 crc kubenswrapper[5043]: E1125 07:34:49.035461 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f092b8e-ae0d-4b52-ac0d-323473fceb5e" containerName="sg-core" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.035467 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f092b8e-ae0d-4b52-ac0d-323473fceb5e" containerName="sg-core" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.035649 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f092b8e-ae0d-4b52-ac0d-323473fceb5e" containerName="sg-core" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.035663 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f092b8e-ae0d-4b52-ac0d-323473fceb5e" containerName="proxy-httpd" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.035676 5043 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3f092b8e-ae0d-4b52-ac0d-323473fceb5e" containerName="ceilometer-central-agent" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.035687 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f092b8e-ae0d-4b52-ac0d-323473fceb5e" containerName="ceilometer-notification-agent" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.035698 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c5266b5-29ce-457c-b2e3-bbfab352768f" containerName="kube-state-metrics" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.036236 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.039040 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.039177 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f092b8e-ae0d-4b52-ac0d-323473fceb5e","Type":"ContainerDied","Data":"705b8aa36e01ad10c17ad9615b1cd274605ed598a739a22dd9ee2a3db312c888"} Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.039214 5043 scope.go:117] "RemoveContainer" containerID="4c754e9d66c57aae5fd89388556b160c4afc152c85dc3d6800e687d4e890e4ec" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.039341 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.039371 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.048248 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.049262 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c99eb43d-cf17-44a4-beeb-f5222c978039","Type":"ContainerStarted","Data":"509ec63c4cee26686c54eee0f1ee7981f65bfa05782d494118028d44f39b778d"} Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.081511 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.102273 5043 scope.go:117] "RemoveContainer" containerID="466a080090b48a3fefe6fb076d14921ca4061e98889272038841578991ec2c7f" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.115515 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5985cc949b-rw6ms" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.121114 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.125404 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c937dff6-4203-455c-b07a-ec16e23c746f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c937dff6-4203-455c-b07a-ec16e23c746f\") " pod="openstack/kube-state-metrics-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.125760 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjrvx\" (UniqueName: \"kubernetes.io/projected/c937dff6-4203-455c-b07a-ec16e23c746f-kube-api-access-kjrvx\") pod \"kube-state-metrics-0\" (UID: \"c937dff6-4203-455c-b07a-ec16e23c746f\") " pod="openstack/kube-state-metrics-0" Nov 25 07:34:49 crc 
kubenswrapper[5043]: I1125 07:34:49.125942 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c937dff6-4203-455c-b07a-ec16e23c746f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c937dff6-4203-455c-b07a-ec16e23c746f\") " pod="openstack/kube-state-metrics-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.127349 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c937dff6-4203-455c-b07a-ec16e23c746f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c937dff6-4203-455c-b07a-ec16e23c746f\") " pod="openstack/kube-state-metrics-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.147083 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:34:49 crc kubenswrapper[5043]: E1125 07:34:49.147883 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="292b02ee-fd80-4582-a1fa-ef9aa27c941c" containerName="neutron-httpd" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.148005 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="292b02ee-fd80-4582-a1fa-ef9aa27c941c" containerName="neutron-httpd" Nov 25 07:34:49 crc kubenswrapper[5043]: E1125 07:34:49.148119 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="292b02ee-fd80-4582-a1fa-ef9aa27c941c" containerName="neutron-api" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.148234 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="292b02ee-fd80-4582-a1fa-ef9aa27c941c" containerName="neutron-api" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.148566 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="292b02ee-fd80-4582-a1fa-ef9aa27c941c" containerName="neutron-httpd" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 
07:34:49.148704 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="292b02ee-fd80-4582-a1fa-ef9aa27c941c" containerName="neutron-api" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.152978 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.154875 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.158205 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.158656 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.159541 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.164110 5043 scope.go:117] "RemoveContainer" containerID="4319d22e85ba6a8f91458a4d12bbb6ca13bf354322efa800a1ae699390708d5d" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.213272 5043 scope.go:117] "RemoveContainer" containerID="6872a52a7564973bb862d9ea40d093259c4d2680eb90edcbd4bc429e0f2e248a" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.231750 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/292b02ee-fd80-4582-a1fa-ef9aa27c941c-combined-ca-bundle\") pod \"292b02ee-fd80-4582-a1fa-ef9aa27c941c\" (UID: \"292b02ee-fd80-4582-a1fa-ef9aa27c941c\") " Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.231825 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/292b02ee-fd80-4582-a1fa-ef9aa27c941c-httpd-config\") pod \"292b02ee-fd80-4582-a1fa-ef9aa27c941c\" (UID: 
\"292b02ee-fd80-4582-a1fa-ef9aa27c941c\") " Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.231856 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p42v5\" (UniqueName: \"kubernetes.io/projected/292b02ee-fd80-4582-a1fa-ef9aa27c941c-kube-api-access-p42v5\") pod \"292b02ee-fd80-4582-a1fa-ef9aa27c941c\" (UID: \"292b02ee-fd80-4582-a1fa-ef9aa27c941c\") " Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.231943 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/292b02ee-fd80-4582-a1fa-ef9aa27c941c-ovndb-tls-certs\") pod \"292b02ee-fd80-4582-a1fa-ef9aa27c941c\" (UID: \"292b02ee-fd80-4582-a1fa-ef9aa27c941c\") " Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.232021 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/292b02ee-fd80-4582-a1fa-ef9aa27c941c-config\") pod \"292b02ee-fd80-4582-a1fa-ef9aa27c941c\" (UID: \"292b02ee-fd80-4582-a1fa-ef9aa27c941c\") " Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.232288 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c937dff6-4203-455c-b07a-ec16e23c746f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c937dff6-4203-455c-b07a-ec16e23c746f\") " pod="openstack/kube-state-metrics-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.232381 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c937dff6-4203-455c-b07a-ec16e23c746f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c937dff6-4203-455c-b07a-ec16e23c746f\") " pod="openstack/kube-state-metrics-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.232424 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kjrvx\" (UniqueName: \"kubernetes.io/projected/c937dff6-4203-455c-b07a-ec16e23c746f-kube-api-access-kjrvx\") pod \"kube-state-metrics-0\" (UID: \"c937dff6-4203-455c-b07a-ec16e23c746f\") " pod="openstack/kube-state-metrics-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.232464 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c937dff6-4203-455c-b07a-ec16e23c746f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c937dff6-4203-455c-b07a-ec16e23c746f\") " pod="openstack/kube-state-metrics-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.238960 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c937dff6-4203-455c-b07a-ec16e23c746f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c937dff6-4203-455c-b07a-ec16e23c746f\") " pod="openstack/kube-state-metrics-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.242513 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/292b02ee-fd80-4582-a1fa-ef9aa27c941c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "292b02ee-fd80-4582-a1fa-ef9aa27c941c" (UID: "292b02ee-fd80-4582-a1fa-ef9aa27c941c"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.252727 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/292b02ee-fd80-4582-a1fa-ef9aa27c941c-kube-api-access-p42v5" (OuterVolumeSpecName: "kube-api-access-p42v5") pod "292b02ee-fd80-4582-a1fa-ef9aa27c941c" (UID: "292b02ee-fd80-4582-a1fa-ef9aa27c941c"). InnerVolumeSpecName "kube-api-access-p42v5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.256421 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c937dff6-4203-455c-b07a-ec16e23c746f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c937dff6-4203-455c-b07a-ec16e23c746f\") " pod="openstack/kube-state-metrics-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.257889 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c937dff6-4203-455c-b07a-ec16e23c746f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c937dff6-4203-455c-b07a-ec16e23c746f\") " pod="openstack/kube-state-metrics-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.262318 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjrvx\" (UniqueName: \"kubernetes.io/projected/c937dff6-4203-455c-b07a-ec16e23c746f-kube-api-access-kjrvx\") pod \"kube-state-metrics-0\" (UID: \"c937dff6-4203-455c-b07a-ec16e23c746f\") " pod="openstack/kube-state-metrics-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.303443 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/292b02ee-fd80-4582-a1fa-ef9aa27c941c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "292b02ee-fd80-4582-a1fa-ef9aa27c941c" (UID: "292b02ee-fd80-4582-a1fa-ef9aa27c941c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.306012 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/292b02ee-fd80-4582-a1fa-ef9aa27c941c-config" (OuterVolumeSpecName: "config") pod "292b02ee-fd80-4582-a1fa-ef9aa27c941c" (UID: "292b02ee-fd80-4582-a1fa-ef9aa27c941c"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.338142 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd61e257-d1f2-4d23-a148-9196f3d364b0-log-httpd\") pod \"ceilometer-0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.338197 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-scripts\") pod \"ceilometer-0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.338221 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-config-data\") pod \"ceilometer-0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.338252 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6t7k\" (UniqueName: \"kubernetes.io/projected/bd61e257-d1f2-4d23-a148-9196f3d364b0-kube-api-access-k6t7k\") pod \"ceilometer-0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.338267 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd61e257-d1f2-4d23-a148-9196f3d364b0-run-httpd\") pod \"ceilometer-0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.338283 5043 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.338321 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.338361 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.338476 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/292b02ee-fd80-4582-a1fa-ef9aa27c941c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.338488 5043 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/292b02ee-fd80-4582-a1fa-ef9aa27c941c-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.338497 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p42v5\" (UniqueName: \"kubernetes.io/projected/292b02ee-fd80-4582-a1fa-ef9aa27c941c-kube-api-access-p42v5\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.338507 5043 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/292b02ee-fd80-4582-a1fa-ef9aa27c941c-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.357784 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/292b02ee-fd80-4582-a1fa-ef9aa27c941c-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "292b02ee-fd80-4582-a1fa-ef9aa27c941c" (UID: "292b02ee-fd80-4582-a1fa-ef9aa27c941c"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.377751 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.440478 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6t7k\" (UniqueName: \"kubernetes.io/projected/bd61e257-d1f2-4d23-a148-9196f3d364b0-kube-api-access-k6t7k\") pod \"ceilometer-0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.440774 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd61e257-d1f2-4d23-a148-9196f3d364b0-run-httpd\") pod \"ceilometer-0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.441449 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.441396 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/bd61e257-d1f2-4d23-a148-9196f3d364b0-run-httpd\") pod \"ceilometer-0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.442242 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.442420 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.442708 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd61e257-d1f2-4d23-a148-9196f3d364b0-log-httpd\") pod \"ceilometer-0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.442824 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-scripts\") pod \"ceilometer-0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.443217 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-config-data\") pod \"ceilometer-0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 
07:34:49.443412 5043 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/292b02ee-fd80-4582-a1fa-ef9aa27c941c-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.444156 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd61e257-d1f2-4d23-a148-9196f3d364b0-log-httpd\") pod \"ceilometer-0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.446166 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.447542 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.448227 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.453664 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-config-data\") pod \"ceilometer-0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " pod="openstack/ceilometer-0" Nov 25 07:34:49 crc 
kubenswrapper[5043]: I1125 07:34:49.454546 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-scripts\") pod \"ceilometer-0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.464947 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6t7k\" (UniqueName: \"kubernetes.io/projected/bd61e257-d1f2-4d23-a148-9196f3d364b0-kube-api-access-k6t7k\") pod \"ceilometer-0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.495051 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.858987 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 07:34:49 crc kubenswrapper[5043]: I1125 07:34:49.986217 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:34:49 crc kubenswrapper[5043]: W1125 07:34:49.993307 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd61e257_d1f2_4d23_a148_9196f3d364b0.slice/crio-9219ebf18270772790de66c8a7bec21cd9b1a967eca4d8d2c49aef279afd3028 WatchSource:0}: Error finding container 9219ebf18270772790de66c8a7bec21cd9b1a967eca4d8d2c49aef279afd3028: Status 404 returned error can't find the container with id 9219ebf18270772790de66c8a7bec21cd9b1a967eca4d8d2c49aef279afd3028 Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.058345 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd61e257-d1f2-4d23-a148-9196f3d364b0","Type":"ContainerStarted","Data":"9219ebf18270772790de66c8a7bec21cd9b1a967eca4d8d2c49aef279afd3028"} Nov 25 
07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.065032 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c99eb43d-cf17-44a4-beeb-f5222c978039","Type":"ContainerStarted","Data":"8f8308b269616cf5617b43b73a631f951da357b5fdaad2d60706627f85f6cc70"} Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.067349 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5985cc949b-rw6ms" event={"ID":"292b02ee-fd80-4582-a1fa-ef9aa27c941c","Type":"ContainerDied","Data":"65f7d180d450adbd6e76bb8c60190d28bf35f3790a8c0dc8f89ff373f1f330a7"} Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.067445 5043 scope.go:117] "RemoveContainer" containerID="66e337fda13ba9e0a42874c1e4e4fe0fefefa6d6826e67bc20cc96b283c06eb0" Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.067525 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5985cc949b-rw6ms" Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.077422 5043 generic.go:334] "Generic (PLEG): container finished" podID="93c8ab01-08e3-4b78-a215-c1382e53c98f" containerID="f3667d462c3b265f14592ee818bc569926a74ca2090000adad64c17fcd589ae4" exitCode=0 Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.077517 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-df70-account-create-l928f" event={"ID":"93c8ab01-08e3-4b78-a215-c1382e53c98f","Type":"ContainerDied","Data":"f3667d462c3b265f14592ee818bc569926a74ca2090000adad64c17fcd589ae4"} Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.102301 5043 generic.go:334] "Generic (PLEG): container finished" podID="56abad80-cda8-4d71-a70f-4761414adb87" containerID="a69c4df5031c9410fb0833af1d4c23d5669a596b406c0a36fbbfeb145d1f03d6" exitCode=0 Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.102336 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-dbef-account-create-mt2f4" 
event={"ID":"56abad80-cda8-4d71-a70f-4761414adb87","Type":"ContainerDied","Data":"a69c4df5031c9410fb0833af1d4c23d5669a596b406c0a36fbbfeb145d1f03d6"} Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.118876 5043 scope.go:117] "RemoveContainer" containerID="8d4e5c2f293d4882994ed680c131bacb731037b791a43f350d297b37567ca96b" Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.121720 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c937dff6-4203-455c-b07a-ec16e23c746f","Type":"ContainerStarted","Data":"0f11f6054305f1a2014f3d477021406414653564572366fb0839df368d2374f7"} Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.126704 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5985cc949b-rw6ms"] Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.128211 5043 generic.go:334] "Generic (PLEG): container finished" podID="b82277a9-e2eb-45db-a8ac-ae5cd7f162d3" containerID="c8dd027dacbede12f0f851b3f658feeb93cff2a13aa7821f021c4cc7efe77275" exitCode=0 Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.128265 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9a3a-account-create-s7vqr" event={"ID":"b82277a9-e2eb-45db-a8ac-ae5cd7f162d3","Type":"ContainerDied","Data":"c8dd027dacbede12f0f851b3f658feeb93cff2a13aa7821f021c4cc7efe77275"} Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.133963 5043 generic.go:334] "Generic (PLEG): container finished" podID="4cbed283-5e56-4d9b-b749-ab8b4808834e" containerID="d376bfc7f206cb3842ae665d04879f95d43a848cf9eebe10d45ad2d3caf86429" exitCode=0 Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.134114 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dpcmk" event={"ID":"4cbed283-5e56-4d9b-b749-ab8b4808834e","Type":"ContainerDied","Data":"d376bfc7f206cb3842ae665d04879f95d43a848cf9eebe10d45ad2d3caf86429"} Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.134139 
5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dpcmk" event={"ID":"4cbed283-5e56-4d9b-b749-ab8b4808834e","Type":"ContainerStarted","Data":"38b661634079e79427f8c031ad28ee2e354c242c4e034a44b21755a959422197"} Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.139263 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5985cc949b-rw6ms"] Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.147095 5043 generic.go:334] "Generic (PLEG): container finished" podID="3801a63c-ffa6-49c8-8bf9-2bafdad18466" containerID="78ce028904d91977d0b862a2d14a1e594f3cc2c5c321f535c8aa761cfa9a50b5" exitCode=0 Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.147200 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pf25q" event={"ID":"3801a63c-ffa6-49c8-8bf9-2bafdad18466","Type":"ContainerDied","Data":"78ce028904d91977d0b862a2d14a1e594f3cc2c5c321f535c8aa761cfa9a50b5"} Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.148847 5043 generic.go:334] "Generic (PLEG): container finished" podID="e9c4413b-cd74-4ad4-978b-089edd47d7b3" containerID="05c38c091b0b299533db2b2b9e2e604db165bfb1228d71e18ec5adc8b5bf51d0" exitCode=0 Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.149254 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ngdf8" event={"ID":"e9c4413b-cd74-4ad4-978b-089edd47d7b3","Type":"ContainerDied","Data":"05c38c091b0b299533db2b2b9e2e604db165bfb1228d71e18ec5adc8b5bf51d0"} Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.319622 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.973803 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="292b02ee-fd80-4582-a1fa-ef9aa27c941c" path="/var/lib/kubelet/pods/292b02ee-fd80-4582-a1fa-ef9aa27c941c/volumes" Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 
07:34:50.975909 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f092b8e-ae0d-4b52-ac0d-323473fceb5e" path="/var/lib/kubelet/pods/3f092b8e-ae0d-4b52-ac0d-323473fceb5e/volumes" Nov 25 07:34:50 crc kubenswrapper[5043]: I1125 07:34:50.976510 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c5266b5-29ce-457c-b2e3-bbfab352768f" path="/var/lib/kubelet/pods/4c5266b5-29ce-457c-b2e3-bbfab352768f/volumes" Nov 25 07:34:51 crc kubenswrapper[5043]: I1125 07:34:51.157786 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd61e257-d1f2-4d23-a148-9196f3d364b0","Type":"ContainerStarted","Data":"75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df"} Nov 25 07:34:51 crc kubenswrapper[5043]: I1125 07:34:51.161534 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c937dff6-4203-455c-b07a-ec16e23c746f","Type":"ContainerStarted","Data":"8ddb83870375f48375f48b386063ff8de6ab9b6ac9057b75f3108e841157a4b8"} Nov 25 07:34:51 crc kubenswrapper[5043]: I1125 07:34:51.161615 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 07:34:51 crc kubenswrapper[5043]: I1125 07:34:51.164466 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c99eb43d-cf17-44a4-beeb-f5222c978039","Type":"ContainerStarted","Data":"307e55ef1be1bfd7d39c0830290c472f53e2bcc91b0fab779b36a11f39852031"} Nov 25 07:34:51 crc kubenswrapper[5043]: I1125 07:34:51.213234 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.847518662 podStartE2EDuration="3.213219859s" podCreationTimestamp="2025-11-25 07:34:48 +0000 UTC" firstStartedPulling="2025-11-25 07:34:49.863595367 +0000 UTC m=+1154.031791088" lastFinishedPulling="2025-11-25 07:34:50.229296564 +0000 UTC m=+1154.397492285" 
observedRunningTime="2025-11-25 07:34:51.19086508 +0000 UTC m=+1155.359060811" watchObservedRunningTime="2025-11-25 07:34:51.213219859 +0000 UTC m=+1155.381415580" Nov 25 07:34:51 crc kubenswrapper[5043]: I1125 07:34:51.219832 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=12.219813276 podStartE2EDuration="12.219813276s" podCreationTimestamp="2025-11-25 07:34:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:34:51.211189285 +0000 UTC m=+1155.379385006" watchObservedRunningTime="2025-11-25 07:34:51.219813276 +0000 UTC m=+1155.388008997" Nov 25 07:34:51 crc kubenswrapper[5043]: I1125 07:34:51.649659 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9a3a-account-create-s7vqr" Nov 25 07:34:51 crc kubenswrapper[5043]: I1125 07:34:51.801061 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sstrs\" (UniqueName: \"kubernetes.io/projected/b82277a9-e2eb-45db-a8ac-ae5cd7f162d3-kube-api-access-sstrs\") pod \"b82277a9-e2eb-45db-a8ac-ae5cd7f162d3\" (UID: \"b82277a9-e2eb-45db-a8ac-ae5cd7f162d3\") " Nov 25 07:34:51 crc kubenswrapper[5043]: I1125 07:34:51.801280 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b82277a9-e2eb-45db-a8ac-ae5cd7f162d3-operator-scripts\") pod \"b82277a9-e2eb-45db-a8ac-ae5cd7f162d3\" (UID: \"b82277a9-e2eb-45db-a8ac-ae5cd7f162d3\") " Nov 25 07:34:51 crc kubenswrapper[5043]: I1125 07:34:51.802374 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b82277a9-e2eb-45db-a8ac-ae5cd7f162d3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b82277a9-e2eb-45db-a8ac-ae5cd7f162d3" (UID: 
"b82277a9-e2eb-45db-a8ac-ae5cd7f162d3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:34:51 crc kubenswrapper[5043]: I1125 07:34:51.813491 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b82277a9-e2eb-45db-a8ac-ae5cd7f162d3-kube-api-access-sstrs" (OuterVolumeSpecName: "kube-api-access-sstrs") pod "b82277a9-e2eb-45db-a8ac-ae5cd7f162d3" (UID: "b82277a9-e2eb-45db-a8ac-ae5cd7f162d3"). InnerVolumeSpecName "kube-api-access-sstrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:34:51 crc kubenswrapper[5043]: I1125 07:34:51.903296 5043 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b82277a9-e2eb-45db-a8ac-ae5cd7f162d3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:51 crc kubenswrapper[5043]: I1125 07:34:51.903339 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sstrs\" (UniqueName: \"kubernetes.io/projected/b82277a9-e2eb-45db-a8ac-ae5cd7f162d3-kube-api-access-sstrs\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:51 crc kubenswrapper[5043]: I1125 07:34:51.912356 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dpcmk" Nov 25 07:34:51 crc kubenswrapper[5043]: I1125 07:34:51.916859 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-dbef-account-create-mt2f4" Nov 25 07:34:51 crc kubenswrapper[5043]: I1125 07:34:51.935916 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-df70-account-create-l928f" Nov 25 07:34:51 crc kubenswrapper[5043]: I1125 07:34:51.944328 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-pf25q" Nov 25 07:34:51 crc kubenswrapper[5043]: I1125 07:34:51.960087 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ngdf8" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.003855 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56abad80-cda8-4d71-a70f-4761414adb87-operator-scripts\") pod \"56abad80-cda8-4d71-a70f-4761414adb87\" (UID: \"56abad80-cda8-4d71-a70f-4761414adb87\") " Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.003941 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms4s5\" (UniqueName: \"kubernetes.io/projected/4cbed283-5e56-4d9b-b749-ab8b4808834e-kube-api-access-ms4s5\") pod \"4cbed283-5e56-4d9b-b749-ab8b4808834e\" (UID: \"4cbed283-5e56-4d9b-b749-ab8b4808834e\") " Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.004007 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9pdq\" (UniqueName: \"kubernetes.io/projected/3801a63c-ffa6-49c8-8bf9-2bafdad18466-kube-api-access-h9pdq\") pod \"3801a63c-ffa6-49c8-8bf9-2bafdad18466\" (UID: \"3801a63c-ffa6-49c8-8bf9-2bafdad18466\") " Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.004054 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp92k\" (UniqueName: \"kubernetes.io/projected/56abad80-cda8-4d71-a70f-4761414adb87-kube-api-access-wp92k\") pod \"56abad80-cda8-4d71-a70f-4761414adb87\" (UID: \"56abad80-cda8-4d71-a70f-4761414adb87\") " Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.004098 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93c8ab01-08e3-4b78-a215-c1382e53c98f-operator-scripts\") pod 
\"93c8ab01-08e3-4b78-a215-c1382e53c98f\" (UID: \"93c8ab01-08e3-4b78-a215-c1382e53c98f\") " Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.004179 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzq5q\" (UniqueName: \"kubernetes.io/projected/93c8ab01-08e3-4b78-a215-c1382e53c98f-kube-api-access-xzq5q\") pod \"93c8ab01-08e3-4b78-a215-c1382e53c98f\" (UID: \"93c8ab01-08e3-4b78-a215-c1382e53c98f\") " Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.004212 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3801a63c-ffa6-49c8-8bf9-2bafdad18466-operator-scripts\") pod \"3801a63c-ffa6-49c8-8bf9-2bafdad18466\" (UID: \"3801a63c-ffa6-49c8-8bf9-2bafdad18466\") " Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.004244 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4cbed283-5e56-4d9b-b749-ab8b4808834e-operator-scripts\") pod \"4cbed283-5e56-4d9b-b749-ab8b4808834e\" (UID: \"4cbed283-5e56-4d9b-b749-ab8b4808834e\") " Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.004955 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cbed283-5e56-4d9b-b749-ab8b4808834e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4cbed283-5e56-4d9b-b749-ab8b4808834e" (UID: "4cbed283-5e56-4d9b-b749-ab8b4808834e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.005281 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56abad80-cda8-4d71-a70f-4761414adb87-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "56abad80-cda8-4d71-a70f-4761414adb87" (UID: "56abad80-cda8-4d71-a70f-4761414adb87"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.006779 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93c8ab01-08e3-4b78-a215-c1382e53c98f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93c8ab01-08e3-4b78-a215-c1382e53c98f" (UID: "93c8ab01-08e3-4b78-a215-c1382e53c98f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.009142 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cbed283-5e56-4d9b-b749-ab8b4808834e-kube-api-access-ms4s5" (OuterVolumeSpecName: "kube-api-access-ms4s5") pod "4cbed283-5e56-4d9b-b749-ab8b4808834e" (UID: "4cbed283-5e56-4d9b-b749-ab8b4808834e"). InnerVolumeSpecName "kube-api-access-ms4s5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.011096 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c8ab01-08e3-4b78-a215-c1382e53c98f-kube-api-access-xzq5q" (OuterVolumeSpecName: "kube-api-access-xzq5q") pod "93c8ab01-08e3-4b78-a215-c1382e53c98f" (UID: "93c8ab01-08e3-4b78-a215-c1382e53c98f"). InnerVolumeSpecName "kube-api-access-xzq5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.011547 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3801a63c-ffa6-49c8-8bf9-2bafdad18466-kube-api-access-h9pdq" (OuterVolumeSpecName: "kube-api-access-h9pdq") pod "3801a63c-ffa6-49c8-8bf9-2bafdad18466" (UID: "3801a63c-ffa6-49c8-8bf9-2bafdad18466"). InnerVolumeSpecName "kube-api-access-h9pdq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.014764 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3801a63c-ffa6-49c8-8bf9-2bafdad18466-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3801a63c-ffa6-49c8-8bf9-2bafdad18466" (UID: "3801a63c-ffa6-49c8-8bf9-2bafdad18466"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.015836 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56abad80-cda8-4d71-a70f-4761414adb87-kube-api-access-wp92k" (OuterVolumeSpecName: "kube-api-access-wp92k") pod "56abad80-cda8-4d71-a70f-4761414adb87" (UID: "56abad80-cda8-4d71-a70f-4761414adb87"). InnerVolumeSpecName "kube-api-access-wp92k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.106027 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9c4413b-cd74-4ad4-978b-089edd47d7b3-operator-scripts\") pod \"e9c4413b-cd74-4ad4-978b-089edd47d7b3\" (UID: \"e9c4413b-cd74-4ad4-978b-089edd47d7b3\") " Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.106080 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcfwb\" (UniqueName: \"kubernetes.io/projected/e9c4413b-cd74-4ad4-978b-089edd47d7b3-kube-api-access-wcfwb\") pod \"e9c4413b-cd74-4ad4-978b-089edd47d7b3\" (UID: \"e9c4413b-cd74-4ad4-978b-089edd47d7b3\") " Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.108743 5043 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93c8ab01-08e3-4b78-a215-c1382e53c98f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:52 crc 
kubenswrapper[5043]: I1125 07:34:52.108772 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzq5q\" (UniqueName: \"kubernetes.io/projected/93c8ab01-08e3-4b78-a215-c1382e53c98f-kube-api-access-xzq5q\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.108784 5043 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3801a63c-ffa6-49c8-8bf9-2bafdad18466-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.108795 5043 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4cbed283-5e56-4d9b-b749-ab8b4808834e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.108805 5043 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56abad80-cda8-4d71-a70f-4761414adb87-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.108817 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms4s5\" (UniqueName: \"kubernetes.io/projected/4cbed283-5e56-4d9b-b749-ab8b4808834e-kube-api-access-ms4s5\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.108826 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9pdq\" (UniqueName: \"kubernetes.io/projected/3801a63c-ffa6-49c8-8bf9-2bafdad18466-kube-api-access-h9pdq\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.108836 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp92k\" (UniqueName: \"kubernetes.io/projected/56abad80-cda8-4d71-a70f-4761414adb87-kube-api-access-wp92k\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.109416 5043 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9c4413b-cd74-4ad4-978b-089edd47d7b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e9c4413b-cd74-4ad4-978b-089edd47d7b3" (UID: "e9c4413b-cd74-4ad4-978b-089edd47d7b3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.113128 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9c4413b-cd74-4ad4-978b-089edd47d7b3-kube-api-access-wcfwb" (OuterVolumeSpecName: "kube-api-access-wcfwb") pod "e9c4413b-cd74-4ad4-978b-089edd47d7b3" (UID: "e9c4413b-cd74-4ad4-978b-089edd47d7b3"). InnerVolumeSpecName "kube-api-access-wcfwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.173924 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ngdf8" event={"ID":"e9c4413b-cd74-4ad4-978b-089edd47d7b3","Type":"ContainerDied","Data":"cf342403bc7a7733de110e4f56c07c31be0e6909958a149e12006e554a4b5d43"} Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.173985 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf342403bc7a7733de110e4f56c07c31be0e6909958a149e12006e554a4b5d43" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.173957 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-ngdf8" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.178789 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-df70-account-create-l928f" event={"ID":"93c8ab01-08e3-4b78-a215-c1382e53c98f","Type":"ContainerDied","Data":"95470002a54f4a3296e7fd5f22dcea4cd5adc553ac1855021cf584317f4a9c20"} Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.178826 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95470002a54f4a3296e7fd5f22dcea4cd5adc553ac1855021cf584317f4a9c20" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.178881 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-df70-account-create-l928f" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.183948 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-dbef-account-create-mt2f4" event={"ID":"56abad80-cda8-4d71-a70f-4761414adb87","Type":"ContainerDied","Data":"0c1df833c97c48b7f2c805fd5eb3858385e93aada08a8dcbe17ea5f44e232d48"} Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.183987 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c1df833c97c48b7f2c805fd5eb3858385e93aada08a8dcbe17ea5f44e232d48" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.184039 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-dbef-account-create-mt2f4" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.192549 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9a3a-account-create-s7vqr" event={"ID":"b82277a9-e2eb-45db-a8ac-ae5cd7f162d3","Type":"ContainerDied","Data":"d2092c6cff849f959c12624bd667511c73c74422437db2c70e983248811b9928"} Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.192592 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2092c6cff849f959c12624bd667511c73c74422437db2c70e983248811b9928" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.192678 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9a3a-account-create-s7vqr" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.199268 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dpcmk" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.203133 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dpcmk" event={"ID":"4cbed283-5e56-4d9b-b749-ab8b4808834e","Type":"ContainerDied","Data":"38b661634079e79427f8c031ad28ee2e354c242c4e034a44b21755a959422197"} Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.203165 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38b661634079e79427f8c031ad28ee2e354c242c4e034a44b21755a959422197" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.211273 5043 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9c4413b-cd74-4ad4-978b-089edd47d7b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.211299 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcfwb\" (UniqueName: 
\"kubernetes.io/projected/e9c4413b-cd74-4ad4-978b-089edd47d7b3-kube-api-access-wcfwb\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.218078 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pf25q" event={"ID":"3801a63c-ffa6-49c8-8bf9-2bafdad18466","Type":"ContainerDied","Data":"8af0557b3d684a7f6a85e079291e019d8d1bf30b0ffb028a4bfea3fb9ae03735"} Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.218114 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8af0557b3d684a7f6a85e079291e019d8d1bf30b0ffb028a4bfea3fb9ae03735" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.218197 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pf25q" Nov 25 07:34:52 crc kubenswrapper[5043]: I1125 07:34:52.222819 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd61e257-d1f2-4d23-a148-9196f3d364b0","Type":"ContainerStarted","Data":"bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553"} Nov 25 07:34:53 crc kubenswrapper[5043]: I1125 07:34:53.266342 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd61e257-d1f2-4d23-a148-9196f3d364b0","Type":"ContainerStarted","Data":"52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd"} Nov 25 07:34:54 crc kubenswrapper[5043]: I1125 07:34:54.280097 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd61e257-d1f2-4d23-a148-9196f3d364b0","Type":"ContainerStarted","Data":"f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e"} Nov 25 07:34:54 crc kubenswrapper[5043]: I1125 07:34:54.281324 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 07:34:54 crc kubenswrapper[5043]: I1125 07:34:54.280336 5043 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd61e257-d1f2-4d23-a148-9196f3d364b0" containerName="sg-core" containerID="cri-o://52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd" gracePeriod=30 Nov 25 07:34:54 crc kubenswrapper[5043]: I1125 07:34:54.280381 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd61e257-d1f2-4d23-a148-9196f3d364b0" containerName="ceilometer-notification-agent" containerID="cri-o://bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553" gracePeriod=30 Nov 25 07:34:54 crc kubenswrapper[5043]: I1125 07:34:54.280394 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd61e257-d1f2-4d23-a148-9196f3d364b0" containerName="proxy-httpd" containerID="cri-o://f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e" gracePeriod=30 Nov 25 07:34:54 crc kubenswrapper[5043]: I1125 07:34:54.280235 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bd61e257-d1f2-4d23-a148-9196f3d364b0" containerName="ceilometer-central-agent" containerID="cri-o://75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df" gracePeriod=30 Nov 25 07:34:54 crc kubenswrapper[5043]: I1125 07:34:54.310879 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.096840859 podStartE2EDuration="5.310857878s" podCreationTimestamp="2025-11-25 07:34:49 +0000 UTC" firstStartedPulling="2025-11-25 07:34:49.9957308 +0000 UTC m=+1154.163926521" lastFinishedPulling="2025-11-25 07:34:53.209747819 +0000 UTC m=+1157.377943540" observedRunningTime="2025-11-25 07:34:54.305652388 +0000 UTC m=+1158.473848129" watchObservedRunningTime="2025-11-25 07:34:54.310857878 +0000 UTC m=+1158.479053599" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.135405 5043 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.268902 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd61e257-d1f2-4d23-a148-9196f3d364b0-run-httpd\") pod \"bd61e257-d1f2-4d23-a148-9196f3d364b0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.268969 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-ceilometer-tls-certs\") pod \"bd61e257-d1f2-4d23-a148-9196f3d364b0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.269025 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-combined-ca-bundle\") pod \"bd61e257-d1f2-4d23-a148-9196f3d364b0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.269097 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd61e257-d1f2-4d23-a148-9196f3d364b0-log-httpd\") pod \"bd61e257-d1f2-4d23-a148-9196f3d364b0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.269121 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-sg-core-conf-yaml\") pod \"bd61e257-d1f2-4d23-a148-9196f3d364b0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.269125 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/bd61e257-d1f2-4d23-a148-9196f3d364b0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bd61e257-d1f2-4d23-a148-9196f3d364b0" (UID: "bd61e257-d1f2-4d23-a148-9196f3d364b0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.269174 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-scripts\") pod \"bd61e257-d1f2-4d23-a148-9196f3d364b0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.269235 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-config-data\") pod \"bd61e257-d1f2-4d23-a148-9196f3d364b0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.269316 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6t7k\" (UniqueName: \"kubernetes.io/projected/bd61e257-d1f2-4d23-a148-9196f3d364b0-kube-api-access-k6t7k\") pod \"bd61e257-d1f2-4d23-a148-9196f3d364b0\" (UID: \"bd61e257-d1f2-4d23-a148-9196f3d364b0\") " Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.269770 5043 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd61e257-d1f2-4d23-a148-9196f3d364b0-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.270893 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd61e257-d1f2-4d23-a148-9196f3d364b0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bd61e257-d1f2-4d23-a148-9196f3d364b0" (UID: "bd61e257-d1f2-4d23-a148-9196f3d364b0"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.275018 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd61e257-d1f2-4d23-a148-9196f3d364b0-kube-api-access-k6t7k" (OuterVolumeSpecName: "kube-api-access-k6t7k") pod "bd61e257-d1f2-4d23-a148-9196f3d364b0" (UID: "bd61e257-d1f2-4d23-a148-9196f3d364b0"). InnerVolumeSpecName "kube-api-access-k6t7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.277009 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-scripts" (OuterVolumeSpecName: "scripts") pod "bd61e257-d1f2-4d23-a148-9196f3d364b0" (UID: "bd61e257-d1f2-4d23-a148-9196f3d364b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.293432 5043 generic.go:334] "Generic (PLEG): container finished" podID="bd61e257-d1f2-4d23-a148-9196f3d364b0" containerID="f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e" exitCode=0 Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.293470 5043 generic.go:334] "Generic (PLEG): container finished" podID="bd61e257-d1f2-4d23-a148-9196f3d364b0" containerID="52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd" exitCode=2 Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.293482 5043 generic.go:334] "Generic (PLEG): container finished" podID="bd61e257-d1f2-4d23-a148-9196f3d364b0" containerID="bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553" exitCode=0 Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.293493 5043 generic.go:334] "Generic (PLEG): container finished" podID="bd61e257-d1f2-4d23-a148-9196f3d364b0" containerID="75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df" exitCode=0 Nov 25 07:34:55 crc kubenswrapper[5043]: 
I1125 07:34:55.293517 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd61e257-d1f2-4d23-a148-9196f3d364b0","Type":"ContainerDied","Data":"f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e"} Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.293544 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd61e257-d1f2-4d23-a148-9196f3d364b0","Type":"ContainerDied","Data":"52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd"} Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.293557 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd61e257-d1f2-4d23-a148-9196f3d364b0","Type":"ContainerDied","Data":"bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553"} Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.293569 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd61e257-d1f2-4d23-a148-9196f3d364b0","Type":"ContainerDied","Data":"75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df"} Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.293580 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bd61e257-d1f2-4d23-a148-9196f3d364b0","Type":"ContainerDied","Data":"9219ebf18270772790de66c8a7bec21cd9b1a967eca4d8d2c49aef279afd3028"} Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.293617 5043 scope.go:117] "RemoveContainer" containerID="f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.293763 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.304840 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bd61e257-d1f2-4d23-a148-9196f3d364b0" (UID: "bd61e257-d1f2-4d23-a148-9196f3d364b0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.328704 5043 scope.go:117] "RemoveContainer" containerID="52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.341779 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "bd61e257-d1f2-4d23-a148-9196f3d364b0" (UID: "bd61e257-d1f2-4d23-a148-9196f3d364b0"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.353879 5043 scope.go:117] "RemoveContainer" containerID="bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.354136 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.357937 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-config-data" (OuterVolumeSpecName: "config-data") pod "bd61e257-d1f2-4d23-a148-9196f3d364b0" (UID: "bd61e257-d1f2-4d23-a148-9196f3d364b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.361424 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd61e257-d1f2-4d23-a148-9196f3d364b0" (UID: "bd61e257-d1f2-4d23-a148-9196f3d364b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.371656 5043 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bd61e257-d1f2-4d23-a148-9196f3d364b0-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.371686 5043 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.371705 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.371720 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.371738 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6t7k\" (UniqueName: \"kubernetes.io/projected/bd61e257-d1f2-4d23-a148-9196f3d364b0-kube-api-access-k6t7k\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.371755 5043 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.371772 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd61e257-d1f2-4d23-a148-9196f3d364b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.371810 5043 scope.go:117] "RemoveContainer" containerID="75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.390570 5043 scope.go:117] "RemoveContainer" containerID="f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e" Nov 25 07:34:55 crc kubenswrapper[5043]: E1125 07:34:55.391014 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e\": container with ID starting with f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e not found: ID does not exist" containerID="f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.391051 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e"} err="failed to get container status \"f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e\": rpc error: code = NotFound desc = could not find container \"f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e\": container with ID starting with f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e not found: ID does not exist" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.391076 5043 scope.go:117] "RemoveContainer" containerID="52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd" Nov 25 
07:34:55 crc kubenswrapper[5043]: E1125 07:34:55.391437 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd\": container with ID starting with 52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd not found: ID does not exist" containerID="52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.391458 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd"} err="failed to get container status \"52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd\": rpc error: code = NotFound desc = could not find container \"52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd\": container with ID starting with 52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd not found: ID does not exist" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.391470 5043 scope.go:117] "RemoveContainer" containerID="bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553" Nov 25 07:34:55 crc kubenswrapper[5043]: E1125 07:34:55.391982 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553\": container with ID starting with bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553 not found: ID does not exist" containerID="bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.392021 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553"} err="failed to get container status 
\"bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553\": rpc error: code = NotFound desc = could not find container \"bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553\": container with ID starting with bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553 not found: ID does not exist" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.392050 5043 scope.go:117] "RemoveContainer" containerID="75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df" Nov 25 07:34:55 crc kubenswrapper[5043]: E1125 07:34:55.392354 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df\": container with ID starting with 75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df not found: ID does not exist" containerID="75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.392381 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df"} err="failed to get container status \"75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df\": rpc error: code = NotFound desc = could not find container \"75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df\": container with ID starting with 75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df not found: ID does not exist" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.392398 5043 scope.go:117] "RemoveContainer" containerID="f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.393683 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e"} err="failed to get 
container status \"f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e\": rpc error: code = NotFound desc = could not find container \"f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e\": container with ID starting with f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e not found: ID does not exist" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.393738 5043 scope.go:117] "RemoveContainer" containerID="52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.394006 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd"} err="failed to get container status \"52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd\": rpc error: code = NotFound desc = could not find container \"52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd\": container with ID starting with 52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd not found: ID does not exist" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.394025 5043 scope.go:117] "RemoveContainer" containerID="bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.394293 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553"} err="failed to get container status \"bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553\": rpc error: code = NotFound desc = could not find container \"bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553\": container with ID starting with bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553 not found: ID does not exist" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.394355 5043 scope.go:117] "RemoveContainer" 
containerID="75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.394783 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df"} err="failed to get container status \"75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df\": rpc error: code = NotFound desc = could not find container \"75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df\": container with ID starting with 75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df not found: ID does not exist" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.394861 5043 scope.go:117] "RemoveContainer" containerID="f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.395261 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e"} err="failed to get container status \"f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e\": rpc error: code = NotFound desc = could not find container \"f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e\": container with ID starting with f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e not found: ID does not exist" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.395283 5043 scope.go:117] "RemoveContainer" containerID="52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.395702 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd"} err="failed to get container status \"52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd\": rpc error: code = NotFound desc = could 
not find container \"52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd\": container with ID starting with 52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd not found: ID does not exist" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.395736 5043 scope.go:117] "RemoveContainer" containerID="bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.396014 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553"} err="failed to get container status \"bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553\": rpc error: code = NotFound desc = could not find container \"bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553\": container with ID starting with bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553 not found: ID does not exist" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.396045 5043 scope.go:117] "RemoveContainer" containerID="75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.396326 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df"} err="failed to get container status \"75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df\": rpc error: code = NotFound desc = could not find container \"75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df\": container with ID starting with 75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df not found: ID does not exist" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.396349 5043 scope.go:117] "RemoveContainer" containerID="f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 
07:34:55.396726 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e"} err="failed to get container status \"f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e\": rpc error: code = NotFound desc = could not find container \"f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e\": container with ID starting with f2fad0da1bec5d02be182086cf937ba72cce2c1ac1356fb67dc940fd1ffb8a0e not found: ID does not exist" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.396750 5043 scope.go:117] "RemoveContainer" containerID="52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.397028 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd"} err="failed to get container status \"52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd\": rpc error: code = NotFound desc = could not find container \"52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd\": container with ID starting with 52b0e2196d241bc2bebe90bf0a1366476b610864a4225c4b99be34f25f92d4fd not found: ID does not exist" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.397049 5043 scope.go:117] "RemoveContainer" containerID="bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.397302 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553"} err="failed to get container status \"bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553\": rpc error: code = NotFound desc = could not find container \"bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553\": container with ID starting with 
bb8e40895681c97f44d60526f3978326d396c4c8ef4f7ca3e2ead759f5f33553 not found: ID does not exist" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.397328 5043 scope.go:117] "RemoveContainer" containerID="75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.397594 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df"} err="failed to get container status \"75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df\": rpc error: code = NotFound desc = could not find container \"75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df\": container with ID starting with 75b093447b048e832a42b7d625c0b2f8c245918940754f6597fe83268b9712df not found: ID does not exist" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.577210 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.628507 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.634564 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.657961 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:34:55 crc kubenswrapper[5043]: E1125 07:34:55.658420 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cbed283-5e56-4d9b-b749-ab8b4808834e" containerName="mariadb-database-create" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.658444 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cbed283-5e56-4d9b-b749-ab8b4808834e" containerName="mariadb-database-create" Nov 25 07:34:55 crc kubenswrapper[5043]: E1125 07:34:55.658469 5043 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="bd61e257-d1f2-4d23-a148-9196f3d364b0" containerName="ceilometer-notification-agent" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.658478 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd61e257-d1f2-4d23-a148-9196f3d364b0" containerName="ceilometer-notification-agent" Nov 25 07:34:55 crc kubenswrapper[5043]: E1125 07:34:55.658497 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd61e257-d1f2-4d23-a148-9196f3d364b0" containerName="sg-core" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.658506 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd61e257-d1f2-4d23-a148-9196f3d364b0" containerName="sg-core" Nov 25 07:34:55 crc kubenswrapper[5043]: E1125 07:34:55.658521 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd61e257-d1f2-4d23-a148-9196f3d364b0" containerName="proxy-httpd" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.658528 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd61e257-d1f2-4d23-a148-9196f3d364b0" containerName="proxy-httpd" Nov 25 07:34:55 crc kubenswrapper[5043]: E1125 07:34:55.658543 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd61e257-d1f2-4d23-a148-9196f3d364b0" containerName="ceilometer-central-agent" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.658553 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd61e257-d1f2-4d23-a148-9196f3d364b0" containerName="ceilometer-central-agent" Nov 25 07:34:55 crc kubenswrapper[5043]: E1125 07:34:55.658566 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82277a9-e2eb-45db-a8ac-ae5cd7f162d3" containerName="mariadb-account-create" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.658573 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82277a9-e2eb-45db-a8ac-ae5cd7f162d3" containerName="mariadb-account-create" Nov 25 07:34:55 crc kubenswrapper[5043]: E1125 07:34:55.658586 5043 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3801a63c-ffa6-49c8-8bf9-2bafdad18466" containerName="mariadb-database-create" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.658594 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="3801a63c-ffa6-49c8-8bf9-2bafdad18466" containerName="mariadb-database-create" Nov 25 07:34:55 crc kubenswrapper[5043]: E1125 07:34:55.658630 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56abad80-cda8-4d71-a70f-4761414adb87" containerName="mariadb-account-create" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.658640 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="56abad80-cda8-4d71-a70f-4761414adb87" containerName="mariadb-account-create" Nov 25 07:34:55 crc kubenswrapper[5043]: E1125 07:34:55.658650 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c4413b-cd74-4ad4-978b-089edd47d7b3" containerName="mariadb-database-create" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.658657 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c4413b-cd74-4ad4-978b-089edd47d7b3" containerName="mariadb-database-create" Nov 25 07:34:55 crc kubenswrapper[5043]: E1125 07:34:55.658674 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c8ab01-08e3-4b78-a215-c1382e53c98f" containerName="mariadb-account-create" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.658682 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c8ab01-08e3-4b78-a215-c1382e53c98f" containerName="mariadb-account-create" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.658923 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9c4413b-cd74-4ad4-978b-089edd47d7b3" containerName="mariadb-database-create" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.658941 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="3801a63c-ffa6-49c8-8bf9-2bafdad18466" 
containerName="mariadb-database-create" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.658957 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="93c8ab01-08e3-4b78-a215-c1382e53c98f" containerName="mariadb-account-create" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.658971 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd61e257-d1f2-4d23-a148-9196f3d364b0" containerName="ceilometer-notification-agent" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.658987 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd61e257-d1f2-4d23-a148-9196f3d364b0" containerName="ceilometer-central-agent" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.658996 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="b82277a9-e2eb-45db-a8ac-ae5cd7f162d3" containerName="mariadb-account-create" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.659009 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd61e257-d1f2-4d23-a148-9196f3d364b0" containerName="sg-core" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.659019 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="56abad80-cda8-4d71-a70f-4761414adb87" containerName="mariadb-account-create" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.659027 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cbed283-5e56-4d9b-b749-ab8b4808834e" containerName="mariadb-database-create" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.659040 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd61e257-d1f2-4d23-a148-9196f3d364b0" containerName="proxy-httpd" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.662035 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.664839 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.665018 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.665960 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.669154 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.779766 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.779829 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.779856 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.779892 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15bbade-6883-4984-8833-7307f59e0831-run-httpd\") pod \"ceilometer-0\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.779950 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5lkg\" (UniqueName: \"kubernetes.io/projected/b15bbade-6883-4984-8833-7307f59e0831-kube-api-access-f5lkg\") pod \"ceilometer-0\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.780015 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-scripts\") pod \"ceilometer-0\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.780182 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15bbade-6883-4984-8833-7307f59e0831-log-httpd\") pod \"ceilometer-0\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.780241 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-config-data\") pod \"ceilometer-0\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.881847 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-scripts\") pod \"ceilometer-0\" (UID: 
\"b15bbade-6883-4984-8833-7307f59e0831\") " pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.882168 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15bbade-6883-4984-8833-7307f59e0831-log-httpd\") pod \"ceilometer-0\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.882194 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-config-data\") pod \"ceilometer-0\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.882741 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15bbade-6883-4984-8833-7307f59e0831-log-httpd\") pod \"ceilometer-0\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.882844 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.883268 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.883305 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.883376 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15bbade-6883-4984-8833-7307f59e0831-run-httpd\") pod \"ceilometer-0\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.883445 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5lkg\" (UniqueName: \"kubernetes.io/projected/b15bbade-6883-4984-8833-7307f59e0831-kube-api-access-f5lkg\") pod \"ceilometer-0\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.884199 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15bbade-6883-4984-8833-7307f59e0831-run-httpd\") pod \"ceilometer-0\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.887169 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-b5bc7cfb-2sfts" podUID="9955ab7e-1d74-461a-a9b2-73e9f82d48fe" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.887272 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.887513 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.889893 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.889966 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-scripts\") pod \"ceilometer-0\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.891733 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-config-data\") pod \"ceilometer-0\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.901883 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.914595 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5lkg\" (UniqueName: \"kubernetes.io/projected/b15bbade-6883-4984-8833-7307f59e0831-kube-api-access-f5lkg\") pod \"ceilometer-0\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " pod="openstack/ceilometer-0" Nov 25 07:34:55 crc kubenswrapper[5043]: I1125 07:34:55.995275 5043 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:34:56 crc kubenswrapper[5043]: I1125 07:34:56.746579 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:34:56 crc kubenswrapper[5043]: W1125 07:34:56.749086 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb15bbade_6883_4984_8833_7307f59e0831.slice/crio-886feba65ba8012dca9433f46907ca4be942528b83f3ac14e6542de2f3dbcb58 WatchSource:0}: Error finding container 886feba65ba8012dca9433f46907ca4be942528b83f3ac14e6542de2f3dbcb58: Status 404 returned error can't find the container with id 886feba65ba8012dca9433f46907ca4be942528b83f3ac14e6542de2f3dbcb58 Nov 25 07:34:56 crc kubenswrapper[5043]: I1125 07:34:56.972893 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd61e257-d1f2-4d23-a148-9196f3d364b0" path="/var/lib/kubelet/pods/bd61e257-d1f2-4d23-a148-9196f3d364b0/volumes" Nov 25 07:34:57 crc kubenswrapper[5043]: I1125 07:34:57.316889 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b15bbade-6883-4984-8833-7307f59e0831","Type":"ContainerStarted","Data":"886feba65ba8012dca9433f46907ca4be942528b83f3ac14e6542de2f3dbcb58"} Nov 25 07:34:57 crc kubenswrapper[5043]: I1125 07:34:57.441501 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2sd2w"] Nov 25 07:34:57 crc kubenswrapper[5043]: I1125 07:34:57.442592 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2sd2w" Nov 25 07:34:57 crc kubenswrapper[5043]: W1125 07:34:57.444452 5043 reflector.go:561] object-"openstack"/"nova-cell0-conductor-config-data": failed to list *v1.Secret: secrets "nova-cell0-conductor-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Nov 25 07:34:57 crc kubenswrapper[5043]: E1125 07:34:57.444483 5043 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell0-conductor-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nova-cell0-conductor-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 07:34:57 crc kubenswrapper[5043]: W1125 07:34:57.444660 5043 reflector.go:561] object-"openstack"/"nova-nova-dockercfg-rphjt": failed to list *v1.Secret: secrets "nova-nova-dockercfg-rphjt" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Nov 25 07:34:57 crc kubenswrapper[5043]: E1125 07:34:57.444731 5043 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-nova-dockercfg-rphjt\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nova-nova-dockercfg-rphjt\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 07:34:57 crc kubenswrapper[5043]: W1125 07:34:57.449380 5043 reflector.go:561] object-"openstack"/"nova-cell0-conductor-scripts": failed to list *v1.Secret: secrets "nova-cell0-conductor-scripts" is forbidden: User "system:node:crc" 
cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Nov 25 07:34:57 crc kubenswrapper[5043]: E1125 07:34:57.449413 5043 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell0-conductor-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nova-cell0-conductor-scripts\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 07:34:57 crc kubenswrapper[5043]: I1125 07:34:57.465531 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2sd2w"] Nov 25 07:34:57 crc kubenswrapper[5043]: I1125 07:34:57.510774 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c561c664-fb47-4c58-971f-b32fe1256a9f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2sd2w\" (UID: \"c561c664-fb47-4c58-971f-b32fe1256a9f\") " pod="openstack/nova-cell0-conductor-db-sync-2sd2w" Nov 25 07:34:57 crc kubenswrapper[5043]: I1125 07:34:57.510866 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zq2h\" (UniqueName: \"kubernetes.io/projected/c561c664-fb47-4c58-971f-b32fe1256a9f-kube-api-access-5zq2h\") pod \"nova-cell0-conductor-db-sync-2sd2w\" (UID: \"c561c664-fb47-4c58-971f-b32fe1256a9f\") " pod="openstack/nova-cell0-conductor-db-sync-2sd2w" Nov 25 07:34:57 crc kubenswrapper[5043]: I1125 07:34:57.510923 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c561c664-fb47-4c58-971f-b32fe1256a9f-config-data\") pod \"nova-cell0-conductor-db-sync-2sd2w\" (UID: \"c561c664-fb47-4c58-971f-b32fe1256a9f\") " 
pod="openstack/nova-cell0-conductor-db-sync-2sd2w" Nov 25 07:34:57 crc kubenswrapper[5043]: I1125 07:34:57.510940 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c561c664-fb47-4c58-971f-b32fe1256a9f-scripts\") pod \"nova-cell0-conductor-db-sync-2sd2w\" (UID: \"c561c664-fb47-4c58-971f-b32fe1256a9f\") " pod="openstack/nova-cell0-conductor-db-sync-2sd2w" Nov 25 07:34:57 crc kubenswrapper[5043]: I1125 07:34:57.613300 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c561c664-fb47-4c58-971f-b32fe1256a9f-config-data\") pod \"nova-cell0-conductor-db-sync-2sd2w\" (UID: \"c561c664-fb47-4c58-971f-b32fe1256a9f\") " pod="openstack/nova-cell0-conductor-db-sync-2sd2w" Nov 25 07:34:57 crc kubenswrapper[5043]: I1125 07:34:57.613364 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c561c664-fb47-4c58-971f-b32fe1256a9f-scripts\") pod \"nova-cell0-conductor-db-sync-2sd2w\" (UID: \"c561c664-fb47-4c58-971f-b32fe1256a9f\") " pod="openstack/nova-cell0-conductor-db-sync-2sd2w" Nov 25 07:34:57 crc kubenswrapper[5043]: I1125 07:34:57.613471 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c561c664-fb47-4c58-971f-b32fe1256a9f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2sd2w\" (UID: \"c561c664-fb47-4c58-971f-b32fe1256a9f\") " pod="openstack/nova-cell0-conductor-db-sync-2sd2w" Nov 25 07:34:57 crc kubenswrapper[5043]: I1125 07:34:57.613539 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zq2h\" (UniqueName: \"kubernetes.io/projected/c561c664-fb47-4c58-971f-b32fe1256a9f-kube-api-access-5zq2h\") pod \"nova-cell0-conductor-db-sync-2sd2w\" (UID: \"c561c664-fb47-4c58-971f-b32fe1256a9f\") " 
pod="openstack/nova-cell0-conductor-db-sync-2sd2w" Nov 25 07:34:57 crc kubenswrapper[5043]: I1125 07:34:57.617965 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c561c664-fb47-4c58-971f-b32fe1256a9f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2sd2w\" (UID: \"c561c664-fb47-4c58-971f-b32fe1256a9f\") " pod="openstack/nova-cell0-conductor-db-sync-2sd2w" Nov 25 07:34:57 crc kubenswrapper[5043]: I1125 07:34:57.629505 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zq2h\" (UniqueName: \"kubernetes.io/projected/c561c664-fb47-4c58-971f-b32fe1256a9f-kube-api-access-5zq2h\") pod \"nova-cell0-conductor-db-sync-2sd2w\" (UID: \"c561c664-fb47-4c58-971f-b32fe1256a9f\") " pod="openstack/nova-cell0-conductor-db-sync-2sd2w" Nov 25 07:34:57 crc kubenswrapper[5043]: I1125 07:34:57.669580 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:34:58 crc kubenswrapper[5043]: I1125 07:34:58.280988 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 07:34:58 crc kubenswrapper[5043]: I1125 07:34:58.290383 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c561c664-fb47-4c58-971f-b32fe1256a9f-config-data\") pod \"nova-cell0-conductor-db-sync-2sd2w\" (UID: \"c561c664-fb47-4c58-971f-b32fe1256a9f\") " pod="openstack/nova-cell0-conductor-db-sync-2sd2w" Nov 25 07:34:58 crc kubenswrapper[5043]: I1125 07:34:58.324801 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b15bbade-6883-4984-8833-7307f59e0831","Type":"ContainerStarted","Data":"42e5361f1870b9c8b1350e984325261cc336ff7f019eaff89ad35322c18ef198"} Nov 25 07:34:58 crc kubenswrapper[5043]: E1125 07:34:58.613793 5043 secret.go:188] Couldn't get secret 
openstack/nova-cell0-conductor-scripts: failed to sync secret cache: timed out waiting for the condition Nov 25 07:34:58 crc kubenswrapper[5043]: E1125 07:34:58.613884 5043 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c561c664-fb47-4c58-971f-b32fe1256a9f-scripts podName:c561c664-fb47-4c58-971f-b32fe1256a9f nodeName:}" failed. No retries permitted until 2025-11-25 07:34:59.11386628 +0000 UTC m=+1163.282062001 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/c561c664-fb47-4c58-971f-b32fe1256a9f-scripts") pod "nova-cell0-conductor-db-sync-2sd2w" (UID: "c561c664-fb47-4c58-971f-b32fe1256a9f") : failed to sync secret cache: timed out waiting for the condition Nov 25 07:34:58 crc kubenswrapper[5043]: I1125 07:34:58.646972 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rphjt" Nov 25 07:34:58 crc kubenswrapper[5043]: I1125 07:34:58.882295 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 25 07:34:59 crc kubenswrapper[5043]: I1125 07:34:59.137307 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c561c664-fb47-4c58-971f-b32fe1256a9f-scripts\") pod \"nova-cell0-conductor-db-sync-2sd2w\" (UID: \"c561c664-fb47-4c58-971f-b32fe1256a9f\") " pod="openstack/nova-cell0-conductor-db-sync-2sd2w" Nov 25 07:34:59 crc kubenswrapper[5043]: I1125 07:34:59.142042 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c561c664-fb47-4c58-971f-b32fe1256a9f-scripts\") pod \"nova-cell0-conductor-db-sync-2sd2w\" (UID: \"c561c664-fb47-4c58-971f-b32fe1256a9f\") " pod="openstack/nova-cell0-conductor-db-sync-2sd2w" Nov 25 07:34:59 crc kubenswrapper[5043]: I1125 07:34:59.262127 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2sd2w" Nov 25 07:34:59 crc kubenswrapper[5043]: I1125 07:34:59.347877 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b15bbade-6883-4984-8833-7307f59e0831","Type":"ContainerStarted","Data":"46780367a7d7f7fae90de7bca16760d5c34ccb65ca33bbaeaabdaa5ff9604bb5"} Nov 25 07:34:59 crc kubenswrapper[5043]: I1125 07:34:59.418712 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 07:34:59 crc kubenswrapper[5043]: W1125 07:34:59.759808 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc561c664_fb47_4c58_971f_b32fe1256a9f.slice/crio-8835ae1db8e982c96d99e0b8a5710c0fe2a457f39e03c66977cd791f081a34d2 WatchSource:0}: Error finding container 8835ae1db8e982c96d99e0b8a5710c0fe2a457f39e03c66977cd791f081a34d2: Status 404 returned error can't find the container with id 8835ae1db8e982c96d99e0b8a5710c0fe2a457f39e03c66977cd791f081a34d2 Nov 25 07:34:59 crc kubenswrapper[5043]: I1125 07:34:59.761230 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2sd2w"] Nov 25 07:35:00 crc kubenswrapper[5043]: I1125 07:35:00.363354 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2sd2w" event={"ID":"c561c664-fb47-4c58-971f-b32fe1256a9f","Type":"ContainerStarted","Data":"8835ae1db8e982c96d99e0b8a5710c0fe2a457f39e03c66977cd791f081a34d2"} Nov 25 07:35:00 crc kubenswrapper[5043]: I1125 07:35:00.369118 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b15bbade-6883-4984-8833-7307f59e0831","Type":"ContainerStarted","Data":"ef198b067193edcb46c5a1c1d2d211ab55e71e94150f4e7a8db533f570eb4f8a"} Nov 25 07:35:00 crc kubenswrapper[5043]: I1125 07:35:00.846376 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:35:00 crc kubenswrapper[5043]: I1125 07:35:00.871119 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-horizon-secret-key\") pod \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " Nov 25 07:35:00 crc kubenswrapper[5043]: I1125 07:35:00.871176 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-scripts\") pod \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " Nov 25 07:35:00 crc kubenswrapper[5043]: I1125 07:35:00.871262 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmwd9\" (UniqueName: \"kubernetes.io/projected/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-kube-api-access-qmwd9\") pod \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " Nov 25 07:35:00 crc kubenswrapper[5043]: I1125 07:35:00.871297 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-horizon-tls-certs\") pod \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " Nov 25 07:35:00 crc kubenswrapper[5043]: I1125 07:35:00.871354 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-logs\") pod \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " Nov 25 07:35:00 crc kubenswrapper[5043]: I1125 07:35:00.871446 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-combined-ca-bundle\") pod \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " Nov 25 07:35:00 crc kubenswrapper[5043]: I1125 07:35:00.871499 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-config-data\") pod \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\" (UID: \"9955ab7e-1d74-461a-a9b2-73e9f82d48fe\") " Nov 25 07:35:00 crc kubenswrapper[5043]: I1125 07:35:00.875284 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-logs" (OuterVolumeSpecName: "logs") pod "9955ab7e-1d74-461a-a9b2-73e9f82d48fe" (UID: "9955ab7e-1d74-461a-a9b2-73e9f82d48fe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:35:00 crc kubenswrapper[5043]: I1125 07:35:00.882685 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-kube-api-access-qmwd9" (OuterVolumeSpecName: "kube-api-access-qmwd9") pod "9955ab7e-1d74-461a-a9b2-73e9f82d48fe" (UID: "9955ab7e-1d74-461a-a9b2-73e9f82d48fe"). InnerVolumeSpecName "kube-api-access-qmwd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:35:00 crc kubenswrapper[5043]: I1125 07:35:00.887373 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9955ab7e-1d74-461a-a9b2-73e9f82d48fe" (UID: "9955ab7e-1d74-461a-a9b2-73e9f82d48fe"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:35:00 crc kubenswrapper[5043]: I1125 07:35:00.932884 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9955ab7e-1d74-461a-a9b2-73e9f82d48fe" (UID: "9955ab7e-1d74-461a-a9b2-73e9f82d48fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:35:00 crc kubenswrapper[5043]: I1125 07:35:00.933773 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-config-data" (OuterVolumeSpecName: "config-data") pod "9955ab7e-1d74-461a-a9b2-73e9f82d48fe" (UID: "9955ab7e-1d74-461a-a9b2-73e9f82d48fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:35:00 crc kubenswrapper[5043]: I1125 07:35:00.935744 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-scripts" (OuterVolumeSpecName: "scripts") pod "9955ab7e-1d74-461a-a9b2-73e9f82d48fe" (UID: "9955ab7e-1d74-461a-a9b2-73e9f82d48fe"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:35:00 crc kubenswrapper[5043]: I1125 07:35:00.972913 5043 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:00 crc kubenswrapper[5043]: I1125 07:35:00.972948 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:00 crc kubenswrapper[5043]: I1125 07:35:00.972959 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmwd9\" (UniqueName: \"kubernetes.io/projected/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-kube-api-access-qmwd9\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:00 crc kubenswrapper[5043]: I1125 07:35:00.972970 5043 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-logs\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:00 crc kubenswrapper[5043]: I1125 07:35:00.972980 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:00 crc kubenswrapper[5043]: I1125 07:35:00.972989 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:00 crc kubenswrapper[5043]: I1125 07:35:00.978393 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "9955ab7e-1d74-461a-a9b2-73e9f82d48fe" (UID: "9955ab7e-1d74-461a-a9b2-73e9f82d48fe"). 
InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:35:01 crc kubenswrapper[5043]: I1125 07:35:01.079836 5043 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9955ab7e-1d74-461a-a9b2-73e9f82d48fe-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:01 crc kubenswrapper[5043]: I1125 07:35:01.385170 5043 generic.go:334] "Generic (PLEG): container finished" podID="9955ab7e-1d74-461a-a9b2-73e9f82d48fe" containerID="7c5e903c6193703b2095fdf682cae9d61dc7437177c9fae106df613eaa3dfb94" exitCode=137 Nov 25 07:35:01 crc kubenswrapper[5043]: I1125 07:35:01.385226 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b5bc7cfb-2sfts" event={"ID":"9955ab7e-1d74-461a-a9b2-73e9f82d48fe","Type":"ContainerDied","Data":"7c5e903c6193703b2095fdf682cae9d61dc7437177c9fae106df613eaa3dfb94"} Nov 25 07:35:01 crc kubenswrapper[5043]: I1125 07:35:01.385255 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b5bc7cfb-2sfts" event={"ID":"9955ab7e-1d74-461a-a9b2-73e9f82d48fe","Type":"ContainerDied","Data":"8ad04e3855a8b7856595c75fe8b6095c1a1bffe4d3086fa6e63258cf98523c3b"} Nov 25 07:35:01 crc kubenswrapper[5043]: I1125 07:35:01.385227 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b5bc7cfb-2sfts" Nov 25 07:35:01 crc kubenswrapper[5043]: I1125 07:35:01.385284 5043 scope.go:117] "RemoveContainer" containerID="97259a18989a2bb9b137543868fb1784b4ec1f8d5e0c1ed6ac4074b3fb57c7c5" Nov 25 07:35:01 crc kubenswrapper[5043]: I1125 07:35:01.500763 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-b5bc7cfb-2sfts"] Nov 25 07:35:01 crc kubenswrapper[5043]: I1125 07:35:01.513806 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-b5bc7cfb-2sfts"] Nov 25 07:35:01 crc kubenswrapper[5043]: I1125 07:35:01.653335 5043 scope.go:117] "RemoveContainer" containerID="7c5e903c6193703b2095fdf682cae9d61dc7437177c9fae106df613eaa3dfb94" Nov 25 07:35:01 crc kubenswrapper[5043]: I1125 07:35:01.674131 5043 scope.go:117] "RemoveContainer" containerID="97259a18989a2bb9b137543868fb1784b4ec1f8d5e0c1ed6ac4074b3fb57c7c5" Nov 25 07:35:01 crc kubenswrapper[5043]: E1125 07:35:01.674505 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97259a18989a2bb9b137543868fb1784b4ec1f8d5e0c1ed6ac4074b3fb57c7c5\": container with ID starting with 97259a18989a2bb9b137543868fb1784b4ec1f8d5e0c1ed6ac4074b3fb57c7c5 not found: ID does not exist" containerID="97259a18989a2bb9b137543868fb1784b4ec1f8d5e0c1ed6ac4074b3fb57c7c5" Nov 25 07:35:01 crc kubenswrapper[5043]: I1125 07:35:01.674543 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97259a18989a2bb9b137543868fb1784b4ec1f8d5e0c1ed6ac4074b3fb57c7c5"} err="failed to get container status \"97259a18989a2bb9b137543868fb1784b4ec1f8d5e0c1ed6ac4074b3fb57c7c5\": rpc error: code = NotFound desc = could not find container \"97259a18989a2bb9b137543868fb1784b4ec1f8d5e0c1ed6ac4074b3fb57c7c5\": container with ID starting with 97259a18989a2bb9b137543868fb1784b4ec1f8d5e0c1ed6ac4074b3fb57c7c5 not found: ID does not exist" Nov 25 07:35:01 crc 
kubenswrapper[5043]: I1125 07:35:01.674569 5043 scope.go:117] "RemoveContainer" containerID="7c5e903c6193703b2095fdf682cae9d61dc7437177c9fae106df613eaa3dfb94" Nov 25 07:35:01 crc kubenswrapper[5043]: E1125 07:35:01.674864 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c5e903c6193703b2095fdf682cae9d61dc7437177c9fae106df613eaa3dfb94\": container with ID starting with 7c5e903c6193703b2095fdf682cae9d61dc7437177c9fae106df613eaa3dfb94 not found: ID does not exist" containerID="7c5e903c6193703b2095fdf682cae9d61dc7437177c9fae106df613eaa3dfb94" Nov 25 07:35:01 crc kubenswrapper[5043]: I1125 07:35:01.674891 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c5e903c6193703b2095fdf682cae9d61dc7437177c9fae106df613eaa3dfb94"} err="failed to get container status \"7c5e903c6193703b2095fdf682cae9d61dc7437177c9fae106df613eaa3dfb94\": rpc error: code = NotFound desc = could not find container \"7c5e903c6193703b2095fdf682cae9d61dc7437177c9fae106df613eaa3dfb94\": container with ID starting with 7c5e903c6193703b2095fdf682cae9d61dc7437177c9fae106df613eaa3dfb94 not found: ID does not exist" Nov 25 07:35:02 crc kubenswrapper[5043]: I1125 07:35:02.398558 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b15bbade-6883-4984-8833-7307f59e0831","Type":"ContainerStarted","Data":"6798a072d44bb0f9774747da1b0c332bd2cd89c808205ea904c7aadc67eb65f6"} Nov 25 07:35:02 crc kubenswrapper[5043]: I1125 07:35:02.398813 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b15bbade-6883-4984-8833-7307f59e0831" containerName="ceilometer-central-agent" containerID="cri-o://42e5361f1870b9c8b1350e984325261cc336ff7f019eaff89ad35322c18ef198" gracePeriod=30 Nov 25 07:35:02 crc kubenswrapper[5043]: I1125 07:35:02.399246 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/ceilometer-0" Nov 25 07:35:02 crc kubenswrapper[5043]: I1125 07:35:02.399282 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b15bbade-6883-4984-8833-7307f59e0831" containerName="proxy-httpd" containerID="cri-o://6798a072d44bb0f9774747da1b0c332bd2cd89c808205ea904c7aadc67eb65f6" gracePeriod=30 Nov 25 07:35:02 crc kubenswrapper[5043]: I1125 07:35:02.399339 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b15bbade-6883-4984-8833-7307f59e0831" containerName="sg-core" containerID="cri-o://ef198b067193edcb46c5a1c1d2d211ab55e71e94150f4e7a8db533f570eb4f8a" gracePeriod=30 Nov 25 07:35:02 crc kubenswrapper[5043]: I1125 07:35:02.399381 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b15bbade-6883-4984-8833-7307f59e0831" containerName="ceilometer-notification-agent" containerID="cri-o://46780367a7d7f7fae90de7bca16760d5c34ccb65ca33bbaeaabdaa5ff9604bb5" gracePeriod=30 Nov 25 07:35:02 crc kubenswrapper[5043]: I1125 07:35:02.427989 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.8870888409999997 podStartE2EDuration="7.427968772s" podCreationTimestamp="2025-11-25 07:34:55 +0000 UTC" firstStartedPulling="2025-11-25 07:34:56.751760035 +0000 UTC m=+1160.919955756" lastFinishedPulling="2025-11-25 07:35:01.292639956 +0000 UTC m=+1165.460835687" observedRunningTime="2025-11-25 07:35:02.424341785 +0000 UTC m=+1166.592537506" watchObservedRunningTime="2025-11-25 07:35:02.427968772 +0000 UTC m=+1166.596164493" Nov 25 07:35:02 crc kubenswrapper[5043]: I1125 07:35:02.974421 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9955ab7e-1d74-461a-a9b2-73e9f82d48fe" path="/var/lib/kubelet/pods/9955ab7e-1d74-461a-a9b2-73e9f82d48fe/volumes" Nov 25 07:35:03 crc kubenswrapper[5043]: I1125 
07:35:03.425046 5043 generic.go:334] "Generic (PLEG): container finished" podID="b15bbade-6883-4984-8833-7307f59e0831" containerID="6798a072d44bb0f9774747da1b0c332bd2cd89c808205ea904c7aadc67eb65f6" exitCode=0 Nov 25 07:35:03 crc kubenswrapper[5043]: I1125 07:35:03.425311 5043 generic.go:334] "Generic (PLEG): container finished" podID="b15bbade-6883-4984-8833-7307f59e0831" containerID="ef198b067193edcb46c5a1c1d2d211ab55e71e94150f4e7a8db533f570eb4f8a" exitCode=2 Nov 25 07:35:03 crc kubenswrapper[5043]: I1125 07:35:03.425321 5043 generic.go:334] "Generic (PLEG): container finished" podID="b15bbade-6883-4984-8833-7307f59e0831" containerID="46780367a7d7f7fae90de7bca16760d5c34ccb65ca33bbaeaabdaa5ff9604bb5" exitCode=0 Nov 25 07:35:03 crc kubenswrapper[5043]: I1125 07:35:03.425112 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b15bbade-6883-4984-8833-7307f59e0831","Type":"ContainerDied","Data":"6798a072d44bb0f9774747da1b0c332bd2cd89c808205ea904c7aadc67eb65f6"} Nov 25 07:35:03 crc kubenswrapper[5043]: I1125 07:35:03.425358 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b15bbade-6883-4984-8833-7307f59e0831","Type":"ContainerDied","Data":"ef198b067193edcb46c5a1c1d2d211ab55e71e94150f4e7a8db533f570eb4f8a"} Nov 25 07:35:03 crc kubenswrapper[5043]: I1125 07:35:03.425373 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b15bbade-6883-4984-8833-7307f59e0831","Type":"ContainerDied","Data":"46780367a7d7f7fae90de7bca16760d5c34ccb65ca33bbaeaabdaa5ff9604bb5"} Nov 25 07:35:05 crc kubenswrapper[5043]: I1125 07:35:05.454960 5043 generic.go:334] "Generic (PLEG): container finished" podID="b15bbade-6883-4984-8833-7307f59e0831" containerID="42e5361f1870b9c8b1350e984325261cc336ff7f019eaff89ad35322c18ef198" exitCode=0 Nov 25 07:35:05 crc kubenswrapper[5043]: I1125 07:35:05.455048 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"b15bbade-6883-4984-8833-7307f59e0831","Type":"ContainerDied","Data":"42e5361f1870b9c8b1350e984325261cc336ff7f019eaff89ad35322c18ef198"} Nov 25 07:35:06 crc kubenswrapper[5043]: I1125 07:35:06.879661 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:35:06 crc kubenswrapper[5043]: I1125 07:35:06.980413 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-config-data\") pod \"b15bbade-6883-4984-8833-7307f59e0831\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " Nov 25 07:35:06 crc kubenswrapper[5043]: I1125 07:35:06.980512 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-combined-ca-bundle\") pod \"b15bbade-6883-4984-8833-7307f59e0831\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " Nov 25 07:35:06 crc kubenswrapper[5043]: I1125 07:35:06.980574 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15bbade-6883-4984-8833-7307f59e0831-log-httpd\") pod \"b15bbade-6883-4984-8833-7307f59e0831\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " Nov 25 07:35:06 crc kubenswrapper[5043]: I1125 07:35:06.980645 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-scripts\") pod \"b15bbade-6883-4984-8833-7307f59e0831\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " Nov 25 07:35:06 crc kubenswrapper[5043]: I1125 07:35:06.980671 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-sg-core-conf-yaml\") 
pod \"b15bbade-6883-4984-8833-7307f59e0831\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " Nov 25 07:35:06 crc kubenswrapper[5043]: I1125 07:35:06.980724 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5lkg\" (UniqueName: \"kubernetes.io/projected/b15bbade-6883-4984-8833-7307f59e0831-kube-api-access-f5lkg\") pod \"b15bbade-6883-4984-8833-7307f59e0831\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " Nov 25 07:35:06 crc kubenswrapper[5043]: I1125 07:35:06.980744 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-ceilometer-tls-certs\") pod \"b15bbade-6883-4984-8833-7307f59e0831\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " Nov 25 07:35:06 crc kubenswrapper[5043]: I1125 07:35:06.980787 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15bbade-6883-4984-8833-7307f59e0831-run-httpd\") pod \"b15bbade-6883-4984-8833-7307f59e0831\" (UID: \"b15bbade-6883-4984-8833-7307f59e0831\") " Nov 25 07:35:06 crc kubenswrapper[5043]: I1125 07:35:06.981425 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b15bbade-6883-4984-8833-7307f59e0831-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b15bbade-6883-4984-8833-7307f59e0831" (UID: "b15bbade-6883-4984-8833-7307f59e0831"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:35:06 crc kubenswrapper[5043]: I1125 07:35:06.982285 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b15bbade-6883-4984-8833-7307f59e0831-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b15bbade-6883-4984-8833-7307f59e0831" (UID: "b15bbade-6883-4984-8833-7307f59e0831"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:35:06 crc kubenswrapper[5043]: I1125 07:35:06.985528 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b15bbade-6883-4984-8833-7307f59e0831-kube-api-access-f5lkg" (OuterVolumeSpecName: "kube-api-access-f5lkg") pod "b15bbade-6883-4984-8833-7307f59e0831" (UID: "b15bbade-6883-4984-8833-7307f59e0831"). InnerVolumeSpecName "kube-api-access-f5lkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:35:06 crc kubenswrapper[5043]: I1125 07:35:06.987195 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-scripts" (OuterVolumeSpecName: "scripts") pod "b15bbade-6883-4984-8833-7307f59e0831" (UID: "b15bbade-6883-4984-8833-7307f59e0831"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.013149 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b15bbade-6883-4984-8833-7307f59e0831" (UID: "b15bbade-6883-4984-8833-7307f59e0831"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.043826 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b15bbade-6883-4984-8833-7307f59e0831" (UID: "b15bbade-6883-4984-8833-7307f59e0831"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.071147 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b15bbade-6883-4984-8833-7307f59e0831" (UID: "b15bbade-6883-4984-8833-7307f59e0831"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.085260 5043 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15bbade-6883-4984-8833-7307f59e0831-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.085305 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.085318 5043 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b15bbade-6883-4984-8833-7307f59e0831-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.085329 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.085343 5043 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.085357 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5lkg\" (UniqueName: 
\"kubernetes.io/projected/b15bbade-6883-4984-8833-7307f59e0831-kube-api-access-f5lkg\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.085369 5043 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.108880 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-config-data" (OuterVolumeSpecName: "config-data") pod "b15bbade-6883-4984-8833-7307f59e0831" (UID: "b15bbade-6883-4984-8833-7307f59e0831"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.186777 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b15bbade-6883-4984-8833-7307f59e0831-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.474425 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b15bbade-6883-4984-8833-7307f59e0831","Type":"ContainerDied","Data":"886feba65ba8012dca9433f46907ca4be942528b83f3ac14e6542de2f3dbcb58"} Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.474537 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.474891 5043 scope.go:117] "RemoveContainer" containerID="6798a072d44bb0f9774747da1b0c332bd2cd89c808205ea904c7aadc67eb65f6" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.476272 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2sd2w" event={"ID":"c561c664-fb47-4c58-971f-b32fe1256a9f","Type":"ContainerStarted","Data":"68bb4309c19c6454334f889c2400fd1b750050ea652ed311178005778246272d"} Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.501992 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-2sd2w" podStartSLOduration=3.403599494 podStartE2EDuration="10.50197408s" podCreationTimestamp="2025-11-25 07:34:57 +0000 UTC" firstStartedPulling="2025-11-25 07:34:59.767400164 +0000 UTC m=+1163.935595885" lastFinishedPulling="2025-11-25 07:35:06.86577471 +0000 UTC m=+1171.033970471" observedRunningTime="2025-11-25 07:35:07.500392998 +0000 UTC m=+1171.668588739" watchObservedRunningTime="2025-11-25 07:35:07.50197408 +0000 UTC m=+1171.670169811" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.585956 5043 scope.go:117] "RemoveContainer" containerID="ef198b067193edcb46c5a1c1d2d211ab55e71e94150f4e7a8db533f570eb4f8a" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.594126 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.603895 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.616848 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:35:07 crc kubenswrapper[5043]: E1125 07:35:07.617186 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15bbade-6883-4984-8833-7307f59e0831" 
containerName="ceilometer-notification-agent" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.617199 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15bbade-6883-4984-8833-7307f59e0831" containerName="ceilometer-notification-agent" Nov 25 07:35:07 crc kubenswrapper[5043]: E1125 07:35:07.617221 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15bbade-6883-4984-8833-7307f59e0831" containerName="sg-core" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.617227 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15bbade-6883-4984-8833-7307f59e0831" containerName="sg-core" Nov 25 07:35:07 crc kubenswrapper[5043]: E1125 07:35:07.617236 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9955ab7e-1d74-461a-a9b2-73e9f82d48fe" containerName="horizon-log" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.617242 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="9955ab7e-1d74-461a-a9b2-73e9f82d48fe" containerName="horizon-log" Nov 25 07:35:07 crc kubenswrapper[5043]: E1125 07:35:07.617260 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15bbade-6883-4984-8833-7307f59e0831" containerName="ceilometer-central-agent" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.617266 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15bbade-6883-4984-8833-7307f59e0831" containerName="ceilometer-central-agent" Nov 25 07:35:07 crc kubenswrapper[5043]: E1125 07:35:07.617278 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9955ab7e-1d74-461a-a9b2-73e9f82d48fe" containerName="horizon" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.617283 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="9955ab7e-1d74-461a-a9b2-73e9f82d48fe" containerName="horizon" Nov 25 07:35:07 crc kubenswrapper[5043]: E1125 07:35:07.617294 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15bbade-6883-4984-8833-7307f59e0831" 
containerName="proxy-httpd" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.617299 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15bbade-6883-4984-8833-7307f59e0831" containerName="proxy-httpd" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.617445 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="9955ab7e-1d74-461a-a9b2-73e9f82d48fe" containerName="horizon" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.617464 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15bbade-6883-4984-8833-7307f59e0831" containerName="ceilometer-notification-agent" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.617475 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15bbade-6883-4984-8833-7307f59e0831" containerName="sg-core" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.617482 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15bbade-6883-4984-8833-7307f59e0831" containerName="proxy-httpd" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.617492 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15bbade-6883-4984-8833-7307f59e0831" containerName="ceilometer-central-agent" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.617503 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="9955ab7e-1d74-461a-a9b2-73e9f82d48fe" containerName="horizon-log" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.620498 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.622796 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.623660 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.623945 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.629082 5043 scope.go:117] "RemoveContainer" containerID="46780367a7d7f7fae90de7bca16760d5c34ccb65ca33bbaeaabdaa5ff9604bb5" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.643631 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.653444 5043 scope.go:117] "RemoveContainer" containerID="42e5361f1870b9c8b1350e984325261cc336ff7f019eaff89ad35322c18ef198" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.697629 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.697686 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.697914 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-log-httpd\") pod \"ceilometer-0\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.698022 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.698133 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8kmh\" (UniqueName: \"kubernetes.io/projected/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-kube-api-access-l8kmh\") pod \"ceilometer-0\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.698236 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-scripts\") pod \"ceilometer-0\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.698280 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-config-data\") pod \"ceilometer-0\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.698373 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-run-httpd\") pod \"ceilometer-0\" (UID: 
\"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.799813 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.799855 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.799941 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-log-httpd\") pod \"ceilometer-0\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.799976 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.800019 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8kmh\" (UniqueName: \"kubernetes.io/projected/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-kube-api-access-l8kmh\") pod \"ceilometer-0\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.800051 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-scripts\") pod \"ceilometer-0\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.800072 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-config-data\") pod \"ceilometer-0\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.800120 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-run-httpd\") pod \"ceilometer-0\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.800510 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-log-httpd\") pod \"ceilometer-0\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.800570 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-run-httpd\") pod \"ceilometer-0\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.812408 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " pod="openstack/ceilometer-0" Nov 25 07:35:07 crc 
kubenswrapper[5043]: I1125 07:35:07.812425 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.812997 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-scripts\") pod \"ceilometer-0\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.813901 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.815453 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8kmh\" (UniqueName: \"kubernetes.io/projected/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-kube-api-access-l8kmh\") pod \"ceilometer-0\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.820903 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-config-data\") pod \"ceilometer-0\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " pod="openstack/ceilometer-0" Nov 25 07:35:07 crc kubenswrapper[5043]: I1125 07:35:07.953038 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:35:08 crc kubenswrapper[5043]: I1125 07:35:08.410362 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:35:08 crc kubenswrapper[5043]: W1125 07:35:08.430032 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1ede7cc_6809_4219_97b4_9e3ee59f3a4b.slice/crio-2877b8fd4f3cf5e969e265b5cccecc7cd57f09633590c7caf3f5ed3da9a943f6 WatchSource:0}: Error finding container 2877b8fd4f3cf5e969e265b5cccecc7cd57f09633590c7caf3f5ed3da9a943f6: Status 404 returned error can't find the container with id 2877b8fd4f3cf5e969e265b5cccecc7cd57f09633590c7caf3f5ed3da9a943f6 Nov 25 07:35:08 crc kubenswrapper[5043]: I1125 07:35:08.490453 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b","Type":"ContainerStarted","Data":"2877b8fd4f3cf5e969e265b5cccecc7cd57f09633590c7caf3f5ed3da9a943f6"} Nov 25 07:35:08 crc kubenswrapper[5043]: I1125 07:35:08.980800 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b15bbade-6883-4984-8833-7307f59e0831" path="/var/lib/kubelet/pods/b15bbade-6883-4984-8833-7307f59e0831/volumes" Nov 25 07:35:09 crc kubenswrapper[5043]: I1125 07:35:09.500129 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b","Type":"ContainerStarted","Data":"7e45ff82a2cca3ccebc7bf8ceca2a19fdf8fb5a40ba9b7935bc6b1fccf3ff32f"} Nov 25 07:35:09 crc kubenswrapper[5043]: I1125 07:35:09.757327 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:35:10 crc kubenswrapper[5043]: I1125 07:35:10.559487 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b","Type":"ContainerStarted","Data":"abcb23cde37c1a7fcaed743d3f855820db3c8f2272ed52026e8034c6ad2720b7"} Nov 25 07:35:11 crc kubenswrapper[5043]: I1125 07:35:11.573096 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b","Type":"ContainerStarted","Data":"32f41a9a9be696dc8ca130b2b6aa4e5e4486e4a6a768419cdffd8d7798dd1023"} Nov 25 07:35:12 crc kubenswrapper[5043]: I1125 07:35:12.586273 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b","Type":"ContainerStarted","Data":"977d1206176e416a94aeba5d6b7988ee3c2bfea2d77a8426c1c443c0fb22d9f9"} Nov 25 07:35:12 crc kubenswrapper[5043]: I1125 07:35:12.586511 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" containerName="ceilometer-central-agent" containerID="cri-o://7e45ff82a2cca3ccebc7bf8ceca2a19fdf8fb5a40ba9b7935bc6b1fccf3ff32f" gracePeriod=30 Nov 25 07:35:12 crc kubenswrapper[5043]: I1125 07:35:12.586565 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" containerName="proxy-httpd" containerID="cri-o://977d1206176e416a94aeba5d6b7988ee3c2bfea2d77a8426c1c443c0fb22d9f9" gracePeriod=30 Nov 25 07:35:12 crc kubenswrapper[5043]: I1125 07:35:12.586644 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" containerName="sg-core" containerID="cri-o://32f41a9a9be696dc8ca130b2b6aa4e5e4486e4a6a768419cdffd8d7798dd1023" gracePeriod=30 Nov 25 07:35:12 crc kubenswrapper[5043]: I1125 07:35:12.586696 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" containerName="ceilometer-notification-agent" containerID="cri-o://abcb23cde37c1a7fcaed743d3f855820db3c8f2272ed52026e8034c6ad2720b7" gracePeriod=30 Nov 25 07:35:12 crc kubenswrapper[5043]: I1125 07:35:12.586802 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 07:35:12 crc kubenswrapper[5043]: I1125 07:35:12.623127 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.131302494 podStartE2EDuration="5.623101813s" podCreationTimestamp="2025-11-25 07:35:07 +0000 UTC" firstStartedPulling="2025-11-25 07:35:08.433403838 +0000 UTC m=+1172.601599559" lastFinishedPulling="2025-11-25 07:35:11.925203157 +0000 UTC m=+1176.093398878" observedRunningTime="2025-11-25 07:35:12.614819091 +0000 UTC m=+1176.783014852" watchObservedRunningTime="2025-11-25 07:35:12.623101813 +0000 UTC m=+1176.791297554" Nov 25 07:35:13 crc kubenswrapper[5043]: I1125 07:35:13.596460 5043 generic.go:334] "Generic (PLEG): container finished" podID="c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" containerID="977d1206176e416a94aeba5d6b7988ee3c2bfea2d77a8426c1c443c0fb22d9f9" exitCode=0 Nov 25 07:35:13 crc kubenswrapper[5043]: I1125 07:35:13.596821 5043 generic.go:334] "Generic (PLEG): container finished" podID="c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" containerID="32f41a9a9be696dc8ca130b2b6aa4e5e4486e4a6a768419cdffd8d7798dd1023" exitCode=2 Nov 25 07:35:13 crc kubenswrapper[5043]: I1125 07:35:13.596837 5043 generic.go:334] "Generic (PLEG): container finished" podID="c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" containerID="abcb23cde37c1a7fcaed743d3f855820db3c8f2272ed52026e8034c6ad2720b7" exitCode=0 Nov 25 07:35:13 crc kubenswrapper[5043]: I1125 07:35:13.596524 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b","Type":"ContainerDied","Data":"977d1206176e416a94aeba5d6b7988ee3c2bfea2d77a8426c1c443c0fb22d9f9"} Nov 25 07:35:13 crc kubenswrapper[5043]: I1125 07:35:13.596882 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b","Type":"ContainerDied","Data":"32f41a9a9be696dc8ca130b2b6aa4e5e4486e4a6a768419cdffd8d7798dd1023"} Nov 25 07:35:13 crc kubenswrapper[5043]: I1125 07:35:13.596901 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b","Type":"ContainerDied","Data":"abcb23cde37c1a7fcaed743d3f855820db3c8f2272ed52026e8034c6ad2720b7"} Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.135082 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.174504 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-combined-ca-bundle\") pod \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.174552 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-ceilometer-tls-certs\") pod \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.174589 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-config-data\") pod \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " Nov 25 07:35:16 
crc kubenswrapper[5043]: I1125 07:35:16.174635 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-scripts\") pod \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.174678 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-log-httpd\") pod \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.174750 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-sg-core-conf-yaml\") pod \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.174765 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8kmh\" (UniqueName: \"kubernetes.io/projected/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-kube-api-access-l8kmh\") pod \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.174784 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-run-httpd\") pod \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\" (UID: \"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b\") " Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.175430 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod 
"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" (UID: "c1ede7cc-6809-4219-97b4-9e3ee59f3a4b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.181353 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" (UID: "c1ede7cc-6809-4219-97b4-9e3ee59f3a4b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.182836 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-scripts" (OuterVolumeSpecName: "scripts") pod "c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" (UID: "c1ede7cc-6809-4219-97b4-9e3ee59f3a4b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.185703 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-kube-api-access-l8kmh" (OuterVolumeSpecName: "kube-api-access-l8kmh") pod "c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" (UID: "c1ede7cc-6809-4219-97b4-9e3ee59f3a4b"). InnerVolumeSpecName "kube-api-access-l8kmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.207005 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" (UID: "c1ede7cc-6809-4219-97b4-9e3ee59f3a4b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.229458 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" (UID: "c1ede7cc-6809-4219-97b4-9e3ee59f3a4b"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.256921 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" (UID: "c1ede7cc-6809-4219-97b4-9e3ee59f3a4b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.283760 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.283792 5043 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.283802 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.283810 5043 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-log-httpd\") on node \"crc\" 
DevicePath \"\"" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.283818 5043 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.283826 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8kmh\" (UniqueName: \"kubernetes.io/projected/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-kube-api-access-l8kmh\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.283835 5043 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.300298 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-config-data" (OuterVolumeSpecName: "config-data") pod "c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" (UID: "c1ede7cc-6809-4219-97b4-9e3ee59f3a4b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.385984 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.632504 5043 generic.go:334] "Generic (PLEG): container finished" podID="c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" containerID="7e45ff82a2cca3ccebc7bf8ceca2a19fdf8fb5a40ba9b7935bc6b1fccf3ff32f" exitCode=0 Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.632573 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.632575 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b","Type":"ContainerDied","Data":"7e45ff82a2cca3ccebc7bf8ceca2a19fdf8fb5a40ba9b7935bc6b1fccf3ff32f"} Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.633015 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1ede7cc-6809-4219-97b4-9e3ee59f3a4b","Type":"ContainerDied","Data":"2877b8fd4f3cf5e969e265b5cccecc7cd57f09633590c7caf3f5ed3da9a943f6"} Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.633038 5043 scope.go:117] "RemoveContainer" containerID="977d1206176e416a94aeba5d6b7988ee3c2bfea2d77a8426c1c443c0fb22d9f9" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.661927 5043 scope.go:117] "RemoveContainer" containerID="32f41a9a9be696dc8ca130b2b6aa4e5e4486e4a6a768419cdffd8d7798dd1023" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.690596 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.712929 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.713749 5043 scope.go:117] "RemoveContainer" containerID="abcb23cde37c1a7fcaed743d3f855820db3c8f2272ed52026e8034c6ad2720b7" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.719678 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:35:16 crc kubenswrapper[5043]: E1125 07:35:16.720105 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" containerName="sg-core" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.720126 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" 
containerName="sg-core" Nov 25 07:35:16 crc kubenswrapper[5043]: E1125 07:35:16.720141 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" containerName="ceilometer-central-agent" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.720148 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" containerName="ceilometer-central-agent" Nov 25 07:35:16 crc kubenswrapper[5043]: E1125 07:35:16.720169 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" containerName="ceilometer-notification-agent" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.720176 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" containerName="ceilometer-notification-agent" Nov 25 07:35:16 crc kubenswrapper[5043]: E1125 07:35:16.720188 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" containerName="proxy-httpd" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.720194 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" containerName="proxy-httpd" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.720387 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" containerName="ceilometer-central-agent" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.720401 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" containerName="ceilometer-notification-agent" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.720423 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" containerName="sg-core" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.720431 5043 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" containerName="proxy-httpd" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.722335 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.726277 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.726526 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.727359 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.727900 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.757398 5043 scope.go:117] "RemoveContainer" containerID="7e45ff82a2cca3ccebc7bf8ceca2a19fdf8fb5a40ba9b7935bc6b1fccf3ff32f" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.782519 5043 scope.go:117] "RemoveContainer" containerID="977d1206176e416a94aeba5d6b7988ee3c2bfea2d77a8426c1c443c0fb22d9f9" Nov 25 07:35:16 crc kubenswrapper[5043]: E1125 07:35:16.783093 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"977d1206176e416a94aeba5d6b7988ee3c2bfea2d77a8426c1c443c0fb22d9f9\": container with ID starting with 977d1206176e416a94aeba5d6b7988ee3c2bfea2d77a8426c1c443c0fb22d9f9 not found: ID does not exist" containerID="977d1206176e416a94aeba5d6b7988ee3c2bfea2d77a8426c1c443c0fb22d9f9" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.783122 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"977d1206176e416a94aeba5d6b7988ee3c2bfea2d77a8426c1c443c0fb22d9f9"} err="failed to get container status 
\"977d1206176e416a94aeba5d6b7988ee3c2bfea2d77a8426c1c443c0fb22d9f9\": rpc error: code = NotFound desc = could not find container \"977d1206176e416a94aeba5d6b7988ee3c2bfea2d77a8426c1c443c0fb22d9f9\": container with ID starting with 977d1206176e416a94aeba5d6b7988ee3c2bfea2d77a8426c1c443c0fb22d9f9 not found: ID does not exist" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.783143 5043 scope.go:117] "RemoveContainer" containerID="32f41a9a9be696dc8ca130b2b6aa4e5e4486e4a6a768419cdffd8d7798dd1023" Nov 25 07:35:16 crc kubenswrapper[5043]: E1125 07:35:16.783484 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32f41a9a9be696dc8ca130b2b6aa4e5e4486e4a6a768419cdffd8d7798dd1023\": container with ID starting with 32f41a9a9be696dc8ca130b2b6aa4e5e4486e4a6a768419cdffd8d7798dd1023 not found: ID does not exist" containerID="32f41a9a9be696dc8ca130b2b6aa4e5e4486e4a6a768419cdffd8d7798dd1023" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.783577 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32f41a9a9be696dc8ca130b2b6aa4e5e4486e4a6a768419cdffd8d7798dd1023"} err="failed to get container status \"32f41a9a9be696dc8ca130b2b6aa4e5e4486e4a6a768419cdffd8d7798dd1023\": rpc error: code = NotFound desc = could not find container \"32f41a9a9be696dc8ca130b2b6aa4e5e4486e4a6a768419cdffd8d7798dd1023\": container with ID starting with 32f41a9a9be696dc8ca130b2b6aa4e5e4486e4a6a768419cdffd8d7798dd1023 not found: ID does not exist" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.783631 5043 scope.go:117] "RemoveContainer" containerID="abcb23cde37c1a7fcaed743d3f855820db3c8f2272ed52026e8034c6ad2720b7" Nov 25 07:35:16 crc kubenswrapper[5043]: E1125 07:35:16.783916 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"abcb23cde37c1a7fcaed743d3f855820db3c8f2272ed52026e8034c6ad2720b7\": container with ID starting with abcb23cde37c1a7fcaed743d3f855820db3c8f2272ed52026e8034c6ad2720b7 not found: ID does not exist" containerID="abcb23cde37c1a7fcaed743d3f855820db3c8f2272ed52026e8034c6ad2720b7" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.783939 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abcb23cde37c1a7fcaed743d3f855820db3c8f2272ed52026e8034c6ad2720b7"} err="failed to get container status \"abcb23cde37c1a7fcaed743d3f855820db3c8f2272ed52026e8034c6ad2720b7\": rpc error: code = NotFound desc = could not find container \"abcb23cde37c1a7fcaed743d3f855820db3c8f2272ed52026e8034c6ad2720b7\": container with ID starting with abcb23cde37c1a7fcaed743d3f855820db3c8f2272ed52026e8034c6ad2720b7 not found: ID does not exist" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.783956 5043 scope.go:117] "RemoveContainer" containerID="7e45ff82a2cca3ccebc7bf8ceca2a19fdf8fb5a40ba9b7935bc6b1fccf3ff32f" Nov 25 07:35:16 crc kubenswrapper[5043]: E1125 07:35:16.784215 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e45ff82a2cca3ccebc7bf8ceca2a19fdf8fb5a40ba9b7935bc6b1fccf3ff32f\": container with ID starting with 7e45ff82a2cca3ccebc7bf8ceca2a19fdf8fb5a40ba9b7935bc6b1fccf3ff32f not found: ID does not exist" containerID="7e45ff82a2cca3ccebc7bf8ceca2a19fdf8fb5a40ba9b7935bc6b1fccf3ff32f" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.784274 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e45ff82a2cca3ccebc7bf8ceca2a19fdf8fb5a40ba9b7935bc6b1fccf3ff32f"} err="failed to get container status \"7e45ff82a2cca3ccebc7bf8ceca2a19fdf8fb5a40ba9b7935bc6b1fccf3ff32f\": rpc error: code = NotFound desc = could not find container \"7e45ff82a2cca3ccebc7bf8ceca2a19fdf8fb5a40ba9b7935bc6b1fccf3ff32f\": container with ID 
starting with 7e45ff82a2cca3ccebc7bf8ceca2a19fdf8fb5a40ba9b7935bc6b1fccf3ff32f not found: ID does not exist" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.792388 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-run-httpd\") pod \"ceilometer-0\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.792546 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhxkj\" (UniqueName: \"kubernetes.io/projected/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-kube-api-access-rhxkj\") pod \"ceilometer-0\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.792682 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.792728 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.792806 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-config-data\") pod \"ceilometer-0\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " pod="openstack/ceilometer-0" Nov 25 
07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.792836 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-log-httpd\") pod \"ceilometer-0\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.792936 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.792970 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-scripts\") pod \"ceilometer-0\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.894911 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-scripts\") pod \"ceilometer-0\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.895010 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-run-httpd\") pod \"ceilometer-0\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.895104 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhxkj\" (UniqueName: 
\"kubernetes.io/projected/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-kube-api-access-rhxkj\") pod \"ceilometer-0\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.895160 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.895194 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.895240 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-config-data\") pod \"ceilometer-0\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.895270 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-log-httpd\") pod \"ceilometer-0\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.895409 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " pod="openstack/ceilometer-0" Nov 25 07:35:16 crc 
kubenswrapper[5043]: I1125 07:35:16.895444 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-run-httpd\") pod \"ceilometer-0\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.895854 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-log-httpd\") pod \"ceilometer-0\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.901095 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.901187 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-scripts\") pod \"ceilometer-0\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.901208 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.905269 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.905800 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-config-data\") pod \"ceilometer-0\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.911119 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhxkj\" (UniqueName: \"kubernetes.io/projected/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-kube-api-access-rhxkj\") pod \"ceilometer-0\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " pod="openstack/ceilometer-0" Nov 25 07:35:16 crc kubenswrapper[5043]: I1125 07:35:16.984882 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1ede7cc-6809-4219-97b4-9e3ee59f3a4b" path="/var/lib/kubelet/pods/c1ede7cc-6809-4219-97b4-9e3ee59f3a4b/volumes" Nov 25 07:35:17 crc kubenswrapper[5043]: I1125 07:35:17.041369 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:35:17 crc kubenswrapper[5043]: I1125 07:35:17.278157 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:35:17 crc kubenswrapper[5043]: I1125 07:35:17.278457 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:35:17 crc kubenswrapper[5043]: I1125 07:35:17.278498 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 07:35:17 crc kubenswrapper[5043]: I1125 07:35:17.279148 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"296a5a98c9bf0bfd6085b02df2f0073364b7097e82673a35c0ff9f12b1b73d01"} pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 07:35:17 crc kubenswrapper[5043]: I1125 07:35:17.279189 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://296a5a98c9bf0bfd6085b02df2f0073364b7097e82673a35c0ff9f12b1b73d01" gracePeriod=600 Nov 25 07:35:17 crc kubenswrapper[5043]: W1125 07:35:17.547864 5043 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ef6d7c4_cb2c_412b_9d00_72ab077f898e.slice/crio-13905bb78a3fbc0155440ff499f09565b08e83123724d1357bf43c3e57506965 WatchSource:0}: Error finding container 13905bb78a3fbc0155440ff499f09565b08e83123724d1357bf43c3e57506965: Status 404 returned error can't find the container with id 13905bb78a3fbc0155440ff499f09565b08e83123724d1357bf43c3e57506965 Nov 25 07:35:17 crc kubenswrapper[5043]: I1125 07:35:17.549112 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:35:17 crc kubenswrapper[5043]: I1125 07:35:17.695859 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="296a5a98c9bf0bfd6085b02df2f0073364b7097e82673a35c0ff9f12b1b73d01" exitCode=0 Nov 25 07:35:17 crc kubenswrapper[5043]: I1125 07:35:17.695937 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"296a5a98c9bf0bfd6085b02df2f0073364b7097e82673a35c0ff9f12b1b73d01"} Nov 25 07:35:17 crc kubenswrapper[5043]: I1125 07:35:17.695970 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"a39678aadaf4f8799d011d172223bff66847f6049bb09f87a23b01f3ae1af7cd"} Nov 25 07:35:17 crc kubenswrapper[5043]: I1125 07:35:17.695993 5043 scope.go:117] "RemoveContainer" containerID="6f48da9589e1ae5ed9bf24bc242ace441c8f2ff30315a460e91bdc63d89d037f" Nov 25 07:35:17 crc kubenswrapper[5043]: I1125 07:35:17.712984 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ef6d7c4-cb2c-412b-9d00-72ab077f898e","Type":"ContainerStarted","Data":"13905bb78a3fbc0155440ff499f09565b08e83123724d1357bf43c3e57506965"} Nov 25 07:35:18 crc 
kubenswrapper[5043]: I1125 07:35:18.767223 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ef6d7c4-cb2c-412b-9d00-72ab077f898e","Type":"ContainerStarted","Data":"4b4cdbf89b518bdaedec7a32ceada9bcbadfd0c52433fb9f08456bd687911a40"} Nov 25 07:35:19 crc kubenswrapper[5043]: I1125 07:35:19.780162 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ef6d7c4-cb2c-412b-9d00-72ab077f898e","Type":"ContainerStarted","Data":"a0ceaf52c34dbc7858ba1f4bbd3b2ccc713d9fdd3710ac8f51d73e251b8da6a2"} Nov 25 07:35:19 crc kubenswrapper[5043]: I1125 07:35:19.780736 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ef6d7c4-cb2c-412b-9d00-72ab077f898e","Type":"ContainerStarted","Data":"729de5216f84026dbe8d470957c183a2866b22ed988e2f5f10462d12f7fc8771"} Nov 25 07:35:21 crc kubenswrapper[5043]: I1125 07:35:21.809045 5043 generic.go:334] "Generic (PLEG): container finished" podID="c561c664-fb47-4c58-971f-b32fe1256a9f" containerID="68bb4309c19c6454334f889c2400fd1b750050ea652ed311178005778246272d" exitCode=0 Nov 25 07:35:21 crc kubenswrapper[5043]: I1125 07:35:21.809668 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2sd2w" event={"ID":"c561c664-fb47-4c58-971f-b32fe1256a9f","Type":"ContainerDied","Data":"68bb4309c19c6454334f889c2400fd1b750050ea652ed311178005778246272d"} Nov 25 07:35:21 crc kubenswrapper[5043]: I1125 07:35:21.813583 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ef6d7c4-cb2c-412b-9d00-72ab077f898e","Type":"ContainerStarted","Data":"a1894c31c03d69581f365fe9c9b0378aaa1c7b546f6e55080cbab7caf372258b"} Nov 25 07:35:21 crc kubenswrapper[5043]: I1125 07:35:21.814331 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 07:35:21 crc kubenswrapper[5043]: I1125 07:35:21.853762 5043 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.6626590329999997 podStartE2EDuration="5.853729637s" podCreationTimestamp="2025-11-25 07:35:16 +0000 UTC" firstStartedPulling="2025-11-25 07:35:17.550983432 +0000 UTC m=+1181.719179153" lastFinishedPulling="2025-11-25 07:35:20.742054036 +0000 UTC m=+1184.910249757" observedRunningTime="2025-11-25 07:35:21.849217267 +0000 UTC m=+1186.017412988" watchObservedRunningTime="2025-11-25 07:35:21.853729637 +0000 UTC m=+1186.021925378" Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.198856 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2sd2w" Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.240422 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c561c664-fb47-4c58-971f-b32fe1256a9f-config-data\") pod \"c561c664-fb47-4c58-971f-b32fe1256a9f\" (UID: \"c561c664-fb47-4c58-971f-b32fe1256a9f\") " Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.240498 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zq2h\" (UniqueName: \"kubernetes.io/projected/c561c664-fb47-4c58-971f-b32fe1256a9f-kube-api-access-5zq2h\") pod \"c561c664-fb47-4c58-971f-b32fe1256a9f\" (UID: \"c561c664-fb47-4c58-971f-b32fe1256a9f\") " Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.240546 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c561c664-fb47-4c58-971f-b32fe1256a9f-scripts\") pod \"c561c664-fb47-4c58-971f-b32fe1256a9f\" (UID: \"c561c664-fb47-4c58-971f-b32fe1256a9f\") " Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.240721 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c561c664-fb47-4c58-971f-b32fe1256a9f-combined-ca-bundle\") pod \"c561c664-fb47-4c58-971f-b32fe1256a9f\" (UID: \"c561c664-fb47-4c58-971f-b32fe1256a9f\") " Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.247953 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c561c664-fb47-4c58-971f-b32fe1256a9f-kube-api-access-5zq2h" (OuterVolumeSpecName: "kube-api-access-5zq2h") pod "c561c664-fb47-4c58-971f-b32fe1256a9f" (UID: "c561c664-fb47-4c58-971f-b32fe1256a9f"). InnerVolumeSpecName "kube-api-access-5zq2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.263712 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c561c664-fb47-4c58-971f-b32fe1256a9f-scripts" (OuterVolumeSpecName: "scripts") pod "c561c664-fb47-4c58-971f-b32fe1256a9f" (UID: "c561c664-fb47-4c58-971f-b32fe1256a9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.267590 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c561c664-fb47-4c58-971f-b32fe1256a9f-config-data" (OuterVolumeSpecName: "config-data") pod "c561c664-fb47-4c58-971f-b32fe1256a9f" (UID: "c561c664-fb47-4c58-971f-b32fe1256a9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.270777 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c561c664-fb47-4c58-971f-b32fe1256a9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c561c664-fb47-4c58-971f-b32fe1256a9f" (UID: "c561c664-fb47-4c58-971f-b32fe1256a9f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.342470 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c561c664-fb47-4c58-971f-b32fe1256a9f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.342500 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zq2h\" (UniqueName: \"kubernetes.io/projected/c561c664-fb47-4c58-971f-b32fe1256a9f-kube-api-access-5zq2h\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.342514 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c561c664-fb47-4c58-971f-b32fe1256a9f-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.342525 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c561c664-fb47-4c58-971f-b32fe1256a9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.836284 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2sd2w" event={"ID":"c561c664-fb47-4c58-971f-b32fe1256a9f","Type":"ContainerDied","Data":"8835ae1db8e982c96d99e0b8a5710c0fe2a457f39e03c66977cd791f081a34d2"} Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.836345 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8835ae1db8e982c96d99e0b8a5710c0fe2a457f39e03c66977cd791f081a34d2" Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.836368 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2sd2w" Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.932320 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 07:35:23 crc kubenswrapper[5043]: E1125 07:35:23.932715 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c561c664-fb47-4c58-971f-b32fe1256a9f" containerName="nova-cell0-conductor-db-sync" Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.932737 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="c561c664-fb47-4c58-971f-b32fe1256a9f" containerName="nova-cell0-conductor-db-sync" Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.932925 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="c561c664-fb47-4c58-971f-b32fe1256a9f" containerName="nova-cell0-conductor-db-sync" Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.933475 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.936291 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rphjt" Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.937368 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.943921 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.951994 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edfa7421-823f-4292-a033-8227024b3a40-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"edfa7421-823f-4292-a033-8227024b3a40\") " pod="openstack/nova-cell0-conductor-0" Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 
07:35:23.952074 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edfa7421-823f-4292-a033-8227024b3a40-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"edfa7421-823f-4292-a033-8227024b3a40\") " pod="openstack/nova-cell0-conductor-0" Nov 25 07:35:23 crc kubenswrapper[5043]: I1125 07:35:23.952121 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg59v\" (UniqueName: \"kubernetes.io/projected/edfa7421-823f-4292-a033-8227024b3a40-kube-api-access-dg59v\") pod \"nova-cell0-conductor-0\" (UID: \"edfa7421-823f-4292-a033-8227024b3a40\") " pod="openstack/nova-cell0-conductor-0" Nov 25 07:35:24 crc kubenswrapper[5043]: I1125 07:35:24.053830 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edfa7421-823f-4292-a033-8227024b3a40-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"edfa7421-823f-4292-a033-8227024b3a40\") " pod="openstack/nova-cell0-conductor-0" Nov 25 07:35:24 crc kubenswrapper[5043]: I1125 07:35:24.054586 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edfa7421-823f-4292-a033-8227024b3a40-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"edfa7421-823f-4292-a033-8227024b3a40\") " pod="openstack/nova-cell0-conductor-0" Nov 25 07:35:24 crc kubenswrapper[5043]: I1125 07:35:24.054733 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg59v\" (UniqueName: \"kubernetes.io/projected/edfa7421-823f-4292-a033-8227024b3a40-kube-api-access-dg59v\") pod \"nova-cell0-conductor-0\" (UID: \"edfa7421-823f-4292-a033-8227024b3a40\") " pod="openstack/nova-cell0-conductor-0" Nov 25 07:35:24 crc kubenswrapper[5043]: I1125 07:35:24.059599 5043 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edfa7421-823f-4292-a033-8227024b3a40-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"edfa7421-823f-4292-a033-8227024b3a40\") " pod="openstack/nova-cell0-conductor-0" Nov 25 07:35:24 crc kubenswrapper[5043]: I1125 07:35:24.060857 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edfa7421-823f-4292-a033-8227024b3a40-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"edfa7421-823f-4292-a033-8227024b3a40\") " pod="openstack/nova-cell0-conductor-0" Nov 25 07:35:24 crc kubenswrapper[5043]: I1125 07:35:24.083785 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg59v\" (UniqueName: \"kubernetes.io/projected/edfa7421-823f-4292-a033-8227024b3a40-kube-api-access-dg59v\") pod \"nova-cell0-conductor-0\" (UID: \"edfa7421-823f-4292-a033-8227024b3a40\") " pod="openstack/nova-cell0-conductor-0" Nov 25 07:35:24 crc kubenswrapper[5043]: I1125 07:35:24.248462 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 25 07:35:24 crc kubenswrapper[5043]: I1125 07:35:24.756147 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 07:35:24 crc kubenswrapper[5043]: W1125 07:35:24.757232 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedfa7421_823f_4292_a033_8227024b3a40.slice/crio-aaba1730539409072dfa2e6074dad3903703fc320ee86ac77a53f6844d8612de WatchSource:0}: Error finding container aaba1730539409072dfa2e6074dad3903703fc320ee86ac77a53f6844d8612de: Status 404 returned error can't find the container with id aaba1730539409072dfa2e6074dad3903703fc320ee86ac77a53f6844d8612de Nov 25 07:35:24 crc kubenswrapper[5043]: I1125 07:35:24.847460 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"edfa7421-823f-4292-a033-8227024b3a40","Type":"ContainerStarted","Data":"aaba1730539409072dfa2e6074dad3903703fc320ee86ac77a53f6844d8612de"} Nov 25 07:35:25 crc kubenswrapper[5043]: I1125 07:35:25.865543 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"edfa7421-823f-4292-a033-8227024b3a40","Type":"ContainerStarted","Data":"3336cb089cfc1274418a39a9fa6216ecd2dd15be8591b709f1db5efe501a298e"} Nov 25 07:35:25 crc kubenswrapper[5043]: I1125 07:35:25.867343 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 25 07:35:29 crc kubenswrapper[5043]: I1125 07:35:29.281728 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 25 07:35:29 crc kubenswrapper[5043]: I1125 07:35:29.310343 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=6.310315429 podStartE2EDuration="6.310315429s" 
podCreationTimestamp="2025-11-25 07:35:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:35:25.901532096 +0000 UTC m=+1190.069727867" watchObservedRunningTime="2025-11-25 07:35:29.310315429 +0000 UTC m=+1193.478511170" Nov 25 07:35:29 crc kubenswrapper[5043]: I1125 07:35:29.790771 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-45pwf"] Nov 25 07:35:29 crc kubenswrapper[5043]: I1125 07:35:29.792314 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-45pwf" Nov 25 07:35:29 crc kubenswrapper[5043]: I1125 07:35:29.796723 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 25 07:35:29 crc kubenswrapper[5043]: I1125 07:35:29.801682 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 25 07:35:29 crc kubenswrapper[5043]: I1125 07:35:29.809712 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-45pwf"] Nov 25 07:35:29 crc kubenswrapper[5043]: I1125 07:35:29.850831 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b74b2ac-677f-4271-9aea-ffc23321eb55-scripts\") pod \"nova-cell0-cell-mapping-45pwf\" (UID: \"2b74b2ac-677f-4271-9aea-ffc23321eb55\") " pod="openstack/nova-cell0-cell-mapping-45pwf" Nov 25 07:35:29 crc kubenswrapper[5043]: I1125 07:35:29.850874 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b74b2ac-677f-4271-9aea-ffc23321eb55-config-data\") pod \"nova-cell0-cell-mapping-45pwf\" (UID: \"2b74b2ac-677f-4271-9aea-ffc23321eb55\") " pod="openstack/nova-cell0-cell-mapping-45pwf" Nov 25 07:35:29 crc 
kubenswrapper[5043]: I1125 07:35:29.850903 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b74b2ac-677f-4271-9aea-ffc23321eb55-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-45pwf\" (UID: \"2b74b2ac-677f-4271-9aea-ffc23321eb55\") " pod="openstack/nova-cell0-cell-mapping-45pwf" Nov 25 07:35:29 crc kubenswrapper[5043]: I1125 07:35:29.851067 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz42f\" (UniqueName: \"kubernetes.io/projected/2b74b2ac-677f-4271-9aea-ffc23321eb55-kube-api-access-mz42f\") pod \"nova-cell0-cell-mapping-45pwf\" (UID: \"2b74b2ac-677f-4271-9aea-ffc23321eb55\") " pod="openstack/nova-cell0-cell-mapping-45pwf" Nov 25 07:35:29 crc kubenswrapper[5043]: I1125 07:35:29.952317 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz42f\" (UniqueName: \"kubernetes.io/projected/2b74b2ac-677f-4271-9aea-ffc23321eb55-kube-api-access-mz42f\") pod \"nova-cell0-cell-mapping-45pwf\" (UID: \"2b74b2ac-677f-4271-9aea-ffc23321eb55\") " pod="openstack/nova-cell0-cell-mapping-45pwf" Nov 25 07:35:29 crc kubenswrapper[5043]: I1125 07:35:29.952405 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b74b2ac-677f-4271-9aea-ffc23321eb55-scripts\") pod \"nova-cell0-cell-mapping-45pwf\" (UID: \"2b74b2ac-677f-4271-9aea-ffc23321eb55\") " pod="openstack/nova-cell0-cell-mapping-45pwf" Nov 25 07:35:29 crc kubenswrapper[5043]: I1125 07:35:29.952433 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b74b2ac-677f-4271-9aea-ffc23321eb55-config-data\") pod \"nova-cell0-cell-mapping-45pwf\" (UID: \"2b74b2ac-677f-4271-9aea-ffc23321eb55\") " pod="openstack/nova-cell0-cell-mapping-45pwf" Nov 25 07:35:29 crc 
kubenswrapper[5043]: I1125 07:35:29.952473 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b74b2ac-677f-4271-9aea-ffc23321eb55-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-45pwf\" (UID: \"2b74b2ac-677f-4271-9aea-ffc23321eb55\") " pod="openstack/nova-cell0-cell-mapping-45pwf" Nov 25 07:35:29 crc kubenswrapper[5043]: I1125 07:35:29.958963 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b74b2ac-677f-4271-9aea-ffc23321eb55-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-45pwf\" (UID: \"2b74b2ac-677f-4271-9aea-ffc23321eb55\") " pod="openstack/nova-cell0-cell-mapping-45pwf" Nov 25 07:35:29 crc kubenswrapper[5043]: I1125 07:35:29.959258 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b74b2ac-677f-4271-9aea-ffc23321eb55-scripts\") pod \"nova-cell0-cell-mapping-45pwf\" (UID: \"2b74b2ac-677f-4271-9aea-ffc23321eb55\") " pod="openstack/nova-cell0-cell-mapping-45pwf" Nov 25 07:35:29 crc kubenswrapper[5043]: I1125 07:35:29.972674 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 07:35:29 crc kubenswrapper[5043]: I1125 07:35:29.973794 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 07:35:29 crc kubenswrapper[5043]: I1125 07:35:29.976155 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 07:35:29 crc kubenswrapper[5043]: I1125 07:35:29.981238 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz42f\" (UniqueName: \"kubernetes.io/projected/2b74b2ac-677f-4271-9aea-ffc23321eb55-kube-api-access-mz42f\") pod \"nova-cell0-cell-mapping-45pwf\" (UID: \"2b74b2ac-677f-4271-9aea-ffc23321eb55\") " pod="openstack/nova-cell0-cell-mapping-45pwf" Nov 25 07:35:29 crc kubenswrapper[5043]: I1125 07:35:29.981321 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b74b2ac-677f-4271-9aea-ffc23321eb55-config-data\") pod \"nova-cell0-cell-mapping-45pwf\" (UID: \"2b74b2ac-677f-4271-9aea-ffc23321eb55\") " pod="openstack/nova-cell0-cell-mapping-45pwf" Nov 25 07:35:29 crc kubenswrapper[5043]: I1125 07:35:29.988992 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.059161 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.060519 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.070777 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.092655 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.105145 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.106898 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.114080 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.114325 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-45pwf" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.142053 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.168674 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.169891 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.178929 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.207865 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.212454 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f7bbc55bc-5zq72"] Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.213976 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.218674 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f46a0fe-167d-4888-8c04-f73c4c9da405-logs\") pod \"nova-metadata-0\" (UID: \"4f46a0fe-167d-4888-8c04-f73c4c9da405\") " pod="openstack/nova-metadata-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.218739 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j74nh\" (UniqueName: \"kubernetes.io/projected/9bea6d6f-4d39-459c-a4aa-30d61594b8d8-kube-api-access-j74nh\") pod \"nova-scheduler-0\" (UID: \"9bea6d6f-4d39-459c-a4aa-30d61594b8d8\") " pod="openstack/nova-scheduler-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.218767 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bea6d6f-4d39-459c-a4aa-30d61594b8d8-config-data\") pod \"nova-scheduler-0\" (UID: \"9bea6d6f-4d39-459c-a4aa-30d61594b8d8\") " pod="openstack/nova-scheduler-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.218906 5043 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f46a0fe-167d-4888-8c04-f73c4c9da405-config-data\") pod \"nova-metadata-0\" (UID: \"4f46a0fe-167d-4888-8c04-f73c4c9da405\") " pod="openstack/nova-metadata-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.218976 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f46a0fe-167d-4888-8c04-f73c4c9da405-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4f46a0fe-167d-4888-8c04-f73c4c9da405\") " pod="openstack/nova-metadata-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.220263 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bea6d6f-4d39-459c-a4aa-30d61594b8d8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9bea6d6f-4d39-459c-a4aa-30d61594b8d8\") " pod="openstack/nova-scheduler-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.220348 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8tgl\" (UniqueName: \"kubernetes.io/projected/4f46a0fe-167d-4888-8c04-f73c4c9da405-kube-api-access-c8tgl\") pod \"nova-metadata-0\" (UID: \"4f46a0fe-167d-4888-8c04-f73c4c9da405\") " pod="openstack/nova-metadata-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.221209 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f7bbc55bc-5zq72"] Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.321355 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f46a0fe-167d-4888-8c04-f73c4c9da405-config-data\") pod \"nova-metadata-0\" (UID: \"4f46a0fe-167d-4888-8c04-f73c4c9da405\") " pod="openstack/nova-metadata-0" Nov 25 07:35:30 crc 
kubenswrapper[5043]: I1125 07:35:30.321426 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8ebbe31-e042-46ed-9cdf-d58522420f63-dns-svc\") pod \"dnsmasq-dns-f7bbc55bc-5zq72\" (UID: \"e8ebbe31-e042-46ed-9cdf-d58522420f63\") " pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.321459 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xblbr\" (UniqueName: \"kubernetes.io/projected/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91-kube-api-access-xblbr\") pod \"nova-api-0\" (UID: \"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91\") " pod="openstack/nova-api-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.321485 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lhtn\" (UniqueName: \"kubernetes.io/projected/e8ebbe31-e042-46ed-9cdf-d58522420f63-kube-api-access-2lhtn\") pod \"dnsmasq-dns-f7bbc55bc-5zq72\" (UID: \"e8ebbe31-e042-46ed-9cdf-d58522420f63\") " pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.321506 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.321534 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f46a0fe-167d-4888-8c04-f73c4c9da405-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4f46a0fe-167d-4888-8c04-f73c4c9da405\") " pod="openstack/nova-metadata-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 
07:35:30.321556 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ebbe31-e042-46ed-9cdf-d58522420f63-config\") pod \"dnsmasq-dns-f7bbc55bc-5zq72\" (UID: \"e8ebbe31-e042-46ed-9cdf-d58522420f63\") " pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.321579 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bea6d6f-4d39-459c-a4aa-30d61594b8d8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9bea6d6f-4d39-459c-a4aa-30d61594b8d8\") " pod="openstack/nova-scheduler-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.321616 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8ebbe31-e042-46ed-9cdf-d58522420f63-ovsdbserver-sb\") pod \"dnsmasq-dns-f7bbc55bc-5zq72\" (UID: \"e8ebbe31-e042-46ed-9cdf-d58522420f63\") " pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.321637 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5mgz\" (UniqueName: \"kubernetes.io/projected/08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b-kube-api-access-r5mgz\") pod \"nova-cell1-novncproxy-0\" (UID: \"08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.321662 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8tgl\" (UniqueName: \"kubernetes.io/projected/4f46a0fe-167d-4888-8c04-f73c4c9da405-kube-api-access-c8tgl\") pod \"nova-metadata-0\" (UID: \"4f46a0fe-167d-4888-8c04-f73c4c9da405\") " pod="openstack/nova-metadata-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.321679 5043 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91-logs\") pod \"nova-api-0\" (UID: \"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91\") " pod="openstack/nova-api-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.321695 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f46a0fe-167d-4888-8c04-f73c4c9da405-logs\") pod \"nova-metadata-0\" (UID: \"4f46a0fe-167d-4888-8c04-f73c4c9da405\") " pod="openstack/nova-metadata-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.321716 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j74nh\" (UniqueName: \"kubernetes.io/projected/9bea6d6f-4d39-459c-a4aa-30d61594b8d8-kube-api-access-j74nh\") pod \"nova-scheduler-0\" (UID: \"9bea6d6f-4d39-459c-a4aa-30d61594b8d8\") " pod="openstack/nova-scheduler-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.321733 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bea6d6f-4d39-459c-a4aa-30d61594b8d8-config-data\") pod \"nova-scheduler-0\" (UID: \"9bea6d6f-4d39-459c-a4aa-30d61594b8d8\") " pod="openstack/nova-scheduler-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.321751 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91\") " pod="openstack/nova-api-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.321766 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/e8ebbe31-e042-46ed-9cdf-d58522420f63-ovsdbserver-nb\") pod \"dnsmasq-dns-f7bbc55bc-5zq72\" (UID: \"e8ebbe31-e042-46ed-9cdf-d58522420f63\") " pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.321786 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.321837 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91-config-data\") pod \"nova-api-0\" (UID: \"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91\") " pod="openstack/nova-api-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.332061 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f46a0fe-167d-4888-8c04-f73c4c9da405-logs\") pod \"nova-metadata-0\" (UID: \"4f46a0fe-167d-4888-8c04-f73c4c9da405\") " pod="openstack/nova-metadata-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.348535 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bea6d6f-4d39-459c-a4aa-30d61594b8d8-config-data\") pod \"nova-scheduler-0\" (UID: \"9bea6d6f-4d39-459c-a4aa-30d61594b8d8\") " pod="openstack/nova-scheduler-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.349410 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f46a0fe-167d-4888-8c04-f73c4c9da405-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4f46a0fe-167d-4888-8c04-f73c4c9da405\") " 
pod="openstack/nova-metadata-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.350018 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bea6d6f-4d39-459c-a4aa-30d61594b8d8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9bea6d6f-4d39-459c-a4aa-30d61594b8d8\") " pod="openstack/nova-scheduler-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.351168 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f46a0fe-167d-4888-8c04-f73c4c9da405-config-data\") pod \"nova-metadata-0\" (UID: \"4f46a0fe-167d-4888-8c04-f73c4c9da405\") " pod="openstack/nova-metadata-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.379275 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8tgl\" (UniqueName: \"kubernetes.io/projected/4f46a0fe-167d-4888-8c04-f73c4c9da405-kube-api-access-c8tgl\") pod \"nova-metadata-0\" (UID: \"4f46a0fe-167d-4888-8c04-f73c4c9da405\") " pod="openstack/nova-metadata-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.379451 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j74nh\" (UniqueName: \"kubernetes.io/projected/9bea6d6f-4d39-459c-a4aa-30d61594b8d8-kube-api-access-j74nh\") pod \"nova-scheduler-0\" (UID: \"9bea6d6f-4d39-459c-a4aa-30d61594b8d8\") " pod="openstack/nova-scheduler-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.393016 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.409769 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.429576 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8ebbe31-e042-46ed-9cdf-d58522420f63-ovsdbserver-sb\") pod \"dnsmasq-dns-f7bbc55bc-5zq72\" (UID: \"e8ebbe31-e042-46ed-9cdf-d58522420f63\") " pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.429643 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5mgz\" (UniqueName: \"kubernetes.io/projected/08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b-kube-api-access-r5mgz\") pod \"nova-cell1-novncproxy-0\" (UID: \"08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.429679 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91-logs\") pod \"nova-api-0\" (UID: \"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91\") " pod="openstack/nova-api-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.429717 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91\") " pod="openstack/nova-api-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.429739 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8ebbe31-e042-46ed-9cdf-d58522420f63-ovsdbserver-nb\") pod \"dnsmasq-dns-f7bbc55bc-5zq72\" (UID: \"e8ebbe31-e042-46ed-9cdf-d58522420f63\") " pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.429771 5043 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.429844 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91-config-data\") pod \"nova-api-0\" (UID: \"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91\") " pod="openstack/nova-api-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.429893 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8ebbe31-e042-46ed-9cdf-d58522420f63-dns-svc\") pod \"dnsmasq-dns-f7bbc55bc-5zq72\" (UID: \"e8ebbe31-e042-46ed-9cdf-d58522420f63\") " pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.429921 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xblbr\" (UniqueName: \"kubernetes.io/projected/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91-kube-api-access-xblbr\") pod \"nova-api-0\" (UID: \"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91\") " pod="openstack/nova-api-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.429953 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lhtn\" (UniqueName: \"kubernetes.io/projected/e8ebbe31-e042-46ed-9cdf-d58522420f63-kube-api-access-2lhtn\") pod \"dnsmasq-dns-f7bbc55bc-5zq72\" (UID: \"e8ebbe31-e042-46ed-9cdf-d58522420f63\") " pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.429972 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.430004 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ebbe31-e042-46ed-9cdf-d58522420f63-config\") pod \"dnsmasq-dns-f7bbc55bc-5zq72\" (UID: \"e8ebbe31-e042-46ed-9cdf-d58522420f63\") " pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.431091 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ebbe31-e042-46ed-9cdf-d58522420f63-config\") pod \"dnsmasq-dns-f7bbc55bc-5zq72\" (UID: \"e8ebbe31-e042-46ed-9cdf-d58522420f63\") " pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.431537 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91-logs\") pod \"nova-api-0\" (UID: \"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91\") " pod="openstack/nova-api-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.432127 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8ebbe31-e042-46ed-9cdf-d58522420f63-ovsdbserver-sb\") pod \"dnsmasq-dns-f7bbc55bc-5zq72\" (UID: \"e8ebbe31-e042-46ed-9cdf-d58522420f63\") " pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.432732 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8ebbe31-e042-46ed-9cdf-d58522420f63-dns-svc\") pod \"dnsmasq-dns-f7bbc55bc-5zq72\" (UID: \"e8ebbe31-e042-46ed-9cdf-d58522420f63\") " 
pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.435246 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8ebbe31-e042-46ed-9cdf-d58522420f63-ovsdbserver-nb\") pod \"dnsmasq-dns-f7bbc55bc-5zq72\" (UID: \"e8ebbe31-e042-46ed-9cdf-d58522420f63\") " pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.438020 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.446138 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91\") " pod="openstack/nova-api-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.463801 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91-config-data\") pod \"nova-api-0\" (UID: \"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91\") " pod="openstack/nova-api-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.471759 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.483685 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2lhtn\" (UniqueName: \"kubernetes.io/projected/e8ebbe31-e042-46ed-9cdf-d58522420f63-kube-api-access-2lhtn\") pod \"dnsmasq-dns-f7bbc55bc-5zq72\" (UID: \"e8ebbe31-e042-46ed-9cdf-d58522420f63\") " pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.490892 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xblbr\" (UniqueName: \"kubernetes.io/projected/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91-kube-api-access-xblbr\") pod \"nova-api-0\" (UID: \"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91\") " pod="openstack/nova-api-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.501289 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5mgz\" (UniqueName: \"kubernetes.io/projected/08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b-kube-api-access-r5mgz\") pod \"nova-cell1-novncproxy-0\" (UID: \"08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.540996 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.600753 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.676652 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" Nov 25 07:35:30 crc kubenswrapper[5043]: I1125 07:35:30.919787 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-45pwf"] Nov 25 07:35:30 crc kubenswrapper[5043]: W1125 07:35:30.939020 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b74b2ac_677f_4271_9aea_ffc23321eb55.slice/crio-5b2b382520033674ed8becb24bc9ceff88f5cf8421964fab987cbebd8920bfea WatchSource:0}: Error finding container 5b2b382520033674ed8becb24bc9ceff88f5cf8421964fab987cbebd8920bfea: Status 404 returned error can't find the container with id 5b2b382520033674ed8becb24bc9ceff88f5cf8421964fab987cbebd8920bfea Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.002450 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:35:31 crc kubenswrapper[5043]: W1125 07:35:31.010796 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a71871d_f5f1_4d02_9f3b_f5c2698e2b91.slice/crio-20da11b7073cb108b08108ed7ed63767a38a7c42219cc151b6153e2c3be70d1a WatchSource:0}: Error finding container 20da11b7073cb108b08108ed7ed63767a38a7c42219cc151b6153e2c3be70d1a: Status 404 returned error can't find the container with id 20da11b7073cb108b08108ed7ed63767a38a7c42219cc151b6153e2c3be70d1a Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.045218 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.062976 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c9vc6"] Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.064796 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c9vc6" Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.067987 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.068303 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 25 07:35:31 crc kubenswrapper[5043]: W1125 07:35:31.070949 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bea6d6f_4d39_459c_a4aa_30d61594b8d8.slice/crio-2e4e40269f8fc9d0101a067912fbdb8ebf20f841c5ea51d7c0429bdc2a88fbdc WatchSource:0}: Error finding container 2e4e40269f8fc9d0101a067912fbdb8ebf20f841c5ea51d7c0429bdc2a88fbdc: Status 404 returned error can't find the container with id 2e4e40269f8fc9d0101a067912fbdb8ebf20f841c5ea51d7c0429bdc2a88fbdc Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.081759 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.091633 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c9vc6"] Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.256079 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-c9vc6\" (UID: \"e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e\") " pod="openstack/nova-cell1-conductor-db-sync-c9vc6" Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.256374 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e-scripts\") pod 
\"nova-cell1-conductor-db-sync-c9vc6\" (UID: \"e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e\") " pod="openstack/nova-cell1-conductor-db-sync-c9vc6" Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.256427 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e-config-data\") pod \"nova-cell1-conductor-db-sync-c9vc6\" (UID: \"e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e\") " pod="openstack/nova-cell1-conductor-db-sync-c9vc6" Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.256454 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9slt\" (UniqueName: \"kubernetes.io/projected/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e-kube-api-access-p9slt\") pod \"nova-cell1-conductor-db-sync-c9vc6\" (UID: \"e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e\") " pod="openstack/nova-cell1-conductor-db-sync-c9vc6" Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.259700 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 07:35:31 crc kubenswrapper[5043]: W1125 07:35:31.265811 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08c30c0d_59c0_4c7a_aeb7_8c9a1ce6e84b.slice/crio-2aaa4c9a5086498f1377bb6a8c50f41f2b4660d8b86dee93b8ca63f7552e6dbf WatchSource:0}: Error finding container 2aaa4c9a5086498f1377bb6a8c50f41f2b4660d8b86dee93b8ca63f7552e6dbf: Status 404 returned error can't find the container with id 2aaa4c9a5086498f1377bb6a8c50f41f2b4660d8b86dee93b8ca63f7552e6dbf Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.276412 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f7bbc55bc-5zq72"] Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.359640 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e-config-data\") pod \"nova-cell1-conductor-db-sync-c9vc6\" (UID: \"e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e\") " pod="openstack/nova-cell1-conductor-db-sync-c9vc6" Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.359954 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9slt\" (UniqueName: \"kubernetes.io/projected/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e-kube-api-access-p9slt\") pod \"nova-cell1-conductor-db-sync-c9vc6\" (UID: \"e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e\") " pod="openstack/nova-cell1-conductor-db-sync-c9vc6" Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.360086 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-c9vc6\" (UID: \"e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e\") " pod="openstack/nova-cell1-conductor-db-sync-c9vc6" Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.360157 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e-scripts\") pod \"nova-cell1-conductor-db-sync-c9vc6\" (UID: \"e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e\") " pod="openstack/nova-cell1-conductor-db-sync-c9vc6" Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.364779 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e-config-data\") pod \"nova-cell1-conductor-db-sync-c9vc6\" (UID: \"e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e\") " pod="openstack/nova-cell1-conductor-db-sync-c9vc6" Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.365209 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e-scripts\") pod \"nova-cell1-conductor-db-sync-c9vc6\" (UID: \"e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e\") " pod="openstack/nova-cell1-conductor-db-sync-c9vc6" Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.367630 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-c9vc6\" (UID: \"e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e\") " pod="openstack/nova-cell1-conductor-db-sync-c9vc6" Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.385755 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9slt\" (UniqueName: \"kubernetes.io/projected/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e-kube-api-access-p9slt\") pod \"nova-cell1-conductor-db-sync-c9vc6\" (UID: \"e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e\") " pod="openstack/nova-cell1-conductor-db-sync-c9vc6" Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.476055 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c9vc6" Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.953918 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9bea6d6f-4d39-459c-a4aa-30d61594b8d8","Type":"ContainerStarted","Data":"2e4e40269f8fc9d0101a067912fbdb8ebf20f841c5ea51d7c0429bdc2a88fbdc"} Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.960359 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f46a0fe-167d-4888-8c04-f73c4c9da405","Type":"ContainerStarted","Data":"367c4878aafdf3f98a1af1f26c1ab0aa313dde44515ec6fd7a0e474323162265"} Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.961996 5043 generic.go:334] "Generic (PLEG): container finished" podID="e8ebbe31-e042-46ed-9cdf-d58522420f63" containerID="d2ae87f6558827b3bbaad39b409090873a8c7fedb8a5a0e7a8ee099dcdb91145" exitCode=0 Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.962036 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" event={"ID":"e8ebbe31-e042-46ed-9cdf-d58522420f63","Type":"ContainerDied","Data":"d2ae87f6558827b3bbaad39b409090873a8c7fedb8a5a0e7a8ee099dcdb91145"} Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.962053 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" event={"ID":"e8ebbe31-e042-46ed-9cdf-d58522420f63","Type":"ContainerStarted","Data":"ce7e5771bf331392a82a9d90342218b39353d724176400ff49b2ca9676772cc8"} Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.979541 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91","Type":"ContainerStarted","Data":"20da11b7073cb108b08108ed7ed63767a38a7c42219cc151b6153e2c3be70d1a"} Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.982051 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b","Type":"ContainerStarted","Data":"2aaa4c9a5086498f1377bb6a8c50f41f2b4660d8b86dee93b8ca63f7552e6dbf"} Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.989907 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c9vc6"] Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.996851 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-45pwf" event={"ID":"2b74b2ac-677f-4271-9aea-ffc23321eb55","Type":"ContainerStarted","Data":"d06a2a8462a68b14d8d667e4af9abc7103e2ae3e008a1461e1588e60dfd635cf"} Nov 25 07:35:31 crc kubenswrapper[5043]: I1125 07:35:31.996893 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-45pwf" event={"ID":"2b74b2ac-677f-4271-9aea-ffc23321eb55","Type":"ContainerStarted","Data":"5b2b382520033674ed8becb24bc9ceff88f5cf8421964fab987cbebd8920bfea"} Nov 25 07:35:33 crc kubenswrapper[5043]: I1125 07:35:33.049614 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" event={"ID":"e8ebbe31-e042-46ed-9cdf-d58522420f63","Type":"ContainerStarted","Data":"23617065064bfc266cb7f42685bbdd1a717ebffb5ee876a3887c96002de3e4eb"} Nov 25 07:35:33 crc kubenswrapper[5043]: I1125 07:35:33.049732 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" Nov 25 07:35:33 crc kubenswrapper[5043]: I1125 07:35:33.069422 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c9vc6" event={"ID":"e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e","Type":"ContainerStarted","Data":"f8066527fc9b56bbb69970e0491953a70eb49cfde7cdfffb8d3de0ff43b5e940"} Nov 25 07:35:33 crc kubenswrapper[5043]: I1125 07:35:33.069499 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c9vc6" 
event={"ID":"e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e","Type":"ContainerStarted","Data":"32bdcc237fc03ba237a39f44e014b47c232b001e4f0546596543119502a833a6"} Nov 25 07:35:33 crc kubenswrapper[5043]: I1125 07:35:33.077511 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" podStartSLOduration=3.0774964320000002 podStartE2EDuration="3.077496432s" podCreationTimestamp="2025-11-25 07:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:35:33.074596574 +0000 UTC m=+1197.242792295" watchObservedRunningTime="2025-11-25 07:35:33.077496432 +0000 UTC m=+1197.245692153" Nov 25 07:35:33 crc kubenswrapper[5043]: I1125 07:35:33.081189 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-45pwf" podStartSLOduration=4.081179381 podStartE2EDuration="4.081179381s" podCreationTimestamp="2025-11-25 07:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:35:32.02432442 +0000 UTC m=+1196.192520141" watchObservedRunningTime="2025-11-25 07:35:33.081179381 +0000 UTC m=+1197.249375102" Nov 25 07:35:33 crc kubenswrapper[5043]: I1125 07:35:33.095574 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-c9vc6" podStartSLOduration=2.095556007 podStartE2EDuration="2.095556007s" podCreationTimestamp="2025-11-25 07:35:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:35:33.092829993 +0000 UTC m=+1197.261025724" watchObservedRunningTime="2025-11-25 07:35:33.095556007 +0000 UTC m=+1197.263751718" Nov 25 07:35:33 crc kubenswrapper[5043]: I1125 07:35:33.494491 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 07:35:33 crc kubenswrapper[5043]: I1125 07:35:33.504919 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:35:35 crc kubenswrapper[5043]: I1125 07:35:35.113677 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9bea6d6f-4d39-459c-a4aa-30d61594b8d8","Type":"ContainerStarted","Data":"f7f85a57597530b72b09031ebfe1db45e36b6a1b4439a3de028f05d71bd51647"} Nov 25 07:35:35 crc kubenswrapper[5043]: I1125 07:35:35.117956 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f46a0fe-167d-4888-8c04-f73c4c9da405","Type":"ContainerStarted","Data":"4958e5d9e59bfef83d814aa9a2d3ec14021e16d1b70377ee9593a2c2e06d6145"} Nov 25 07:35:35 crc kubenswrapper[5043]: I1125 07:35:35.117998 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f46a0fe-167d-4888-8c04-f73c4c9da405","Type":"ContainerStarted","Data":"b2ef3724f8459384cd44f3d65ab89edbe31214938050c32280caf540f0483fa6"} Nov 25 07:35:35 crc kubenswrapper[5043]: I1125 07:35:35.118102 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4f46a0fe-167d-4888-8c04-f73c4c9da405" containerName="nova-metadata-log" containerID="cri-o://b2ef3724f8459384cd44f3d65ab89edbe31214938050c32280caf540f0483fa6" gracePeriod=30 Nov 25 07:35:35 crc kubenswrapper[5043]: I1125 07:35:35.118226 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4f46a0fe-167d-4888-8c04-f73c4c9da405" containerName="nova-metadata-metadata" containerID="cri-o://4958e5d9e59bfef83d814aa9a2d3ec14021e16d1b70377ee9593a2c2e06d6145" gracePeriod=30 Nov 25 07:35:35 crc kubenswrapper[5043]: I1125 07:35:35.126882 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91","Type":"ContainerStarted","Data":"eae5c7331d25c91d38f1de4b723e81ebd04f2cc8dfa60a3341a1c5b189182a63"} Nov 25 07:35:35 crc kubenswrapper[5043]: I1125 07:35:35.126933 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91","Type":"ContainerStarted","Data":"72f1fefb8497a44da1f16c0a827b34af7da36520d60d99d86fe0f7a3ab0003b6"} Nov 25 07:35:35 crc kubenswrapper[5043]: I1125 07:35:35.132886 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b","Type":"ContainerStarted","Data":"83124eaa7a1c3d1c7c141cd68742a8c64a2c15c28e5255594beb9adafd761587"} Nov 25 07:35:35 crc kubenswrapper[5043]: I1125 07:35:35.133222 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://83124eaa7a1c3d1c7c141cd68742a8c64a2c15c28e5255594beb9adafd761587" gracePeriod=30 Nov 25 07:35:35 crc kubenswrapper[5043]: I1125 07:35:35.140194 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.920116145 podStartE2EDuration="6.140176737s" podCreationTimestamp="2025-11-25 07:35:29 +0000 UTC" firstStartedPulling="2025-11-25 07:35:31.072900566 +0000 UTC m=+1195.241096287" lastFinishedPulling="2025-11-25 07:35:34.292961158 +0000 UTC m=+1198.461156879" observedRunningTime="2025-11-25 07:35:35.140056824 +0000 UTC m=+1199.308252545" watchObservedRunningTime="2025-11-25 07:35:35.140176737 +0000 UTC m=+1199.308372478" Nov 25 07:35:35 crc kubenswrapper[5043]: I1125 07:35:35.174915 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.896786159 podStartE2EDuration="5.174892498s" 
podCreationTimestamp="2025-11-25 07:35:30 +0000 UTC" firstStartedPulling="2025-11-25 07:35:31.013695158 +0000 UTC m=+1195.181890879" lastFinishedPulling="2025-11-25 07:35:34.291801467 +0000 UTC m=+1198.459997218" observedRunningTime="2025-11-25 07:35:35.164003586 +0000 UTC m=+1199.332199327" watchObservedRunningTime="2025-11-25 07:35:35.174892498 +0000 UTC m=+1199.343088219" Nov 25 07:35:35 crc kubenswrapper[5043]: I1125 07:35:35.195893 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.154121841 podStartE2EDuration="5.1958722s" podCreationTimestamp="2025-11-25 07:35:30 +0000 UTC" firstStartedPulling="2025-11-25 07:35:31.267357511 +0000 UTC m=+1195.435553232" lastFinishedPulling="2025-11-25 07:35:34.30910785 +0000 UTC m=+1198.477303591" observedRunningTime="2025-11-25 07:35:35.186039236 +0000 UTC m=+1199.354234957" watchObservedRunningTime="2025-11-25 07:35:35.1958722 +0000 UTC m=+1199.364067921" Nov 25 07:35:35 crc kubenswrapper[5043]: I1125 07:35:35.205135 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.970330273 podStartE2EDuration="5.205115609s" podCreationTimestamp="2025-11-25 07:35:30 +0000 UTC" firstStartedPulling="2025-11-25 07:35:31.07155421 +0000 UTC m=+1195.239749931" lastFinishedPulling="2025-11-25 07:35:34.306339536 +0000 UTC m=+1198.474535267" observedRunningTime="2025-11-25 07:35:35.203008172 +0000 UTC m=+1199.371203893" watchObservedRunningTime="2025-11-25 07:35:35.205115609 +0000 UTC m=+1199.373311330" Nov 25 07:35:35 crc kubenswrapper[5043]: I1125 07:35:35.394021 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 07:35:35 crc kubenswrapper[5043]: I1125 07:35:35.411201 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 07:35:35 crc kubenswrapper[5043]: I1125 07:35:35.411248 
5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 07:35:35 crc kubenswrapper[5043]: I1125 07:35:35.601975 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:35:36 crc kubenswrapper[5043]: I1125 07:35:36.150366 5043 generic.go:334] "Generic (PLEG): container finished" podID="4f46a0fe-167d-4888-8c04-f73c4c9da405" containerID="4958e5d9e59bfef83d814aa9a2d3ec14021e16d1b70377ee9593a2c2e06d6145" exitCode=0 Nov 25 07:35:36 crc kubenswrapper[5043]: I1125 07:35:36.150740 5043 generic.go:334] "Generic (PLEG): container finished" podID="4f46a0fe-167d-4888-8c04-f73c4c9da405" containerID="b2ef3724f8459384cd44f3d65ab89edbe31214938050c32280caf540f0483fa6" exitCode=143 Nov 25 07:35:36 crc kubenswrapper[5043]: I1125 07:35:36.151678 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f46a0fe-167d-4888-8c04-f73c4c9da405","Type":"ContainerDied","Data":"4958e5d9e59bfef83d814aa9a2d3ec14021e16d1b70377ee9593a2c2e06d6145"} Nov 25 07:35:36 crc kubenswrapper[5043]: I1125 07:35:36.151723 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f46a0fe-167d-4888-8c04-f73c4c9da405","Type":"ContainerDied","Data":"b2ef3724f8459384cd44f3d65ab89edbe31214938050c32280caf540f0483fa6"} Nov 25 07:35:36 crc kubenswrapper[5043]: I1125 07:35:36.379285 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 07:35:36 crc kubenswrapper[5043]: I1125 07:35:36.478202 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f46a0fe-167d-4888-8c04-f73c4c9da405-config-data\") pod \"4f46a0fe-167d-4888-8c04-f73c4c9da405\" (UID: \"4f46a0fe-167d-4888-8c04-f73c4c9da405\") " Nov 25 07:35:36 crc kubenswrapper[5043]: I1125 07:35:36.478390 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f46a0fe-167d-4888-8c04-f73c4c9da405-logs\") pod \"4f46a0fe-167d-4888-8c04-f73c4c9da405\" (UID: \"4f46a0fe-167d-4888-8c04-f73c4c9da405\") " Nov 25 07:35:36 crc kubenswrapper[5043]: I1125 07:35:36.478481 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8tgl\" (UniqueName: \"kubernetes.io/projected/4f46a0fe-167d-4888-8c04-f73c4c9da405-kube-api-access-c8tgl\") pod \"4f46a0fe-167d-4888-8c04-f73c4c9da405\" (UID: \"4f46a0fe-167d-4888-8c04-f73c4c9da405\") " Nov 25 07:35:36 crc kubenswrapper[5043]: I1125 07:35:36.478552 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f46a0fe-167d-4888-8c04-f73c4c9da405-combined-ca-bundle\") pod \"4f46a0fe-167d-4888-8c04-f73c4c9da405\" (UID: \"4f46a0fe-167d-4888-8c04-f73c4c9da405\") " Nov 25 07:35:36 crc kubenswrapper[5043]: I1125 07:35:36.478783 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f46a0fe-167d-4888-8c04-f73c4c9da405-logs" (OuterVolumeSpecName: "logs") pod "4f46a0fe-167d-4888-8c04-f73c4c9da405" (UID: "4f46a0fe-167d-4888-8c04-f73c4c9da405"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:35:36 crc kubenswrapper[5043]: I1125 07:35:36.479147 5043 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f46a0fe-167d-4888-8c04-f73c4c9da405-logs\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:36 crc kubenswrapper[5043]: I1125 07:35:36.484326 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f46a0fe-167d-4888-8c04-f73c4c9da405-kube-api-access-c8tgl" (OuterVolumeSpecName: "kube-api-access-c8tgl") pod "4f46a0fe-167d-4888-8c04-f73c4c9da405" (UID: "4f46a0fe-167d-4888-8c04-f73c4c9da405"). InnerVolumeSpecName "kube-api-access-c8tgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:35:36 crc kubenswrapper[5043]: I1125 07:35:36.505110 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f46a0fe-167d-4888-8c04-f73c4c9da405-config-data" (OuterVolumeSpecName: "config-data") pod "4f46a0fe-167d-4888-8c04-f73c4c9da405" (UID: "4f46a0fe-167d-4888-8c04-f73c4c9da405"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:35:36 crc kubenswrapper[5043]: I1125 07:35:36.506766 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f46a0fe-167d-4888-8c04-f73c4c9da405-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f46a0fe-167d-4888-8c04-f73c4c9da405" (UID: "4f46a0fe-167d-4888-8c04-f73c4c9da405"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:35:36 crc kubenswrapper[5043]: I1125 07:35:36.580737 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f46a0fe-167d-4888-8c04-f73c4c9da405-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 07:35:36 crc kubenswrapper[5043]: I1125 07:35:36.580778 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8tgl\" (UniqueName: \"kubernetes.io/projected/4f46a0fe-167d-4888-8c04-f73c4c9da405-kube-api-access-c8tgl\") on node \"crc\" DevicePath \"\""
Nov 25 07:35:36 crc kubenswrapper[5043]: I1125 07:35:36.580789 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f46a0fe-167d-4888-8c04-f73c4c9da405-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.160325 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.160649 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f46a0fe-167d-4888-8c04-f73c4c9da405","Type":"ContainerDied","Data":"367c4878aafdf3f98a1af1f26c1ab0aa313dde44515ec6fd7a0e474323162265"}
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.160693 5043 scope.go:117] "RemoveContainer" containerID="4958e5d9e59bfef83d814aa9a2d3ec14021e16d1b70377ee9593a2c2e06d6145"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.188291 5043 scope.go:117] "RemoveContainer" containerID="b2ef3724f8459384cd44f3d65ab89edbe31214938050c32280caf540f0483fa6"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.202783 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.213591 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.222383 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 07:35:37 crc kubenswrapper[5043]: E1125 07:35:37.222805 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f46a0fe-167d-4888-8c04-f73c4c9da405" containerName="nova-metadata-metadata"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.222817 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f46a0fe-167d-4888-8c04-f73c4c9da405" containerName="nova-metadata-metadata"
Nov 25 07:35:37 crc kubenswrapper[5043]: E1125 07:35:37.222833 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f46a0fe-167d-4888-8c04-f73c4c9da405" containerName="nova-metadata-log"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.222839 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f46a0fe-167d-4888-8c04-f73c4c9da405" containerName="nova-metadata-log"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.223012 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f46a0fe-167d-4888-8c04-f73c4c9da405" containerName="nova-metadata-metadata"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.223027 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f46a0fe-167d-4888-8c04-f73c4c9da405" containerName="nova-metadata-log"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.224560 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.232326 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.232493 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.247447 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.308332 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\") " pod="openstack/nova-metadata-0"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.308639 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\") " pod="openstack/nova-metadata-0"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.308769 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzk9c\" (UniqueName: \"kubernetes.io/projected/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-kube-api-access-wzk9c\") pod \"nova-metadata-0\" (UID: \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\") " pod="openstack/nova-metadata-0"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.308913 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-config-data\") pod \"nova-metadata-0\" (UID: \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\") " pod="openstack/nova-metadata-0"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.309029 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-logs\") pod \"nova-metadata-0\" (UID: \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\") " pod="openstack/nova-metadata-0"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.410158 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-config-data\") pod \"nova-metadata-0\" (UID: \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\") " pod="openstack/nova-metadata-0"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.410235 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-logs\") pod \"nova-metadata-0\" (UID: \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\") " pod="openstack/nova-metadata-0"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.410289 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\") " pod="openstack/nova-metadata-0"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.410319 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\") " pod="openstack/nova-metadata-0"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.410357 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzk9c\" (UniqueName: \"kubernetes.io/projected/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-kube-api-access-wzk9c\") pod \"nova-metadata-0\" (UID: \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\") " pod="openstack/nova-metadata-0"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.410711 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-logs\") pod \"nova-metadata-0\" (UID: \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\") " pod="openstack/nova-metadata-0"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.414424 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\") " pod="openstack/nova-metadata-0"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.414823 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\") " pod="openstack/nova-metadata-0"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.423489 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-config-data\") pod \"nova-metadata-0\" (UID: \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\") " pod="openstack/nova-metadata-0"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.442406 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzk9c\" (UniqueName: \"kubernetes.io/projected/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-kube-api-access-wzk9c\") pod \"nova-metadata-0\" (UID: \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\") " pod="openstack/nova-metadata-0"
Nov 25 07:35:37 crc kubenswrapper[5043]: I1125 07:35:37.549429 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 25 07:35:38 crc kubenswrapper[5043]: I1125 07:35:38.007244 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 07:35:38 crc kubenswrapper[5043]: I1125 07:35:38.175506 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aad7d2e2-f69f-414e-bc2f-c6614a766e0e","Type":"ContainerStarted","Data":"2834df91857fc325f97548fa2c43f2e8285ad09fdb4ff0d0c7a47bffb0f844a9"}
Nov 25 07:35:38 crc kubenswrapper[5043]: I1125 07:35:38.175900 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aad7d2e2-f69f-414e-bc2f-c6614a766e0e","Type":"ContainerStarted","Data":"bc0d75142c8689ae62fa780a8e3cad32aa02837962b26985585d4a4040d86136"}
Nov 25 07:35:38 crc kubenswrapper[5043]: I1125 07:35:38.980531 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f46a0fe-167d-4888-8c04-f73c4c9da405" path="/var/lib/kubelet/pods/4f46a0fe-167d-4888-8c04-f73c4c9da405/volumes"
Nov 25 07:35:39 crc kubenswrapper[5043]: I1125 07:35:39.195129 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aad7d2e2-f69f-414e-bc2f-c6614a766e0e","Type":"ContainerStarted","Data":"4833b2a23b06a38907d3319c80be192fafe3be1e8c79c5482164050d5909b00d"}
Nov 25 07:35:39 crc kubenswrapper[5043]: I1125 07:35:39.200150 5043 generic.go:334] "Generic (PLEG): container finished" podID="2b74b2ac-677f-4271-9aea-ffc23321eb55" containerID="d06a2a8462a68b14d8d667e4af9abc7103e2ae3e008a1461e1588e60dfd635cf" exitCode=0
Nov 25 07:35:39 crc kubenswrapper[5043]: I1125 07:35:39.200194 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-45pwf" event={"ID":"2b74b2ac-677f-4271-9aea-ffc23321eb55","Type":"ContainerDied","Data":"d06a2a8462a68b14d8d667e4af9abc7103e2ae3e008a1461e1588e60dfd635cf"}
Nov 25 07:35:39 crc kubenswrapper[5043]: I1125 07:35:39.227587 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.227560457 podStartE2EDuration="2.227560457s" podCreationTimestamp="2025-11-25 07:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:35:39.225641096 +0000 UTC m=+1203.393836837" watchObservedRunningTime="2025-11-25 07:35:39.227560457 +0000 UTC m=+1203.395756198"
Nov 25 07:35:40 crc kubenswrapper[5043]: I1125 07:35:40.394571 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Nov 25 07:35:40 crc kubenswrapper[5043]: I1125 07:35:40.445037 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Nov 25 07:35:40 crc kubenswrapper[5043]: I1125 07:35:40.542333 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 25 07:35:40 crc kubenswrapper[5043]: I1125 07:35:40.542373 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 25 07:35:40 crc kubenswrapper[5043]: I1125 07:35:40.611696 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-45pwf"
Nov 25 07:35:40 crc kubenswrapper[5043]: I1125 07:35:40.678999 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72"
Nov 25 07:35:40 crc kubenswrapper[5043]: I1125 07:35:40.685305 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b74b2ac-677f-4271-9aea-ffc23321eb55-combined-ca-bundle\") pod \"2b74b2ac-677f-4271-9aea-ffc23321eb55\" (UID: \"2b74b2ac-677f-4271-9aea-ffc23321eb55\") "
Nov 25 07:35:40 crc kubenswrapper[5043]: I1125 07:35:40.685379 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz42f\" (UniqueName: \"kubernetes.io/projected/2b74b2ac-677f-4271-9aea-ffc23321eb55-kube-api-access-mz42f\") pod \"2b74b2ac-677f-4271-9aea-ffc23321eb55\" (UID: \"2b74b2ac-677f-4271-9aea-ffc23321eb55\") "
Nov 25 07:35:40 crc kubenswrapper[5043]: I1125 07:35:40.685487 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b74b2ac-677f-4271-9aea-ffc23321eb55-config-data\") pod \"2b74b2ac-677f-4271-9aea-ffc23321eb55\" (UID: \"2b74b2ac-677f-4271-9aea-ffc23321eb55\") "
Nov 25 07:35:40 crc kubenswrapper[5043]: I1125 07:35:40.685573 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b74b2ac-677f-4271-9aea-ffc23321eb55-scripts\") pod \"2b74b2ac-677f-4271-9aea-ffc23321eb55\" (UID: \"2b74b2ac-677f-4271-9aea-ffc23321eb55\") "
Nov 25 07:35:40 crc kubenswrapper[5043]: I1125 07:35:40.691068 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b74b2ac-677f-4271-9aea-ffc23321eb55-kube-api-access-mz42f" (OuterVolumeSpecName: "kube-api-access-mz42f") pod "2b74b2ac-677f-4271-9aea-ffc23321eb55" (UID: "2b74b2ac-677f-4271-9aea-ffc23321eb55"). InnerVolumeSpecName "kube-api-access-mz42f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:35:40 crc kubenswrapper[5043]: I1125 07:35:40.691696 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b74b2ac-677f-4271-9aea-ffc23321eb55-scripts" (OuterVolumeSpecName: "scripts") pod "2b74b2ac-677f-4271-9aea-ffc23321eb55" (UID: "2b74b2ac-677f-4271-9aea-ffc23321eb55"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:35:40 crc kubenswrapper[5043]: I1125 07:35:40.737754 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b74b2ac-677f-4271-9aea-ffc23321eb55-config-data" (OuterVolumeSpecName: "config-data") pod "2b74b2ac-677f-4271-9aea-ffc23321eb55" (UID: "2b74b2ac-677f-4271-9aea-ffc23321eb55"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:35:40 crc kubenswrapper[5043]: I1125 07:35:40.745637 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bc89f58d7-rsnk9"]
Nov 25 07:35:40 crc kubenswrapper[5043]: I1125 07:35:40.745974 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9" podUID="64e4603e-878b-49db-9ed3-c4980a27a768" containerName="dnsmasq-dns" containerID="cri-o://e6b6b0755f5b1b89458a8c6c97874bccc41b54b5ba3aecb45bb8b35bff6ad801" gracePeriod=10
Nov 25 07:35:40 crc kubenswrapper[5043]: I1125 07:35:40.759006 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b74b2ac-677f-4271-9aea-ffc23321eb55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b74b2ac-677f-4271-9aea-ffc23321eb55" (UID: "2b74b2ac-677f-4271-9aea-ffc23321eb55"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:35:40 crc kubenswrapper[5043]: I1125 07:35:40.789234 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b74b2ac-677f-4271-9aea-ffc23321eb55-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 07:35:40 crc kubenswrapper[5043]: I1125 07:35:40.789268 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b74b2ac-677f-4271-9aea-ffc23321eb55-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 07:35:40 crc kubenswrapper[5043]: I1125 07:35:40.789280 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b74b2ac-677f-4271-9aea-ffc23321eb55-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 07:35:40 crc kubenswrapper[5043]: I1125 07:35:40.789295 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz42f\" (UniqueName: \"kubernetes.io/projected/2b74b2ac-677f-4271-9aea-ffc23321eb55-kube-api-access-mz42f\") on node \"crc\" DevicePath \"\""
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.146806 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9"
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.195339 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64e4603e-878b-49db-9ed3-c4980a27a768-ovsdbserver-sb\") pod \"64e4603e-878b-49db-9ed3-c4980a27a768\" (UID: \"64e4603e-878b-49db-9ed3-c4980a27a768\") "
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.195401 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64e4603e-878b-49db-9ed3-c4980a27a768-ovsdbserver-nb\") pod \"64e4603e-878b-49db-9ed3-c4980a27a768\" (UID: \"64e4603e-878b-49db-9ed3-c4980a27a768\") "
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.195579 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64e4603e-878b-49db-9ed3-c4980a27a768-dns-svc\") pod \"64e4603e-878b-49db-9ed3-c4980a27a768\" (UID: \"64e4603e-878b-49db-9ed3-c4980a27a768\") "
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.195646 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsfxw\" (UniqueName: \"kubernetes.io/projected/64e4603e-878b-49db-9ed3-c4980a27a768-kube-api-access-tsfxw\") pod \"64e4603e-878b-49db-9ed3-c4980a27a768\" (UID: \"64e4603e-878b-49db-9ed3-c4980a27a768\") "
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.195696 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64e4603e-878b-49db-9ed3-c4980a27a768-config\") pod \"64e4603e-878b-49db-9ed3-c4980a27a768\" (UID: \"64e4603e-878b-49db-9ed3-c4980a27a768\") "
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.203662 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64e4603e-878b-49db-9ed3-c4980a27a768-kube-api-access-tsfxw" (OuterVolumeSpecName: "kube-api-access-tsfxw") pod "64e4603e-878b-49db-9ed3-c4980a27a768" (UID: "64e4603e-878b-49db-9ed3-c4980a27a768"). InnerVolumeSpecName "kube-api-access-tsfxw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.226999 5043 generic.go:334] "Generic (PLEG): container finished" podID="64e4603e-878b-49db-9ed3-c4980a27a768" containerID="e6b6b0755f5b1b89458a8c6c97874bccc41b54b5ba3aecb45bb8b35bff6ad801" exitCode=0
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.227062 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9"
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.227080 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9" event={"ID":"64e4603e-878b-49db-9ed3-c4980a27a768","Type":"ContainerDied","Data":"e6b6b0755f5b1b89458a8c6c97874bccc41b54b5ba3aecb45bb8b35bff6ad801"}
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.227426 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bc89f58d7-rsnk9" event={"ID":"64e4603e-878b-49db-9ed3-c4980a27a768","Type":"ContainerDied","Data":"7db53cd15158629e49681b9c7877ae0ef0bd15d0a25a8262bea395d6d68bd47d"}
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.227456 5043 scope.go:117] "RemoveContainer" containerID="e6b6b0755f5b1b89458a8c6c97874bccc41b54b5ba3aecb45bb8b35bff6ad801"
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.231238 5043 generic.go:334] "Generic (PLEG): container finished" podID="e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e" containerID="f8066527fc9b56bbb69970e0491953a70eb49cfde7cdfffb8d3de0ff43b5e940" exitCode=0
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.231283 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c9vc6" event={"ID":"e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e","Type":"ContainerDied","Data":"f8066527fc9b56bbb69970e0491953a70eb49cfde7cdfffb8d3de0ff43b5e940"}
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.235368 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-45pwf"
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.235920 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-45pwf" event={"ID":"2b74b2ac-677f-4271-9aea-ffc23321eb55","Type":"ContainerDied","Data":"5b2b382520033674ed8becb24bc9ceff88f5cf8421964fab987cbebd8920bfea"}
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.235941 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b2b382520033674ed8becb24bc9ceff88f5cf8421964fab987cbebd8920bfea"
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.250247 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64e4603e-878b-49db-9ed3-c4980a27a768-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "64e4603e-878b-49db-9ed3-c4980a27a768" (UID: "64e4603e-878b-49db-9ed3-c4980a27a768"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.254083 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64e4603e-878b-49db-9ed3-c4980a27a768-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "64e4603e-878b-49db-9ed3-c4980a27a768" (UID: "64e4603e-878b-49db-9ed3-c4980a27a768"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.267440 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64e4603e-878b-49db-9ed3-c4980a27a768-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "64e4603e-878b-49db-9ed3-c4980a27a768" (UID: "64e4603e-878b-49db-9ed3-c4980a27a768"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.276373 5043 scope.go:117] "RemoveContainer" containerID="530903458d682719b1fb1bbcb60b98d3742740971db9c65339e64fcbee76d52d"
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.279932 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.297954 5043 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64e4603e-878b-49db-9ed3-c4980a27a768-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.297993 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsfxw\" (UniqueName: \"kubernetes.io/projected/64e4603e-878b-49db-9ed3-c4980a27a768-kube-api-access-tsfxw\") on node \"crc\" DevicePath \"\""
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.298006 5043 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64e4603e-878b-49db-9ed3-c4980a27a768-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.298015 5043 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64e4603e-878b-49db-9ed3-c4980a27a768-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.299801 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64e4603e-878b-49db-9ed3-c4980a27a768-config" (OuterVolumeSpecName: "config") pod "64e4603e-878b-49db-9ed3-c4980a27a768" (UID: "64e4603e-878b-49db-9ed3-c4980a27a768"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.309568 5043 scope.go:117] "RemoveContainer" containerID="e6b6b0755f5b1b89458a8c6c97874bccc41b54b5ba3aecb45bb8b35bff6ad801"
Nov 25 07:35:41 crc kubenswrapper[5043]: E1125 07:35:41.309993 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6b6b0755f5b1b89458a8c6c97874bccc41b54b5ba3aecb45bb8b35bff6ad801\": container with ID starting with e6b6b0755f5b1b89458a8c6c97874bccc41b54b5ba3aecb45bb8b35bff6ad801 not found: ID does not exist" containerID="e6b6b0755f5b1b89458a8c6c97874bccc41b54b5ba3aecb45bb8b35bff6ad801"
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.310088 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6b6b0755f5b1b89458a8c6c97874bccc41b54b5ba3aecb45bb8b35bff6ad801"} err="failed to get container status \"e6b6b0755f5b1b89458a8c6c97874bccc41b54b5ba3aecb45bb8b35bff6ad801\": rpc error: code = NotFound desc = could not find container \"e6b6b0755f5b1b89458a8c6c97874bccc41b54b5ba3aecb45bb8b35bff6ad801\": container with ID starting with e6b6b0755f5b1b89458a8c6c97874bccc41b54b5ba3aecb45bb8b35bff6ad801 not found: ID does not exist"
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.310186 5043 scope.go:117] "RemoveContainer" containerID="530903458d682719b1fb1bbcb60b98d3742740971db9c65339e64fcbee76d52d"
Nov 25 07:35:41 crc kubenswrapper[5043]: E1125 07:35:41.310497 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"530903458d682719b1fb1bbcb60b98d3742740971db9c65339e64fcbee76d52d\": container with ID starting with 530903458d682719b1fb1bbcb60b98d3742740971db9c65339e64fcbee76d52d not found: ID does not exist" containerID="530903458d682719b1fb1bbcb60b98d3742740971db9c65339e64fcbee76d52d"
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.310532 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"530903458d682719b1fb1bbcb60b98d3742740971db9c65339e64fcbee76d52d"} err="failed to get container status \"530903458d682719b1fb1bbcb60b98d3742740971db9c65339e64fcbee76d52d\": rpc error: code = NotFound desc = could not find container \"530903458d682719b1fb1bbcb60b98d3742740971db9c65339e64fcbee76d52d\": container with ID starting with 530903458d682719b1fb1bbcb60b98d3742740971db9c65339e64fcbee76d52d not found: ID does not exist"
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.344007 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.344205 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0a71871d-f5f1-4d02-9f3b-f5c2698e2b91" containerName="nova-api-log" containerID="cri-o://72f1fefb8497a44da1f16c0a827b34af7da36520d60d99d86fe0f7a3ab0003b6" gracePeriod=30
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.344338 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0a71871d-f5f1-4d02-9f3b-f5c2698e2b91" containerName="nova-api-api" containerID="cri-o://eae5c7331d25c91d38f1de4b723e81ebd04f2cc8dfa60a3341a1c5b189182a63" gracePeriod=30
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.347681 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0a71871d-f5f1-4d02-9f3b-f5c2698e2b91" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.178:8774/\": EOF"
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.349715 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0a71871d-f5f1-4d02-9f3b-f5c2698e2b91" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.178:8774/\": EOF"
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.399571 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64e4603e-878b-49db-9ed3-c4980a27a768-config\") on node \"crc\" DevicePath \"\""
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.401583 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.401832 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="aad7d2e2-f69f-414e-bc2f-c6614a766e0e" containerName="nova-metadata-log" containerID="cri-o://2834df91857fc325f97548fa2c43f2e8285ad09fdb4ff0d0c7a47bffb0f844a9" gracePeriod=30
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.401866 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="aad7d2e2-f69f-414e-bc2f-c6614a766e0e" containerName="nova-metadata-metadata" containerID="cri-o://4833b2a23b06a38907d3319c80be192fafe3be1e8c79c5482164050d5909b00d" gracePeriod=30
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.556839 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bc89f58d7-rsnk9"]
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.570202 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bc89f58d7-rsnk9"]
Nov 25 07:35:41 crc kubenswrapper[5043]: I1125 07:35:41.731235 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.053251 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.112080 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-logs\") pod \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\" (UID: \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\") "
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.112193 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-combined-ca-bundle\") pod \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\" (UID: \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\") "
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.112407 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzk9c\" (UniqueName: \"kubernetes.io/projected/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-kube-api-access-wzk9c\") pod \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\" (UID: \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\") "
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.112466 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-config-data\") pod \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\" (UID: \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\") "
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.112509 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-nova-metadata-tls-certs\") pod \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\" (UID: \"aad7d2e2-f69f-414e-bc2f-c6614a766e0e\") "
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.114876 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-logs" (OuterVolumeSpecName: "logs") pod "aad7d2e2-f69f-414e-bc2f-c6614a766e0e" (UID: "aad7d2e2-f69f-414e-bc2f-c6614a766e0e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.120775 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-kube-api-access-wzk9c" (OuterVolumeSpecName: "kube-api-access-wzk9c") pod "aad7d2e2-f69f-414e-bc2f-c6614a766e0e" (UID: "aad7d2e2-f69f-414e-bc2f-c6614a766e0e"). InnerVolumeSpecName "kube-api-access-wzk9c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.150686 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-config-data" (OuterVolumeSpecName: "config-data") pod "aad7d2e2-f69f-414e-bc2f-c6614a766e0e" (UID: "aad7d2e2-f69f-414e-bc2f-c6614a766e0e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.155661 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aad7d2e2-f69f-414e-bc2f-c6614a766e0e" (UID: "aad7d2e2-f69f-414e-bc2f-c6614a766e0e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.188778 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "aad7d2e2-f69f-414e-bc2f-c6614a766e0e" (UID: "aad7d2e2-f69f-414e-bc2f-c6614a766e0e"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.215049 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzk9c\" (UniqueName: \"kubernetes.io/projected/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-kube-api-access-wzk9c\") on node \"crc\" DevicePath \"\""
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.215069 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.215079 5043 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.215086 5043 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-logs\") on node \"crc\" DevicePath \"\""
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.215096 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aad7d2e2-f69f-414e-bc2f-c6614a766e0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.244145 5043 generic.go:334] "Generic (PLEG): container finished" podID="0a71871d-f5f1-4d02-9f3b-f5c2698e2b91" containerID="72f1fefb8497a44da1f16c0a827b34af7da36520d60d99d86fe0f7a3ab0003b6" exitCode=143
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.244244 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91","Type":"ContainerDied","Data":"72f1fefb8497a44da1f16c0a827b34af7da36520d60d99d86fe0f7a3ab0003b6"}
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.246179 5043 generic.go:334] "Generic (PLEG): container finished" podID="aad7d2e2-f69f-414e-bc2f-c6614a766e0e" containerID="4833b2a23b06a38907d3319c80be192fafe3be1e8c79c5482164050d5909b00d" exitCode=0
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.246205 5043 generic.go:334] "Generic (PLEG): container finished" podID="aad7d2e2-f69f-414e-bc2f-c6614a766e0e" containerID="2834df91857fc325f97548fa2c43f2e8285ad09fdb4ff0d0c7a47bffb0f844a9" exitCode=143
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.246228 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.246256 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aad7d2e2-f69f-414e-bc2f-c6614a766e0e","Type":"ContainerDied","Data":"4833b2a23b06a38907d3319c80be192fafe3be1e8c79c5482164050d5909b00d"}
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.246288 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aad7d2e2-f69f-414e-bc2f-c6614a766e0e","Type":"ContainerDied","Data":"2834df91857fc325f97548fa2c43f2e8285ad09fdb4ff0d0c7a47bffb0f844a9"}
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.246301 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aad7d2e2-f69f-414e-bc2f-c6614a766e0e","Type":"ContainerDied","Data":"bc0d75142c8689ae62fa780a8e3cad32aa02837962b26985585d4a4040d86136"}
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.246322 5043 scope.go:117] "RemoveContainer" containerID="4833b2a23b06a38907d3319c80be192fafe3be1e8c79c5482164050d5909b00d"
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.290276 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.316047 5043
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.322649 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:35:42 crc kubenswrapper[5043]: E1125 07:35:42.323032 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad7d2e2-f69f-414e-bc2f-c6614a766e0e" containerName="nova-metadata-metadata" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.323050 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad7d2e2-f69f-414e-bc2f-c6614a766e0e" containerName="nova-metadata-metadata" Nov 25 07:35:42 crc kubenswrapper[5043]: E1125 07:35:42.323062 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64e4603e-878b-49db-9ed3-c4980a27a768" containerName="init" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.323069 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="64e4603e-878b-49db-9ed3-c4980a27a768" containerName="init" Nov 25 07:35:42 crc kubenswrapper[5043]: E1125 07:35:42.323080 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b74b2ac-677f-4271-9aea-ffc23321eb55" containerName="nova-manage" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.323085 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b74b2ac-677f-4271-9aea-ffc23321eb55" containerName="nova-manage" Nov 25 07:35:42 crc kubenswrapper[5043]: E1125 07:35:42.323103 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aad7d2e2-f69f-414e-bc2f-c6614a766e0e" containerName="nova-metadata-log" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.323110 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="aad7d2e2-f69f-414e-bc2f-c6614a766e0e" containerName="nova-metadata-log" Nov 25 07:35:42 crc kubenswrapper[5043]: E1125 07:35:42.323119 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64e4603e-878b-49db-9ed3-c4980a27a768" 
containerName="dnsmasq-dns" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.323125 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="64e4603e-878b-49db-9ed3-c4980a27a768" containerName="dnsmasq-dns" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.323276 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad7d2e2-f69f-414e-bc2f-c6614a766e0e" containerName="nova-metadata-metadata" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.323285 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="aad7d2e2-f69f-414e-bc2f-c6614a766e0e" containerName="nova-metadata-log" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.323302 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="64e4603e-878b-49db-9ed3-c4980a27a768" containerName="dnsmasq-dns" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.323313 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b74b2ac-677f-4271-9aea-ffc23321eb55" containerName="nova-manage" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.324196 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.326724 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.326796 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.329001 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.341841 5043 scope.go:117] "RemoveContainer" containerID="2834df91857fc325f97548fa2c43f2e8285ad09fdb4ff0d0c7a47bffb0f844a9" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.374833 5043 scope.go:117] "RemoveContainer" containerID="4833b2a23b06a38907d3319c80be192fafe3be1e8c79c5482164050d5909b00d" Nov 25 07:35:42 crc kubenswrapper[5043]: E1125 07:35:42.375404 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4833b2a23b06a38907d3319c80be192fafe3be1e8c79c5482164050d5909b00d\": container with ID starting with 4833b2a23b06a38907d3319c80be192fafe3be1e8c79c5482164050d5909b00d not found: ID does not exist" containerID="4833b2a23b06a38907d3319c80be192fafe3be1e8c79c5482164050d5909b00d" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.375433 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4833b2a23b06a38907d3319c80be192fafe3be1e8c79c5482164050d5909b00d"} err="failed to get container status \"4833b2a23b06a38907d3319c80be192fafe3be1e8c79c5482164050d5909b00d\": rpc error: code = NotFound desc = could not find container \"4833b2a23b06a38907d3319c80be192fafe3be1e8c79c5482164050d5909b00d\": container with ID starting with 4833b2a23b06a38907d3319c80be192fafe3be1e8c79c5482164050d5909b00d not found: ID does not exist" Nov 25 07:35:42 crc 
kubenswrapper[5043]: I1125 07:35:42.375459 5043 scope.go:117] "RemoveContainer" containerID="2834df91857fc325f97548fa2c43f2e8285ad09fdb4ff0d0c7a47bffb0f844a9" Nov 25 07:35:42 crc kubenswrapper[5043]: E1125 07:35:42.375810 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2834df91857fc325f97548fa2c43f2e8285ad09fdb4ff0d0c7a47bffb0f844a9\": container with ID starting with 2834df91857fc325f97548fa2c43f2e8285ad09fdb4ff0d0c7a47bffb0f844a9 not found: ID does not exist" containerID="2834df91857fc325f97548fa2c43f2e8285ad09fdb4ff0d0c7a47bffb0f844a9" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.375836 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2834df91857fc325f97548fa2c43f2e8285ad09fdb4ff0d0c7a47bffb0f844a9"} err="failed to get container status \"2834df91857fc325f97548fa2c43f2e8285ad09fdb4ff0d0c7a47bffb0f844a9\": rpc error: code = NotFound desc = could not find container \"2834df91857fc325f97548fa2c43f2e8285ad09fdb4ff0d0c7a47bffb0f844a9\": container with ID starting with 2834df91857fc325f97548fa2c43f2e8285ad09fdb4ff0d0c7a47bffb0f844a9 not found: ID does not exist" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.375853 5043 scope.go:117] "RemoveContainer" containerID="4833b2a23b06a38907d3319c80be192fafe3be1e8c79c5482164050d5909b00d" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.376029 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4833b2a23b06a38907d3319c80be192fafe3be1e8c79c5482164050d5909b00d"} err="failed to get container status \"4833b2a23b06a38907d3319c80be192fafe3be1e8c79c5482164050d5909b00d\": rpc error: code = NotFound desc = could not find container \"4833b2a23b06a38907d3319c80be192fafe3be1e8c79c5482164050d5909b00d\": container with ID starting with 4833b2a23b06a38907d3319c80be192fafe3be1e8c79c5482164050d5909b00d not found: ID does not exist" Nov 25 
07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.376047 5043 scope.go:117] "RemoveContainer" containerID="2834df91857fc325f97548fa2c43f2e8285ad09fdb4ff0d0c7a47bffb0f844a9" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.376206 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2834df91857fc325f97548fa2c43f2e8285ad09fdb4ff0d0c7a47bffb0f844a9"} err="failed to get container status \"2834df91857fc325f97548fa2c43f2e8285ad09fdb4ff0d0c7a47bffb0f844a9\": rpc error: code = NotFound desc = could not find container \"2834df91857fc325f97548fa2c43f2e8285ad09fdb4ff0d0c7a47bffb0f844a9\": container with ID starting with 2834df91857fc325f97548fa2c43f2e8285ad09fdb4ff0d0c7a47bffb0f844a9 not found: ID does not exist" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.419018 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\") " pod="openstack/nova-metadata-0" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.419134 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpljz\" (UniqueName: \"kubernetes.io/projected/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-kube-api-access-jpljz\") pod \"nova-metadata-0\" (UID: \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\") " pod="openstack/nova-metadata-0" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.419159 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\") " pod="openstack/nova-metadata-0" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 
07:35:42.419196 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-config-data\") pod \"nova-metadata-0\" (UID: \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\") " pod="openstack/nova-metadata-0" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.419288 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-logs\") pod \"nova-metadata-0\" (UID: \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\") " pod="openstack/nova-metadata-0" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.520716 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpljz\" (UniqueName: \"kubernetes.io/projected/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-kube-api-access-jpljz\") pod \"nova-metadata-0\" (UID: \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\") " pod="openstack/nova-metadata-0" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.520758 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\") " pod="openstack/nova-metadata-0" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.520780 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-config-data\") pod \"nova-metadata-0\" (UID: \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\") " pod="openstack/nova-metadata-0" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.520883 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-logs\") pod \"nova-metadata-0\" (UID: \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\") " pod="openstack/nova-metadata-0" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.520932 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\") " pod="openstack/nova-metadata-0" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.521995 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-logs\") pod \"nova-metadata-0\" (UID: \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\") " pod="openstack/nova-metadata-0" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.525466 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-config-data\") pod \"nova-metadata-0\" (UID: \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\") " pod="openstack/nova-metadata-0" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.525473 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\") " pod="openstack/nova-metadata-0" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.529259 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\") " pod="openstack/nova-metadata-0" Nov 25 07:35:42 crc kubenswrapper[5043]: 
I1125 07:35:42.542243 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpljz\" (UniqueName: \"kubernetes.io/projected/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-kube-api-access-jpljz\") pod \"nova-metadata-0\" (UID: \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\") " pod="openstack/nova-metadata-0" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.647245 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c9vc6" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.671796 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.724151 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e-config-data\") pod \"e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e\" (UID: \"e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e\") " Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.724268 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e-combined-ca-bundle\") pod \"e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e\" (UID: \"e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e\") " Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.724350 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e-scripts\") pod \"e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e\" (UID: \"e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e\") " Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.724404 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9slt\" (UniqueName: 
\"kubernetes.io/projected/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e-kube-api-access-p9slt\") pod \"e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e\" (UID: \"e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e\") " Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.753904 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e-kube-api-access-p9slt" (OuterVolumeSpecName: "kube-api-access-p9slt") pod "e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e" (UID: "e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e"). InnerVolumeSpecName "kube-api-access-p9slt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.757727 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e-scripts" (OuterVolumeSpecName: "scripts") pod "e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e" (UID: "e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.762161 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e-config-data" (OuterVolumeSpecName: "config-data") pod "e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e" (UID: "e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.765325 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e" (UID: "e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.827932 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.827973 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.828018 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.828031 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9slt\" (UniqueName: \"kubernetes.io/projected/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e-kube-api-access-p9slt\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.996971 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64e4603e-878b-49db-9ed3-c4980a27a768" path="/var/lib/kubelet/pods/64e4603e-878b-49db-9ed3-c4980a27a768/volumes" Nov 25 07:35:42 crc kubenswrapper[5043]: I1125 07:35:42.998675 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aad7d2e2-f69f-414e-bc2f-c6614a766e0e" path="/var/lib/kubelet/pods/aad7d2e2-f69f-414e-bc2f-c6614a766e0e/volumes" Nov 25 07:35:43 crc kubenswrapper[5043]: I1125 07:35:43.131059 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:35:43 crc kubenswrapper[5043]: I1125 07:35:43.287991 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"6f863d4a-cbec-4cc9-b0e8-968685a1d72a","Type":"ContainerStarted","Data":"bd49e4899a491f3eb5e7dc78a53ba6974f4da0c54e3641a6d14d2ed768435fe7"} Nov 25 07:35:43 crc kubenswrapper[5043]: I1125 07:35:43.291151 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-c9vc6" Nov 25 07:35:43 crc kubenswrapper[5043]: I1125 07:35:43.291187 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-c9vc6" event={"ID":"e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e","Type":"ContainerDied","Data":"32bdcc237fc03ba237a39f44e014b47c232b001e4f0546596543119502a833a6"} Nov 25 07:35:43 crc kubenswrapper[5043]: I1125 07:35:43.291208 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32bdcc237fc03ba237a39f44e014b47c232b001e4f0546596543119502a833a6" Nov 25 07:35:43 crc kubenswrapper[5043]: I1125 07:35:43.291246 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9bea6d6f-4d39-459c-a4aa-30d61594b8d8" containerName="nova-scheduler-scheduler" containerID="cri-o://f7f85a57597530b72b09031ebfe1db45e36b6a1b4439a3de028f05d71bd51647" gracePeriod=30 Nov 25 07:35:43 crc kubenswrapper[5043]: I1125 07:35:43.328009 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 07:35:43 crc kubenswrapper[5043]: E1125 07:35:43.328364 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e" containerName="nova-cell1-conductor-db-sync" Nov 25 07:35:43 crc kubenswrapper[5043]: I1125 07:35:43.328375 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e" containerName="nova-cell1-conductor-db-sync" Nov 25 07:35:43 crc kubenswrapper[5043]: I1125 07:35:43.328541 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e" 
containerName="nova-cell1-conductor-db-sync" Nov 25 07:35:43 crc kubenswrapper[5043]: I1125 07:35:43.329128 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 25 07:35:43 crc kubenswrapper[5043]: I1125 07:35:43.333799 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 25 07:35:43 crc kubenswrapper[5043]: I1125 07:35:43.341664 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 07:35:43 crc kubenswrapper[5043]: I1125 07:35:43.459555 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af0b6cee-dd8f-48ce-9b2b-bbc163d66f2a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"af0b6cee-dd8f-48ce-9b2b-bbc163d66f2a\") " pod="openstack/nova-cell1-conductor-0" Nov 25 07:35:43 crc kubenswrapper[5043]: I1125 07:35:43.459853 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx82d\" (UniqueName: \"kubernetes.io/projected/af0b6cee-dd8f-48ce-9b2b-bbc163d66f2a-kube-api-access-sx82d\") pod \"nova-cell1-conductor-0\" (UID: \"af0b6cee-dd8f-48ce-9b2b-bbc163d66f2a\") " pod="openstack/nova-cell1-conductor-0" Nov 25 07:35:43 crc kubenswrapper[5043]: I1125 07:35:43.459933 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af0b6cee-dd8f-48ce-9b2b-bbc163d66f2a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"af0b6cee-dd8f-48ce-9b2b-bbc163d66f2a\") " pod="openstack/nova-cell1-conductor-0" Nov 25 07:35:43 crc kubenswrapper[5043]: I1125 07:35:43.561737 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx82d\" (UniqueName: 
\"kubernetes.io/projected/af0b6cee-dd8f-48ce-9b2b-bbc163d66f2a-kube-api-access-sx82d\") pod \"nova-cell1-conductor-0\" (UID: \"af0b6cee-dd8f-48ce-9b2b-bbc163d66f2a\") " pod="openstack/nova-cell1-conductor-0" Nov 25 07:35:43 crc kubenswrapper[5043]: I1125 07:35:43.561958 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af0b6cee-dd8f-48ce-9b2b-bbc163d66f2a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"af0b6cee-dd8f-48ce-9b2b-bbc163d66f2a\") " pod="openstack/nova-cell1-conductor-0" Nov 25 07:35:43 crc kubenswrapper[5043]: I1125 07:35:43.562102 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af0b6cee-dd8f-48ce-9b2b-bbc163d66f2a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"af0b6cee-dd8f-48ce-9b2b-bbc163d66f2a\") " pod="openstack/nova-cell1-conductor-0" Nov 25 07:35:43 crc kubenswrapper[5043]: I1125 07:35:43.566695 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af0b6cee-dd8f-48ce-9b2b-bbc163d66f2a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"af0b6cee-dd8f-48ce-9b2b-bbc163d66f2a\") " pod="openstack/nova-cell1-conductor-0" Nov 25 07:35:43 crc kubenswrapper[5043]: I1125 07:35:43.566792 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af0b6cee-dd8f-48ce-9b2b-bbc163d66f2a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"af0b6cee-dd8f-48ce-9b2b-bbc163d66f2a\") " pod="openstack/nova-cell1-conductor-0" Nov 25 07:35:43 crc kubenswrapper[5043]: I1125 07:35:43.581421 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx82d\" (UniqueName: \"kubernetes.io/projected/af0b6cee-dd8f-48ce-9b2b-bbc163d66f2a-kube-api-access-sx82d\") pod \"nova-cell1-conductor-0\" (UID: 
\"af0b6cee-dd8f-48ce-9b2b-bbc163d66f2a\") " pod="openstack/nova-cell1-conductor-0" Nov 25 07:35:43 crc kubenswrapper[5043]: I1125 07:35:43.655526 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 25 07:35:43 crc kubenswrapper[5043]: I1125 07:35:43.946436 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 07:35:44 crc kubenswrapper[5043]: I1125 07:35:44.302928 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"af0b6cee-dd8f-48ce-9b2b-bbc163d66f2a","Type":"ContainerStarted","Data":"ee4480227e6d045c5ca369a914c1884975c1a9e3bfc84b03c841501de61d28d7"} Nov 25 07:35:44 crc kubenswrapper[5043]: I1125 07:35:44.303298 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"af0b6cee-dd8f-48ce-9b2b-bbc163d66f2a","Type":"ContainerStarted","Data":"722c2aac3518c1b136edd329df64ae4212a9379e106f33b891d3c56639557f5d"} Nov 25 07:35:44 crc kubenswrapper[5043]: I1125 07:35:44.304707 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 25 07:35:44 crc kubenswrapper[5043]: I1125 07:35:44.308326 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6f863d4a-cbec-4cc9-b0e8-968685a1d72a","Type":"ContainerStarted","Data":"822e97f8b985c23cfd845805dff56cc6e65255c7dd660b12a761f110c914e012"} Nov 25 07:35:44 crc kubenswrapper[5043]: I1125 07:35:44.308366 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6f863d4a-cbec-4cc9-b0e8-968685a1d72a","Type":"ContainerStarted","Data":"4493280d0f8032617e96b938485345ddf242e2e4825000068214fe95f7376a2c"} Nov 25 07:35:44 crc kubenswrapper[5043]: I1125 07:35:44.320839 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" 
podStartSLOduration=1.320815212 podStartE2EDuration="1.320815212s" podCreationTimestamp="2025-11-25 07:35:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:35:44.319117296 +0000 UTC m=+1208.487313037" watchObservedRunningTime="2025-11-25 07:35:44.320815212 +0000 UTC m=+1208.489010953" Nov 25 07:35:44 crc kubenswrapper[5043]: I1125 07:35:44.346922 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.346895341 podStartE2EDuration="2.346895341s" podCreationTimestamp="2025-11-25 07:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:35:44.337931231 +0000 UTC m=+1208.506126952" watchObservedRunningTime="2025-11-25 07:35:44.346895341 +0000 UTC m=+1208.515091072" Nov 25 07:35:45 crc kubenswrapper[5043]: E1125 07:35:45.397980 5043 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f7f85a57597530b72b09031ebfe1db45e36b6a1b4439a3de028f05d71bd51647" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 07:35:45 crc kubenswrapper[5043]: E1125 07:35:45.400054 5043 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f7f85a57597530b72b09031ebfe1db45e36b6a1b4439a3de028f05d71bd51647" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 07:35:45 crc kubenswrapper[5043]: E1125 07:35:45.402745 5043 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="f7f85a57597530b72b09031ebfe1db45e36b6a1b4439a3de028f05d71bd51647" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 07:35:45 crc kubenswrapper[5043]: E1125 07:35:45.402777 5043 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9bea6d6f-4d39-459c-a4aa-30d61594b8d8" containerName="nova-scheduler-scheduler" Nov 25 07:35:46 crc kubenswrapper[5043]: I1125 07:35:46.339263 5043 generic.go:334] "Generic (PLEG): container finished" podID="9bea6d6f-4d39-459c-a4aa-30d61594b8d8" containerID="f7f85a57597530b72b09031ebfe1db45e36b6a1b4439a3de028f05d71bd51647" exitCode=0 Nov 25 07:35:46 crc kubenswrapper[5043]: I1125 07:35:46.340840 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9bea6d6f-4d39-459c-a4aa-30d61594b8d8","Type":"ContainerDied","Data":"f7f85a57597530b72b09031ebfe1db45e36b6a1b4439a3de028f05d71bd51647"} Nov 25 07:35:46 crc kubenswrapper[5043]: I1125 07:35:46.581724 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 07:35:46 crc kubenswrapper[5043]: I1125 07:35:46.729505 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bea6d6f-4d39-459c-a4aa-30d61594b8d8-config-data\") pod \"9bea6d6f-4d39-459c-a4aa-30d61594b8d8\" (UID: \"9bea6d6f-4d39-459c-a4aa-30d61594b8d8\") " Nov 25 07:35:46 crc kubenswrapper[5043]: I1125 07:35:46.729842 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j74nh\" (UniqueName: \"kubernetes.io/projected/9bea6d6f-4d39-459c-a4aa-30d61594b8d8-kube-api-access-j74nh\") pod \"9bea6d6f-4d39-459c-a4aa-30d61594b8d8\" (UID: \"9bea6d6f-4d39-459c-a4aa-30d61594b8d8\") " Nov 25 07:35:46 crc kubenswrapper[5043]: I1125 07:35:46.729882 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bea6d6f-4d39-459c-a4aa-30d61594b8d8-combined-ca-bundle\") pod \"9bea6d6f-4d39-459c-a4aa-30d61594b8d8\" (UID: \"9bea6d6f-4d39-459c-a4aa-30d61594b8d8\") " Nov 25 07:35:46 crc kubenswrapper[5043]: I1125 07:35:46.737381 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bea6d6f-4d39-459c-a4aa-30d61594b8d8-kube-api-access-j74nh" (OuterVolumeSpecName: "kube-api-access-j74nh") pod "9bea6d6f-4d39-459c-a4aa-30d61594b8d8" (UID: "9bea6d6f-4d39-459c-a4aa-30d61594b8d8"). InnerVolumeSpecName "kube-api-access-j74nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:35:46 crc kubenswrapper[5043]: I1125 07:35:46.765275 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bea6d6f-4d39-459c-a4aa-30d61594b8d8-config-data" (OuterVolumeSpecName: "config-data") pod "9bea6d6f-4d39-459c-a4aa-30d61594b8d8" (UID: "9bea6d6f-4d39-459c-a4aa-30d61594b8d8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:35:46 crc kubenswrapper[5043]: I1125 07:35:46.780784 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bea6d6f-4d39-459c-a4aa-30d61594b8d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9bea6d6f-4d39-459c-a4aa-30d61594b8d8" (UID: "9bea6d6f-4d39-459c-a4aa-30d61594b8d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:35:46 crc kubenswrapper[5043]: I1125 07:35:46.834172 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bea6d6f-4d39-459c-a4aa-30d61594b8d8-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:46 crc kubenswrapper[5043]: I1125 07:35:46.834277 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j74nh\" (UniqueName: \"kubernetes.io/projected/9bea6d6f-4d39-459c-a4aa-30d61594b8d8-kube-api-access-j74nh\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:46 crc kubenswrapper[5043]: I1125 07:35:46.834308 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bea6d6f-4d39-459c-a4aa-30d61594b8d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.054343 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.063257 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.146908 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91-config-data\") pod \"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91\" (UID: \"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91\") " Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.146957 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91-combined-ca-bundle\") pod \"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91\" (UID: \"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91\") " Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.147088 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xblbr\" (UniqueName: \"kubernetes.io/projected/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91-kube-api-access-xblbr\") pod \"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91\" (UID: \"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91\") " Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.147164 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91-logs\") pod \"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91\" (UID: \"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91\") " Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.149137 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91-logs" (OuterVolumeSpecName: "logs") pod "0a71871d-f5f1-4d02-9f3b-f5c2698e2b91" (UID: "0a71871d-f5f1-4d02-9f3b-f5c2698e2b91"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.152789 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91-kube-api-access-xblbr" (OuterVolumeSpecName: "kube-api-access-xblbr") pod "0a71871d-f5f1-4d02-9f3b-f5c2698e2b91" (UID: "0a71871d-f5f1-4d02-9f3b-f5c2698e2b91"). InnerVolumeSpecName "kube-api-access-xblbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.171632 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a71871d-f5f1-4d02-9f3b-f5c2698e2b91" (UID: "0a71871d-f5f1-4d02-9f3b-f5c2698e2b91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.182084 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91-config-data" (OuterVolumeSpecName: "config-data") pod "0a71871d-f5f1-4d02-9f3b-f5c2698e2b91" (UID: "0a71871d-f5f1-4d02-9f3b-f5c2698e2b91"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.249031 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.249080 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.249093 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xblbr\" (UniqueName: \"kubernetes.io/projected/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91-kube-api-access-xblbr\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.249102 5043 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91-logs\") on node \"crc\" DevicePath \"\"" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.348040 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9bea6d6f-4d39-459c-a4aa-30d61594b8d8","Type":"ContainerDied","Data":"2e4e40269f8fc9d0101a067912fbdb8ebf20f841c5ea51d7c0429bdc2a88fbdc"} Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.348114 5043 scope.go:117] "RemoveContainer" containerID="f7f85a57597530b72b09031ebfe1db45e36b6a1b4439a3de028f05d71bd51647" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.348141 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.353149 5043 generic.go:334] "Generic (PLEG): container finished" podID="0a71871d-f5f1-4d02-9f3b-f5c2698e2b91" containerID="eae5c7331d25c91d38f1de4b723e81ebd04f2cc8dfa60a3341a1c5b189182a63" exitCode=0 Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.353192 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91","Type":"ContainerDied","Data":"eae5c7331d25c91d38f1de4b723e81ebd04f2cc8dfa60a3341a1c5b189182a63"} Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.353205 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.353216 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a71871d-f5f1-4d02-9f3b-f5c2698e2b91","Type":"ContainerDied","Data":"20da11b7073cb108b08108ed7ed63767a38a7c42219cc151b6153e2c3be70d1a"} Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.386091 5043 scope.go:117] "RemoveContainer" containerID="eae5c7331d25c91d38f1de4b723e81ebd04f2cc8dfa60a3341a1c5b189182a63" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.406761 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.421232 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.428669 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.434128 5043 scope.go:117] "RemoveContainer" containerID="72f1fefb8497a44da1f16c0a827b34af7da36520d60d99d86fe0f7a3ab0003b6" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.439534 5043 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-scheduler-0"] Nov 25 07:35:47 crc kubenswrapper[5043]: E1125 07:35:47.440062 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a71871d-f5f1-4d02-9f3b-f5c2698e2b91" containerName="nova-api-api" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.440088 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a71871d-f5f1-4d02-9f3b-f5c2698e2b91" containerName="nova-api-api" Nov 25 07:35:47 crc kubenswrapper[5043]: E1125 07:35:47.440113 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a71871d-f5f1-4d02-9f3b-f5c2698e2b91" containerName="nova-api-log" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.440126 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a71871d-f5f1-4d02-9f3b-f5c2698e2b91" containerName="nova-api-log" Nov 25 07:35:47 crc kubenswrapper[5043]: E1125 07:35:47.440149 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bea6d6f-4d39-459c-a4aa-30d61594b8d8" containerName="nova-scheduler-scheduler" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.440157 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bea6d6f-4d39-459c-a4aa-30d61594b8d8" containerName="nova-scheduler-scheduler" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.440344 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bea6d6f-4d39-459c-a4aa-30d61594b8d8" containerName="nova-scheduler-scheduler" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.440367 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a71871d-f5f1-4d02-9f3b-f5c2698e2b91" containerName="nova-api-log" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.440399 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a71871d-f5f1-4d02-9f3b-f5c2698e2b91" containerName="nova-api-api" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.441061 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.447083 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.452819 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.459837 5043 scope.go:117] "RemoveContainer" containerID="eae5c7331d25c91d38f1de4b723e81ebd04f2cc8dfa60a3341a1c5b189182a63" Nov 25 07:35:47 crc kubenswrapper[5043]: E1125 07:35:47.461197 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eae5c7331d25c91d38f1de4b723e81ebd04f2cc8dfa60a3341a1c5b189182a63\": container with ID starting with eae5c7331d25c91d38f1de4b723e81ebd04f2cc8dfa60a3341a1c5b189182a63 not found: ID does not exist" containerID="eae5c7331d25c91d38f1de4b723e81ebd04f2cc8dfa60a3341a1c5b189182a63" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.461239 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eae5c7331d25c91d38f1de4b723e81ebd04f2cc8dfa60a3341a1c5b189182a63"} err="failed to get container status \"eae5c7331d25c91d38f1de4b723e81ebd04f2cc8dfa60a3341a1c5b189182a63\": rpc error: code = NotFound desc = could not find container \"eae5c7331d25c91d38f1de4b723e81ebd04f2cc8dfa60a3341a1c5b189182a63\": container with ID starting with eae5c7331d25c91d38f1de4b723e81ebd04f2cc8dfa60a3341a1c5b189182a63 not found: ID does not exist" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.461265 5043 scope.go:117] "RemoveContainer" containerID="72f1fefb8497a44da1f16c0a827b34af7da36520d60d99d86fe0f7a3ab0003b6" Nov 25 07:35:47 crc kubenswrapper[5043]: E1125 07:35:47.465807 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"72f1fefb8497a44da1f16c0a827b34af7da36520d60d99d86fe0f7a3ab0003b6\": container with ID starting with 72f1fefb8497a44da1f16c0a827b34af7da36520d60d99d86fe0f7a3ab0003b6 not found: ID does not exist" containerID="72f1fefb8497a44da1f16c0a827b34af7da36520d60d99d86fe0f7a3ab0003b6" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.465850 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72f1fefb8497a44da1f16c0a827b34af7da36520d60d99d86fe0f7a3ab0003b6"} err="failed to get container status \"72f1fefb8497a44da1f16c0a827b34af7da36520d60d99d86fe0f7a3ab0003b6\": rpc error: code = NotFound desc = could not find container \"72f1fefb8497a44da1f16c0a827b34af7da36520d60d99d86fe0f7a3ab0003b6\": container with ID starting with 72f1fefb8497a44da1f16c0a827b34af7da36520d60d99d86fe0f7a3ab0003b6 not found: ID does not exist" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.478677 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.487840 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.490164 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.495199 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.499498 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.554814 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb66a1de-9aa1-4585-9dd1-50632c46deed-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cb66a1de-9aa1-4585-9dd1-50632c46deed\") " pod="openstack/nova-api-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.554854 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aee8182-b123-4afd-aa11-7b987f0e1213-config-data\") pod \"nova-scheduler-0\" (UID: \"6aee8182-b123-4afd-aa11-7b987f0e1213\") " pod="openstack/nova-scheduler-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.554870 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb66a1de-9aa1-4585-9dd1-50632c46deed-logs\") pod \"nova-api-0\" (UID: \"cb66a1de-9aa1-4585-9dd1-50632c46deed\") " pod="openstack/nova-api-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.555006 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb66a1de-9aa1-4585-9dd1-50632c46deed-config-data\") pod \"nova-api-0\" (UID: \"cb66a1de-9aa1-4585-9dd1-50632c46deed\") " pod="openstack/nova-api-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.555076 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-djzfz\" (UniqueName: \"kubernetes.io/projected/cb66a1de-9aa1-4585-9dd1-50632c46deed-kube-api-access-djzfz\") pod \"nova-api-0\" (UID: \"cb66a1de-9aa1-4585-9dd1-50632c46deed\") " pod="openstack/nova-api-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.555213 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmscm\" (UniqueName: \"kubernetes.io/projected/6aee8182-b123-4afd-aa11-7b987f0e1213-kube-api-access-lmscm\") pod \"nova-scheduler-0\" (UID: \"6aee8182-b123-4afd-aa11-7b987f0e1213\") " pod="openstack/nova-scheduler-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.555283 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aee8182-b123-4afd-aa11-7b987f0e1213-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6aee8182-b123-4afd-aa11-7b987f0e1213\") " pod="openstack/nova-scheduler-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.657104 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djzfz\" (UniqueName: \"kubernetes.io/projected/cb66a1de-9aa1-4585-9dd1-50632c46deed-kube-api-access-djzfz\") pod \"nova-api-0\" (UID: \"cb66a1de-9aa1-4585-9dd1-50632c46deed\") " pod="openstack/nova-api-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.657191 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmscm\" (UniqueName: \"kubernetes.io/projected/6aee8182-b123-4afd-aa11-7b987f0e1213-kube-api-access-lmscm\") pod \"nova-scheduler-0\" (UID: \"6aee8182-b123-4afd-aa11-7b987f0e1213\") " pod="openstack/nova-scheduler-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.657229 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6aee8182-b123-4afd-aa11-7b987f0e1213-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6aee8182-b123-4afd-aa11-7b987f0e1213\") " pod="openstack/nova-scheduler-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.657353 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb66a1de-9aa1-4585-9dd1-50632c46deed-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cb66a1de-9aa1-4585-9dd1-50632c46deed\") " pod="openstack/nova-api-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.657383 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aee8182-b123-4afd-aa11-7b987f0e1213-config-data\") pod \"nova-scheduler-0\" (UID: \"6aee8182-b123-4afd-aa11-7b987f0e1213\") " pod="openstack/nova-scheduler-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.657412 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb66a1de-9aa1-4585-9dd1-50632c46deed-logs\") pod \"nova-api-0\" (UID: \"cb66a1de-9aa1-4585-9dd1-50632c46deed\") " pod="openstack/nova-api-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.657458 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb66a1de-9aa1-4585-9dd1-50632c46deed-config-data\") pod \"nova-api-0\" (UID: \"cb66a1de-9aa1-4585-9dd1-50632c46deed\") " pod="openstack/nova-api-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.661009 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aee8182-b123-4afd-aa11-7b987f0e1213-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6aee8182-b123-4afd-aa11-7b987f0e1213\") " pod="openstack/nova-scheduler-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 
07:35:47.661747 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aee8182-b123-4afd-aa11-7b987f0e1213-config-data\") pod \"nova-scheduler-0\" (UID: \"6aee8182-b123-4afd-aa11-7b987f0e1213\") " pod="openstack/nova-scheduler-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.663272 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb66a1de-9aa1-4585-9dd1-50632c46deed-logs\") pod \"nova-api-0\" (UID: \"cb66a1de-9aa1-4585-9dd1-50632c46deed\") " pod="openstack/nova-api-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.663591 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb66a1de-9aa1-4585-9dd1-50632c46deed-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cb66a1de-9aa1-4585-9dd1-50632c46deed\") " pod="openstack/nova-api-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.663782 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb66a1de-9aa1-4585-9dd1-50632c46deed-config-data\") pod \"nova-api-0\" (UID: \"cb66a1de-9aa1-4585-9dd1-50632c46deed\") " pod="openstack/nova-api-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.673363 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.673413 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.680643 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djzfz\" (UniqueName: \"kubernetes.io/projected/cb66a1de-9aa1-4585-9dd1-50632c46deed-kube-api-access-djzfz\") pod \"nova-api-0\" (UID: \"cb66a1de-9aa1-4585-9dd1-50632c46deed\") " pod="openstack/nova-api-0" 
Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.680723 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmscm\" (UniqueName: \"kubernetes.io/projected/6aee8182-b123-4afd-aa11-7b987f0e1213-kube-api-access-lmscm\") pod \"nova-scheduler-0\" (UID: \"6aee8182-b123-4afd-aa11-7b987f0e1213\") " pod="openstack/nova-scheduler-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.766647 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 07:35:47 crc kubenswrapper[5043]: I1125 07:35:47.803988 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:35:48 crc kubenswrapper[5043]: I1125 07:35:48.245036 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 07:35:48 crc kubenswrapper[5043]: I1125 07:35:48.328258 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:35:48 crc kubenswrapper[5043]: I1125 07:35:48.376027 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cb66a1de-9aa1-4585-9dd1-50632c46deed","Type":"ContainerStarted","Data":"be08e02eec4e79408f90fee7db5381701e9d247d65360032684be13f340e0bb5"} Nov 25 07:35:48 crc kubenswrapper[5043]: I1125 07:35:48.383301 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6aee8182-b123-4afd-aa11-7b987f0e1213","Type":"ContainerStarted","Data":"c67bae05faec8fcb5aa8707535dedf1c5460a9be5afa3b93dcf4ab48843b22e6"} Nov 25 07:35:48 crc kubenswrapper[5043]: I1125 07:35:48.975446 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a71871d-f5f1-4d02-9f3b-f5c2698e2b91" path="/var/lib/kubelet/pods/0a71871d-f5f1-4d02-9f3b-f5c2698e2b91/volumes" Nov 25 07:35:48 crc kubenswrapper[5043]: I1125 07:35:48.976573 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9bea6d6f-4d39-459c-a4aa-30d61594b8d8" path="/var/lib/kubelet/pods/9bea6d6f-4d39-459c-a4aa-30d61594b8d8/volumes" Nov 25 07:35:49 crc kubenswrapper[5043]: I1125 07:35:49.401476 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cb66a1de-9aa1-4585-9dd1-50632c46deed","Type":"ContainerStarted","Data":"09c16d9a311f90dc50642dac0426719f453e1f8a4261144242d7e6d890b0fc4f"} Nov 25 07:35:49 crc kubenswrapper[5043]: I1125 07:35:49.401847 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cb66a1de-9aa1-4585-9dd1-50632c46deed","Type":"ContainerStarted","Data":"13e7ec15c79bfadd2a5326c197f3a6b107e0a2291ffd92c2c700f9cac0a091ef"} Nov 25 07:35:49 crc kubenswrapper[5043]: I1125 07:35:49.405772 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6aee8182-b123-4afd-aa11-7b987f0e1213","Type":"ContainerStarted","Data":"574e2f61fe6c28efcc1629927dd0f5849572bffb8d25c5e7c313e5a3f70c5fdb"} Nov 25 07:35:49 crc kubenswrapper[5043]: I1125 07:35:49.426563 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.426544581 podStartE2EDuration="2.426544581s" podCreationTimestamp="2025-11-25 07:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:35:49.42241046 +0000 UTC m=+1213.590606181" watchObservedRunningTime="2025-11-25 07:35:49.426544581 +0000 UTC m=+1213.594740292" Nov 25 07:35:49 crc kubenswrapper[5043]: I1125 07:35:49.456402 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.456385291 podStartE2EDuration="2.456385291s" podCreationTimestamp="2025-11-25 07:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 
07:35:49.441372589 +0000 UTC m=+1213.609568330" watchObservedRunningTime="2025-11-25 07:35:49.456385291 +0000 UTC m=+1213.624581012" Nov 25 07:35:52 crc kubenswrapper[5043]: I1125 07:35:52.673129 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 07:35:52 crc kubenswrapper[5043]: I1125 07:35:52.673437 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 07:35:52 crc kubenswrapper[5043]: I1125 07:35:52.767060 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 07:35:53 crc kubenswrapper[5043]: I1125 07:35:53.687772 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6f863d4a-cbec-4cc9-b0e8-968685a1d72a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.183:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 07:35:53 crc kubenswrapper[5043]: I1125 07:35:53.687857 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6f863d4a-cbec-4cc9-b0e8-968685a1d72a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.183:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 07:35:53 crc kubenswrapper[5043]: I1125 07:35:53.700724 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 25 07:35:57 crc kubenswrapper[5043]: I1125 07:35:57.768020 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 07:35:57 crc kubenswrapper[5043]: I1125 07:35:57.804536 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 07:35:57 crc kubenswrapper[5043]: I1125 07:35:57.804670 5043 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 07:35:57 crc kubenswrapper[5043]: I1125 07:35:57.826262 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 07:35:58 crc kubenswrapper[5043]: I1125 07:35:58.546801 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 07:35:58 crc kubenswrapper[5043]: I1125 07:35:58.886864 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cb66a1de-9aa1-4585-9dd1-50632c46deed" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 07:35:58 crc kubenswrapper[5043]: I1125 07:35:58.886897 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cb66a1de-9aa1-4585-9dd1-50632c46deed" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 07:36:02 crc kubenswrapper[5043]: I1125 07:36:02.678740 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 07:36:02 crc kubenswrapper[5043]: I1125 07:36:02.684172 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 07:36:02 crc kubenswrapper[5043]: I1125 07:36:02.693119 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 07:36:03 crc kubenswrapper[5043]: I1125 07:36:03.583998 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 07:36:05 crc kubenswrapper[5043]: W1125 07:36:05.199512 5043 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaad7d2e2_f69f_414e_bc2f_c6614a766e0e.slice/crio-bc0d75142c8689ae62fa780a8e3cad32aa02837962b26985585d4a4040d86136 WatchSource:0}: Error finding container bc0d75142c8689ae62fa780a8e3cad32aa02837962b26985585d4a4040d86136: Status 404 returned error can't find the container with id bc0d75142c8689ae62fa780a8e3cad32aa02837962b26985585d4a4040d86136 Nov 25 07:36:05 crc kubenswrapper[5043]: E1125 07:36:05.199871 5043 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bea6d6f_4d39_459c_a4aa_30d61594b8d8.slice/crio-2e4e40269f8fc9d0101a067912fbdb8ebf20f841c5ea51d7c0429bdc2a88fbdc: Error finding container 2e4e40269f8fc9d0101a067912fbdb8ebf20f841c5ea51d7c0429bdc2a88fbdc: Status 404 returned error can't find the container with id 2e4e40269f8fc9d0101a067912fbdb8ebf20f841c5ea51d7c0429bdc2a88fbdc Nov 25 07:36:05 crc kubenswrapper[5043]: W1125 07:36:05.201457 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaad7d2e2_f69f_414e_bc2f_c6614a766e0e.slice/crio-2834df91857fc325f97548fa2c43f2e8285ad09fdb4ff0d0c7a47bffb0f844a9.scope WatchSource:0}: Error finding container 2834df91857fc325f97548fa2c43f2e8285ad09fdb4ff0d0c7a47bffb0f844a9: Status 404 returned error can't find the container with id 2834df91857fc325f97548fa2c43f2e8285ad09fdb4ff0d0c7a47bffb0f844a9 Nov 25 07:36:05 crc kubenswrapper[5043]: W1125 07:36:05.201824 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaad7d2e2_f69f_414e_bc2f_c6614a766e0e.slice/crio-4833b2a23b06a38907d3319c80be192fafe3be1e8c79c5482164050d5909b00d.scope WatchSource:0}: Error finding container 4833b2a23b06a38907d3319c80be192fafe3be1e8c79c5482164050d5909b00d: Status 404 returned error can't find the container with id 
4833b2a23b06a38907d3319c80be192fafe3be1e8c79c5482164050d5909b00d Nov 25 07:36:05 crc kubenswrapper[5043]: E1125 07:36:05.403313 5043 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a71871d_f5f1_4d02_9f3b_f5c2698e2b91.slice/crio-conmon-eae5c7331d25c91d38f1de4b723e81ebd04f2cc8dfa60a3341a1c5b189182a63.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bea6d6f_4d39_459c_a4aa_30d61594b8d8.slice/crio-f7f85a57597530b72b09031ebfe1db45e36b6a1b4439a3de028f05d71bd51647.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a71871d_f5f1_4d02_9f3b_f5c2698e2b91.slice/crio-eae5c7331d25c91d38f1de4b723e81ebd04f2cc8dfa60a3341a1c5b189182a63.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08c30c0d_59c0_4c7a_aeb7_8c9a1ce6e84b.slice/crio-83124eaa7a1c3d1c7c141cd68742a8c64a2c15c28e5255594beb9adafd761587.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08c30c0d_59c0_4c7a_aeb7_8c9a1ce6e84b.slice/crio-conmon-83124eaa7a1c3d1c7c141cd68742a8c64a2c15c28e5255594beb9adafd761587.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bea6d6f_4d39_459c_a4aa_30d61594b8d8.slice/crio-conmon-f7f85a57597530b72b09031ebfe1db45e36b6a1b4439a3de028f05d71bd51647.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bea6d6f_4d39_459c_a4aa_30d61594b8d8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a71871d_f5f1_4d02_9f3b_f5c2698e2b91.slice\": RecentStats: 
unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a71871d_f5f1_4d02_9f3b_f5c2698e2b91.slice/crio-20da11b7073cb108b08108ed7ed63767a38a7c42219cc151b6153e2c3be70d1a\": RecentStats: unable to find data in memory cache]" Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.589821 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.592466 5043 generic.go:334] "Generic (PLEG): container finished" podID="08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b" containerID="83124eaa7a1c3d1c7c141cd68742a8c64a2c15c28e5255594beb9adafd761587" exitCode=137 Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.592544 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.592542 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b","Type":"ContainerDied","Data":"83124eaa7a1c3d1c7c141cd68742a8c64a2c15c28e5255594beb9adafd761587"} Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.592647 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b","Type":"ContainerDied","Data":"2aaa4c9a5086498f1377bb6a8c50f41f2b4660d8b86dee93b8ca63f7552e6dbf"} Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.592675 5043 scope.go:117] "RemoveContainer" containerID="83124eaa7a1c3d1c7c141cd68742a8c64a2c15c28e5255594beb9adafd761587" Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.620623 5043 scope.go:117] "RemoveContainer" containerID="83124eaa7a1c3d1c7c141cd68742a8c64a2c15c28e5255594beb9adafd761587" Nov 25 07:36:05 crc kubenswrapper[5043]: E1125 07:36:05.621979 5043 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"83124eaa7a1c3d1c7c141cd68742a8c64a2c15c28e5255594beb9adafd761587\": container with ID starting with 83124eaa7a1c3d1c7c141cd68742a8c64a2c15c28e5255594beb9adafd761587 not found: ID does not exist" containerID="83124eaa7a1c3d1c7c141cd68742a8c64a2c15c28e5255594beb9adafd761587" Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.622004 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83124eaa7a1c3d1c7c141cd68742a8c64a2c15c28e5255594beb9adafd761587"} err="failed to get container status \"83124eaa7a1c3d1c7c141cd68742a8c64a2c15c28e5255594beb9adafd761587\": rpc error: code = NotFound desc = could not find container \"83124eaa7a1c3d1c7c141cd68742a8c64a2c15c28e5255594beb9adafd761587\": container with ID starting with 83124eaa7a1c3d1c7c141cd68742a8c64a2c15c28e5255594beb9adafd761587 not found: ID does not exist" Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.738447 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5mgz\" (UniqueName: \"kubernetes.io/projected/08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b-kube-api-access-r5mgz\") pod \"08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b\" (UID: \"08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b\") " Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.738580 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b-combined-ca-bundle\") pod \"08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b\" (UID: \"08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b\") " Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.738647 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b-config-data\") pod \"08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b\" (UID: 
\"08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b\") " Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.753730 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b-kube-api-access-r5mgz" (OuterVolumeSpecName: "kube-api-access-r5mgz") pod "08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b" (UID: "08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b"). InnerVolumeSpecName "kube-api-access-r5mgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.779001 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b" (UID: "08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.786581 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b-config-data" (OuterVolumeSpecName: "config-data") pod "08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b" (UID: "08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.841011 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.841048 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.841059 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5mgz\" (UniqueName: \"kubernetes.io/projected/08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b-kube-api-access-r5mgz\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.959442 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.973815 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.985145 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 07:36:05 crc kubenswrapper[5043]: E1125 07:36:05.985622 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.985646 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.985936 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 
07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.987009 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.990173 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.991286 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.991460 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 25 07:36:05 crc kubenswrapper[5043]: I1125 07:36:05.999192 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 07:36:06 crc kubenswrapper[5043]: I1125 07:36:06.044296 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/13b17d1b-5e8d-4b80-a15c-be8d4458cf6f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"13b17d1b-5e8d-4b80-a15c-be8d4458cf6f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:36:06 crc kubenswrapper[5043]: I1125 07:36:06.044409 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13b17d1b-5e8d-4b80-a15c-be8d4458cf6f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"13b17d1b-5e8d-4b80-a15c-be8d4458cf6f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:36:06 crc kubenswrapper[5043]: I1125 07:36:06.044444 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b17d1b-5e8d-4b80-a15c-be8d4458cf6f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"13b17d1b-5e8d-4b80-a15c-be8d4458cf6f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:36:06 crc kubenswrapper[5043]: I1125 07:36:06.044505 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm6sp\" (UniqueName: \"kubernetes.io/projected/13b17d1b-5e8d-4b80-a15c-be8d4458cf6f-kube-api-access-gm6sp\") pod \"nova-cell1-novncproxy-0\" (UID: \"13b17d1b-5e8d-4b80-a15c-be8d4458cf6f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:36:06 crc kubenswrapper[5043]: I1125 07:36:06.044640 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/13b17d1b-5e8d-4b80-a15c-be8d4458cf6f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"13b17d1b-5e8d-4b80-a15c-be8d4458cf6f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:36:06 crc kubenswrapper[5043]: I1125 07:36:06.145749 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm6sp\" (UniqueName: \"kubernetes.io/projected/13b17d1b-5e8d-4b80-a15c-be8d4458cf6f-kube-api-access-gm6sp\") pod \"nova-cell1-novncproxy-0\" (UID: \"13b17d1b-5e8d-4b80-a15c-be8d4458cf6f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:36:06 crc kubenswrapper[5043]: I1125 07:36:06.146289 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/13b17d1b-5e8d-4b80-a15c-be8d4458cf6f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"13b17d1b-5e8d-4b80-a15c-be8d4458cf6f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:36:06 crc kubenswrapper[5043]: I1125 07:36:06.146414 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/13b17d1b-5e8d-4b80-a15c-be8d4458cf6f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" 
(UID: \"13b17d1b-5e8d-4b80-a15c-be8d4458cf6f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:36:06 crc kubenswrapper[5043]: I1125 07:36:06.146518 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13b17d1b-5e8d-4b80-a15c-be8d4458cf6f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"13b17d1b-5e8d-4b80-a15c-be8d4458cf6f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:36:06 crc kubenswrapper[5043]: I1125 07:36:06.146597 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b17d1b-5e8d-4b80-a15c-be8d4458cf6f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"13b17d1b-5e8d-4b80-a15c-be8d4458cf6f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:36:06 crc kubenswrapper[5043]: I1125 07:36:06.151123 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/13b17d1b-5e8d-4b80-a15c-be8d4458cf6f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"13b17d1b-5e8d-4b80-a15c-be8d4458cf6f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:36:06 crc kubenswrapper[5043]: I1125 07:36:06.151261 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/13b17d1b-5e8d-4b80-a15c-be8d4458cf6f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"13b17d1b-5e8d-4b80-a15c-be8d4458cf6f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:36:06 crc kubenswrapper[5043]: I1125 07:36:06.152057 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b17d1b-5e8d-4b80-a15c-be8d4458cf6f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"13b17d1b-5e8d-4b80-a15c-be8d4458cf6f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 
07:36:06 crc kubenswrapper[5043]: I1125 07:36:06.152833 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13b17d1b-5e8d-4b80-a15c-be8d4458cf6f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"13b17d1b-5e8d-4b80-a15c-be8d4458cf6f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:36:06 crc kubenswrapper[5043]: I1125 07:36:06.162000 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm6sp\" (UniqueName: \"kubernetes.io/projected/13b17d1b-5e8d-4b80-a15c-be8d4458cf6f-kube-api-access-gm6sp\") pod \"nova-cell1-novncproxy-0\" (UID: \"13b17d1b-5e8d-4b80-a15c-be8d4458cf6f\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:36:06 crc kubenswrapper[5043]: I1125 07:36:06.302869 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:36:06 crc kubenswrapper[5043]: I1125 07:36:06.814559 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 07:36:06 crc kubenswrapper[5043]: I1125 07:36:06.979434 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b" path="/var/lib/kubelet/pods/08c30c0d-59c0-4c7a-aeb7-8c9a1ce6e84b/volumes" Nov 25 07:36:07 crc kubenswrapper[5043]: I1125 07:36:07.616195 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"13b17d1b-5e8d-4b80-a15c-be8d4458cf6f","Type":"ContainerStarted","Data":"1f7c60d7987612cff4633d209f9bcaa9506323e9eb76d326db1e3cf498036df3"} Nov 25 07:36:07 crc kubenswrapper[5043]: I1125 07:36:07.616497 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"13b17d1b-5e8d-4b80-a15c-be8d4458cf6f","Type":"ContainerStarted","Data":"7f3688d504d12f5f4a4933b4c55b007d82dcd69cc824672ba2a3dcff4a589c92"} Nov 25 07:36:07 crc kubenswrapper[5043]: I1125 
07:36:07.642675 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.64264384 podStartE2EDuration="2.64264384s" podCreationTimestamp="2025-11-25 07:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:36:07.635664282 +0000 UTC m=+1231.803860043" watchObservedRunningTime="2025-11-25 07:36:07.64264384 +0000 UTC m=+1231.810839611" Nov 25 07:36:07 crc kubenswrapper[5043]: I1125 07:36:07.810353 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 07:36:07 crc kubenswrapper[5043]: I1125 07:36:07.812127 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 07:36:07 crc kubenswrapper[5043]: I1125 07:36:07.815615 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 07:36:07 crc kubenswrapper[5043]: I1125 07:36:07.818194 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 07:36:08 crc kubenswrapper[5043]: I1125 07:36:08.624805 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 07:36:08 crc kubenswrapper[5043]: I1125 07:36:08.632780 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 07:36:08 crc kubenswrapper[5043]: I1125 07:36:08.822784 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f95c456cf-t8252"] Nov 25 07:36:08 crc kubenswrapper[5043]: I1125 07:36:08.824298 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f95c456cf-t8252" Nov 25 07:36:08 crc kubenswrapper[5043]: I1125 07:36:08.839366 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f95c456cf-t8252"] Nov 25 07:36:09 crc kubenswrapper[5043]: I1125 07:36:09.006527 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac0e4200-6d85-4113-b59d-24a25fb39340-dns-svc\") pod \"dnsmasq-dns-f95c456cf-t8252\" (UID: \"ac0e4200-6d85-4113-b59d-24a25fb39340\") " pod="openstack/dnsmasq-dns-f95c456cf-t8252" Nov 25 07:36:09 crc kubenswrapper[5043]: I1125 07:36:09.006642 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0e4200-6d85-4113-b59d-24a25fb39340-config\") pod \"dnsmasq-dns-f95c456cf-t8252\" (UID: \"ac0e4200-6d85-4113-b59d-24a25fb39340\") " pod="openstack/dnsmasq-dns-f95c456cf-t8252" Nov 25 07:36:09 crc kubenswrapper[5043]: I1125 07:36:09.006753 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac0e4200-6d85-4113-b59d-24a25fb39340-ovsdbserver-sb\") pod \"dnsmasq-dns-f95c456cf-t8252\" (UID: \"ac0e4200-6d85-4113-b59d-24a25fb39340\") " pod="openstack/dnsmasq-dns-f95c456cf-t8252" Nov 25 07:36:09 crc kubenswrapper[5043]: I1125 07:36:09.006794 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac0e4200-6d85-4113-b59d-24a25fb39340-ovsdbserver-nb\") pod \"dnsmasq-dns-f95c456cf-t8252\" (UID: \"ac0e4200-6d85-4113-b59d-24a25fb39340\") " pod="openstack/dnsmasq-dns-f95c456cf-t8252" Nov 25 07:36:09 crc kubenswrapper[5043]: I1125 07:36:09.006863 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-62mbg\" (UniqueName: \"kubernetes.io/projected/ac0e4200-6d85-4113-b59d-24a25fb39340-kube-api-access-62mbg\") pod \"dnsmasq-dns-f95c456cf-t8252\" (UID: \"ac0e4200-6d85-4113-b59d-24a25fb39340\") " pod="openstack/dnsmasq-dns-f95c456cf-t8252" Nov 25 07:36:09 crc kubenswrapper[5043]: I1125 07:36:09.108104 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62mbg\" (UniqueName: \"kubernetes.io/projected/ac0e4200-6d85-4113-b59d-24a25fb39340-kube-api-access-62mbg\") pod \"dnsmasq-dns-f95c456cf-t8252\" (UID: \"ac0e4200-6d85-4113-b59d-24a25fb39340\") " pod="openstack/dnsmasq-dns-f95c456cf-t8252" Nov 25 07:36:09 crc kubenswrapper[5043]: I1125 07:36:09.108224 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac0e4200-6d85-4113-b59d-24a25fb39340-dns-svc\") pod \"dnsmasq-dns-f95c456cf-t8252\" (UID: \"ac0e4200-6d85-4113-b59d-24a25fb39340\") " pod="openstack/dnsmasq-dns-f95c456cf-t8252" Nov 25 07:36:09 crc kubenswrapper[5043]: I1125 07:36:09.108291 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0e4200-6d85-4113-b59d-24a25fb39340-config\") pod \"dnsmasq-dns-f95c456cf-t8252\" (UID: \"ac0e4200-6d85-4113-b59d-24a25fb39340\") " pod="openstack/dnsmasq-dns-f95c456cf-t8252" Nov 25 07:36:09 crc kubenswrapper[5043]: I1125 07:36:09.108407 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac0e4200-6d85-4113-b59d-24a25fb39340-ovsdbserver-sb\") pod \"dnsmasq-dns-f95c456cf-t8252\" (UID: \"ac0e4200-6d85-4113-b59d-24a25fb39340\") " pod="openstack/dnsmasq-dns-f95c456cf-t8252" Nov 25 07:36:09 crc kubenswrapper[5043]: I1125 07:36:09.108452 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/ac0e4200-6d85-4113-b59d-24a25fb39340-ovsdbserver-nb\") pod \"dnsmasq-dns-f95c456cf-t8252\" (UID: \"ac0e4200-6d85-4113-b59d-24a25fb39340\") " pod="openstack/dnsmasq-dns-f95c456cf-t8252" Nov 25 07:36:09 crc kubenswrapper[5043]: I1125 07:36:09.109535 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac0e4200-6d85-4113-b59d-24a25fb39340-ovsdbserver-nb\") pod \"dnsmasq-dns-f95c456cf-t8252\" (UID: \"ac0e4200-6d85-4113-b59d-24a25fb39340\") " pod="openstack/dnsmasq-dns-f95c456cf-t8252" Nov 25 07:36:09 crc kubenswrapper[5043]: I1125 07:36:09.109815 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0e4200-6d85-4113-b59d-24a25fb39340-config\") pod \"dnsmasq-dns-f95c456cf-t8252\" (UID: \"ac0e4200-6d85-4113-b59d-24a25fb39340\") " pod="openstack/dnsmasq-dns-f95c456cf-t8252" Nov 25 07:36:09 crc kubenswrapper[5043]: I1125 07:36:09.110107 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac0e4200-6d85-4113-b59d-24a25fb39340-dns-svc\") pod \"dnsmasq-dns-f95c456cf-t8252\" (UID: \"ac0e4200-6d85-4113-b59d-24a25fb39340\") " pod="openstack/dnsmasq-dns-f95c456cf-t8252" Nov 25 07:36:09 crc kubenswrapper[5043]: I1125 07:36:09.110404 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac0e4200-6d85-4113-b59d-24a25fb39340-ovsdbserver-sb\") pod \"dnsmasq-dns-f95c456cf-t8252\" (UID: \"ac0e4200-6d85-4113-b59d-24a25fb39340\") " pod="openstack/dnsmasq-dns-f95c456cf-t8252" Nov 25 07:36:09 crc kubenswrapper[5043]: I1125 07:36:09.129374 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62mbg\" (UniqueName: \"kubernetes.io/projected/ac0e4200-6d85-4113-b59d-24a25fb39340-kube-api-access-62mbg\") pod \"dnsmasq-dns-f95c456cf-t8252\" 
(UID: \"ac0e4200-6d85-4113-b59d-24a25fb39340\") " pod="openstack/dnsmasq-dns-f95c456cf-t8252" Nov 25 07:36:09 crc kubenswrapper[5043]: I1125 07:36:09.142026 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f95c456cf-t8252" Nov 25 07:36:09 crc kubenswrapper[5043]: I1125 07:36:09.592662 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f95c456cf-t8252"] Nov 25 07:36:09 crc kubenswrapper[5043]: I1125 07:36:09.639084 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f95c456cf-t8252" event={"ID":"ac0e4200-6d85-4113-b59d-24a25fb39340","Type":"ContainerStarted","Data":"1974239cf3b7358c0630fc2bbd350c896cfe4b07a9ae6fdfeb48bba3d4679f23"} Nov 25 07:36:10 crc kubenswrapper[5043]: I1125 07:36:10.258292 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:36:10 crc kubenswrapper[5043]: I1125 07:36:10.258670 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4ef6d7c4-cb2c-412b-9d00-72ab077f898e" containerName="ceilometer-central-agent" containerID="cri-o://4b4cdbf89b518bdaedec7a32ceada9bcbadfd0c52433fb9f08456bd687911a40" gracePeriod=30 Nov 25 07:36:10 crc kubenswrapper[5043]: I1125 07:36:10.258794 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4ef6d7c4-cb2c-412b-9d00-72ab077f898e" containerName="proxy-httpd" containerID="cri-o://a1894c31c03d69581f365fe9c9b0378aaa1c7b546f6e55080cbab7caf372258b" gracePeriod=30 Nov 25 07:36:10 crc kubenswrapper[5043]: I1125 07:36:10.258831 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4ef6d7c4-cb2c-412b-9d00-72ab077f898e" containerName="sg-core" containerID="cri-o://a0ceaf52c34dbc7858ba1f4bbd3b2ccc713d9fdd3710ac8f51d73e251b8da6a2" gracePeriod=30 Nov 25 07:36:10 crc kubenswrapper[5043]: I1125 
07:36:10.258864 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4ef6d7c4-cb2c-412b-9d00-72ab077f898e" containerName="ceilometer-notification-agent" containerID="cri-o://729de5216f84026dbe8d470957c183a2866b22ed988e2f5f10462d12f7fc8771" gracePeriod=30 Nov 25 07:36:10 crc kubenswrapper[5043]: I1125 07:36:10.650710 5043 generic.go:334] "Generic (PLEG): container finished" podID="4ef6d7c4-cb2c-412b-9d00-72ab077f898e" containerID="a1894c31c03d69581f365fe9c9b0378aaa1c7b546f6e55080cbab7caf372258b" exitCode=0 Nov 25 07:36:10 crc kubenswrapper[5043]: I1125 07:36:10.650761 5043 generic.go:334] "Generic (PLEG): container finished" podID="4ef6d7c4-cb2c-412b-9d00-72ab077f898e" containerID="a0ceaf52c34dbc7858ba1f4bbd3b2ccc713d9fdd3710ac8f51d73e251b8da6a2" exitCode=2 Nov 25 07:36:10 crc kubenswrapper[5043]: I1125 07:36:10.650818 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ef6d7c4-cb2c-412b-9d00-72ab077f898e","Type":"ContainerDied","Data":"a1894c31c03d69581f365fe9c9b0378aaa1c7b546f6e55080cbab7caf372258b"} Nov 25 07:36:10 crc kubenswrapper[5043]: I1125 07:36:10.650878 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ef6d7c4-cb2c-412b-9d00-72ab077f898e","Type":"ContainerDied","Data":"a0ceaf52c34dbc7858ba1f4bbd3b2ccc713d9fdd3710ac8f51d73e251b8da6a2"} Nov 25 07:36:10 crc kubenswrapper[5043]: I1125 07:36:10.653577 5043 generic.go:334] "Generic (PLEG): container finished" podID="ac0e4200-6d85-4113-b59d-24a25fb39340" containerID="fb9d13a55233be813f959467f1e38ec06f8ce9a61950e853205103db8c47c718" exitCode=0 Nov 25 07:36:10 crc kubenswrapper[5043]: I1125 07:36:10.654136 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f95c456cf-t8252" event={"ID":"ac0e4200-6d85-4113-b59d-24a25fb39340","Type":"ContainerDied","Data":"fb9d13a55233be813f959467f1e38ec06f8ce9a61950e853205103db8c47c718"} Nov 25 
07:36:11 crc kubenswrapper[5043]: I1125 07:36:11.303827 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:36:11 crc kubenswrapper[5043]: I1125 07:36:11.378846 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:36:11 crc kubenswrapper[5043]: I1125 07:36:11.665399 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f95c456cf-t8252" event={"ID":"ac0e4200-6d85-4113-b59d-24a25fb39340","Type":"ContainerStarted","Data":"eca358f1661ec4e0fd8e099ecf980430ebfe1ce60a9a7f056ba054739d6aa7dc"} Nov 25 07:36:11 crc kubenswrapper[5043]: I1125 07:36:11.665539 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f95c456cf-t8252" Nov 25 07:36:11 crc kubenswrapper[5043]: I1125 07:36:11.667659 5043 generic.go:334] "Generic (PLEG): container finished" podID="4ef6d7c4-cb2c-412b-9d00-72ab077f898e" containerID="4b4cdbf89b518bdaedec7a32ceada9bcbadfd0c52433fb9f08456bd687911a40" exitCode=0 Nov 25 07:36:11 crc kubenswrapper[5043]: I1125 07:36:11.667711 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ef6d7c4-cb2c-412b-9d00-72ab077f898e","Type":"ContainerDied","Data":"4b4cdbf89b518bdaedec7a32ceada9bcbadfd0c52433fb9f08456bd687911a40"} Nov 25 07:36:11 crc kubenswrapper[5043]: I1125 07:36:11.667844 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cb66a1de-9aa1-4585-9dd1-50632c46deed" containerName="nova-api-log" containerID="cri-o://13e7ec15c79bfadd2a5326c197f3a6b107e0a2291ffd92c2c700f9cac0a091ef" gracePeriod=30 Nov 25 07:36:11 crc kubenswrapper[5043]: I1125 07:36:11.667905 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cb66a1de-9aa1-4585-9dd1-50632c46deed" containerName="nova-api-api" 
containerID="cri-o://09c16d9a311f90dc50642dac0426719f453e1f8a4261144242d7e6d890b0fc4f" gracePeriod=30 Nov 25 07:36:11 crc kubenswrapper[5043]: I1125 07:36:11.696982 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f95c456cf-t8252" podStartSLOduration=3.696963727 podStartE2EDuration="3.696963727s" podCreationTimestamp="2025-11-25 07:36:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:36:11.695085336 +0000 UTC m=+1235.863281067" watchObservedRunningTime="2025-11-25 07:36:11.696963727 +0000 UTC m=+1235.865159448" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.087897 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.159868 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhxkj\" (UniqueName: \"kubernetes.io/projected/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-kube-api-access-rhxkj\") pod \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.160013 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-run-httpd\") pod \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.160113 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-config-data\") pod \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.160144 5043 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-log-httpd\") pod \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.160179 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-scripts\") pod \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.160207 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-combined-ca-bundle\") pod \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.160235 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-ceilometer-tls-certs\") pod \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.160260 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-sg-core-conf-yaml\") pod \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\" (UID: \"4ef6d7c4-cb2c-412b-9d00-72ab077f898e\") " Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.160798 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4ef6d7c4-cb2c-412b-9d00-72ab077f898e" (UID: 
"4ef6d7c4-cb2c-412b-9d00-72ab077f898e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.160945 5043 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.164240 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4ef6d7c4-cb2c-412b-9d00-72ab077f898e" (UID: "4ef6d7c4-cb2c-412b-9d00-72ab077f898e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.169024 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-kube-api-access-rhxkj" (OuterVolumeSpecName: "kube-api-access-rhxkj") pod "4ef6d7c4-cb2c-412b-9d00-72ab077f898e" (UID: "4ef6d7c4-cb2c-412b-9d00-72ab077f898e"). InnerVolumeSpecName "kube-api-access-rhxkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.171966 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-scripts" (OuterVolumeSpecName: "scripts") pod "4ef6d7c4-cb2c-412b-9d00-72ab077f898e" (UID: "4ef6d7c4-cb2c-412b-9d00-72ab077f898e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.195953 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4ef6d7c4-cb2c-412b-9d00-72ab077f898e" (UID: "4ef6d7c4-cb2c-412b-9d00-72ab077f898e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.263132 5043 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.263163 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhxkj\" (UniqueName: \"kubernetes.io/projected/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-kube-api-access-rhxkj\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.263174 5043 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.263188 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.263320 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ef6d7c4-cb2c-412b-9d00-72ab077f898e" (UID: "4ef6d7c4-cb2c-412b-9d00-72ab077f898e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.278510 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4ef6d7c4-cb2c-412b-9d00-72ab077f898e" (UID: "4ef6d7c4-cb2c-412b-9d00-72ab077f898e"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.286475 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-config-data" (OuterVolumeSpecName: "config-data") pod "4ef6d7c4-cb2c-412b-9d00-72ab077f898e" (UID: "4ef6d7c4-cb2c-412b-9d00-72ab077f898e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.365059 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.365098 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.365117 5043 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ef6d7c4-cb2c-412b-9d00-72ab077f898e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.679757 5043 generic.go:334] "Generic (PLEG): container finished" podID="cb66a1de-9aa1-4585-9dd1-50632c46deed" containerID="13e7ec15c79bfadd2a5326c197f3a6b107e0a2291ffd92c2c700f9cac0a091ef" 
exitCode=143 Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.679817 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cb66a1de-9aa1-4585-9dd1-50632c46deed","Type":"ContainerDied","Data":"13e7ec15c79bfadd2a5326c197f3a6b107e0a2291ffd92c2c700f9cac0a091ef"} Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.683461 5043 generic.go:334] "Generic (PLEG): container finished" podID="4ef6d7c4-cb2c-412b-9d00-72ab077f898e" containerID="729de5216f84026dbe8d470957c183a2866b22ed988e2f5f10462d12f7fc8771" exitCode=0 Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.684810 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.685641 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ef6d7c4-cb2c-412b-9d00-72ab077f898e","Type":"ContainerDied","Data":"729de5216f84026dbe8d470957c183a2866b22ed988e2f5f10462d12f7fc8771"} Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.685686 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4ef6d7c4-cb2c-412b-9d00-72ab077f898e","Type":"ContainerDied","Data":"13905bb78a3fbc0155440ff499f09565b08e83123724d1357bf43c3e57506965"} Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.685707 5043 scope.go:117] "RemoveContainer" containerID="a1894c31c03d69581f365fe9c9b0378aaa1c7b546f6e55080cbab7caf372258b" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.731364 5043 scope.go:117] "RemoveContainer" containerID="a0ceaf52c34dbc7858ba1f4bbd3b2ccc713d9fdd3710ac8f51d73e251b8da6a2" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.749677 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.760295 5043 scope.go:117] "RemoveContainer" 
containerID="729de5216f84026dbe8d470957c183a2866b22ed988e2f5f10462d12f7fc8771" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.765237 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.782657 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:36:12 crc kubenswrapper[5043]: E1125 07:36:12.783240 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef6d7c4-cb2c-412b-9d00-72ab077f898e" containerName="sg-core" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.783330 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef6d7c4-cb2c-412b-9d00-72ab077f898e" containerName="sg-core" Nov 25 07:36:12 crc kubenswrapper[5043]: E1125 07:36:12.783408 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef6d7c4-cb2c-412b-9d00-72ab077f898e" containerName="proxy-httpd" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.783459 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef6d7c4-cb2c-412b-9d00-72ab077f898e" containerName="proxy-httpd" Nov 25 07:36:12 crc kubenswrapper[5043]: E1125 07:36:12.783524 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef6d7c4-cb2c-412b-9d00-72ab077f898e" containerName="ceilometer-notification-agent" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.783572 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef6d7c4-cb2c-412b-9d00-72ab077f898e" containerName="ceilometer-notification-agent" Nov 25 07:36:12 crc kubenswrapper[5043]: E1125 07:36:12.783659 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef6d7c4-cb2c-412b-9d00-72ab077f898e" containerName="ceilometer-central-agent" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.783719 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef6d7c4-cb2c-412b-9d00-72ab077f898e" containerName="ceilometer-central-agent" Nov 25 07:36:12 
crc kubenswrapper[5043]: I1125 07:36:12.784002 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef6d7c4-cb2c-412b-9d00-72ab077f898e" containerName="sg-core" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.784076 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef6d7c4-cb2c-412b-9d00-72ab077f898e" containerName="proxy-httpd" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.784133 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef6d7c4-cb2c-412b-9d00-72ab077f898e" containerName="ceilometer-notification-agent" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.784185 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef6d7c4-cb2c-412b-9d00-72ab077f898e" containerName="ceilometer-central-agent" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.784080 5043 scope.go:117] "RemoveContainer" containerID="4b4cdbf89b518bdaedec7a32ceada9bcbadfd0c52433fb9f08456bd687911a40" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.785872 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.788359 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.788702 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.788818 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.796212 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.812553 5043 scope.go:117] "RemoveContainer" containerID="a1894c31c03d69581f365fe9c9b0378aaa1c7b546f6e55080cbab7caf372258b" Nov 25 07:36:12 crc kubenswrapper[5043]: E1125 07:36:12.813021 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1894c31c03d69581f365fe9c9b0378aaa1c7b546f6e55080cbab7caf372258b\": container with ID starting with a1894c31c03d69581f365fe9c9b0378aaa1c7b546f6e55080cbab7caf372258b not found: ID does not exist" containerID="a1894c31c03d69581f365fe9c9b0378aaa1c7b546f6e55080cbab7caf372258b" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.813055 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1894c31c03d69581f365fe9c9b0378aaa1c7b546f6e55080cbab7caf372258b"} err="failed to get container status \"a1894c31c03d69581f365fe9c9b0378aaa1c7b546f6e55080cbab7caf372258b\": rpc error: code = NotFound desc = could not find container \"a1894c31c03d69581f365fe9c9b0378aaa1c7b546f6e55080cbab7caf372258b\": container with ID starting with a1894c31c03d69581f365fe9c9b0378aaa1c7b546f6e55080cbab7caf372258b not found: ID does not exist" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 
07:36:12.813077 5043 scope.go:117] "RemoveContainer" containerID="a0ceaf52c34dbc7858ba1f4bbd3b2ccc713d9fdd3710ac8f51d73e251b8da6a2" Nov 25 07:36:12 crc kubenswrapper[5043]: E1125 07:36:12.813475 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0ceaf52c34dbc7858ba1f4bbd3b2ccc713d9fdd3710ac8f51d73e251b8da6a2\": container with ID starting with a0ceaf52c34dbc7858ba1f4bbd3b2ccc713d9fdd3710ac8f51d73e251b8da6a2 not found: ID does not exist" containerID="a0ceaf52c34dbc7858ba1f4bbd3b2ccc713d9fdd3710ac8f51d73e251b8da6a2" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.813511 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0ceaf52c34dbc7858ba1f4bbd3b2ccc713d9fdd3710ac8f51d73e251b8da6a2"} err="failed to get container status \"a0ceaf52c34dbc7858ba1f4bbd3b2ccc713d9fdd3710ac8f51d73e251b8da6a2\": rpc error: code = NotFound desc = could not find container \"a0ceaf52c34dbc7858ba1f4bbd3b2ccc713d9fdd3710ac8f51d73e251b8da6a2\": container with ID starting with a0ceaf52c34dbc7858ba1f4bbd3b2ccc713d9fdd3710ac8f51d73e251b8da6a2 not found: ID does not exist" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.813530 5043 scope.go:117] "RemoveContainer" containerID="729de5216f84026dbe8d470957c183a2866b22ed988e2f5f10462d12f7fc8771" Nov 25 07:36:12 crc kubenswrapper[5043]: E1125 07:36:12.813795 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"729de5216f84026dbe8d470957c183a2866b22ed988e2f5f10462d12f7fc8771\": container with ID starting with 729de5216f84026dbe8d470957c183a2866b22ed988e2f5f10462d12f7fc8771 not found: ID does not exist" containerID="729de5216f84026dbe8d470957c183a2866b22ed988e2f5f10462d12f7fc8771" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.813821 5043 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"729de5216f84026dbe8d470957c183a2866b22ed988e2f5f10462d12f7fc8771"} err="failed to get container status \"729de5216f84026dbe8d470957c183a2866b22ed988e2f5f10462d12f7fc8771\": rpc error: code = NotFound desc = could not find container \"729de5216f84026dbe8d470957c183a2866b22ed988e2f5f10462d12f7fc8771\": container with ID starting with 729de5216f84026dbe8d470957c183a2866b22ed988e2f5f10462d12f7fc8771 not found: ID does not exist" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.813843 5043 scope.go:117] "RemoveContainer" containerID="4b4cdbf89b518bdaedec7a32ceada9bcbadfd0c52433fb9f08456bd687911a40" Nov 25 07:36:12 crc kubenswrapper[5043]: E1125 07:36:12.814027 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b4cdbf89b518bdaedec7a32ceada9bcbadfd0c52433fb9f08456bd687911a40\": container with ID starting with 4b4cdbf89b518bdaedec7a32ceada9bcbadfd0c52433fb9f08456bd687911a40 not found: ID does not exist" containerID="4b4cdbf89b518bdaedec7a32ceada9bcbadfd0c52433fb9f08456bd687911a40" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.814048 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b4cdbf89b518bdaedec7a32ceada9bcbadfd0c52433fb9f08456bd687911a40"} err="failed to get container status \"4b4cdbf89b518bdaedec7a32ceada9bcbadfd0c52433fb9f08456bd687911a40\": rpc error: code = NotFound desc = could not find container \"4b4cdbf89b518bdaedec7a32ceada9bcbadfd0c52433fb9f08456bd687911a40\": container with ID starting with 4b4cdbf89b518bdaedec7a32ceada9bcbadfd0c52433fb9f08456bd687911a40 not found: ID does not exist" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.878733 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-run-httpd\") pod \"ceilometer-0\" (UID: 
\"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.878774 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-scripts\") pod \"ceilometer-0\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.878804 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-config-data\") pod \"ceilometer-0\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.878820 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.878843 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.878862 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 
07:36:12.879041 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl5tr\" (UniqueName: \"kubernetes.io/projected/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-kube-api-access-xl5tr\") pod \"ceilometer-0\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.879374 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-log-httpd\") pod \"ceilometer-0\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.974167 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ef6d7c4-cb2c-412b-9d00-72ab077f898e" path="/var/lib/kubelet/pods/4ef6d7c4-cb2c-412b-9d00-72ab077f898e/volumes" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.980378 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-run-httpd\") pod \"ceilometer-0\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.980414 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-scripts\") pod \"ceilometer-0\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.980444 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-config-data\") pod \"ceilometer-0\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " pod="openstack/ceilometer-0" Nov 
25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.980462 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.980475 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.980493 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.980514 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl5tr\" (UniqueName: \"kubernetes.io/projected/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-kube-api-access-xl5tr\") pod \"ceilometer-0\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.980580 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-log-httpd\") pod \"ceilometer-0\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.981172 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-log-httpd\") pod \"ceilometer-0\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.981449 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-run-httpd\") pod \"ceilometer-0\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.985186 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-config-data\") pod \"ceilometer-0\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.987497 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.989396 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.991327 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-scripts\") pod \"ceilometer-0\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.993716 5043 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " pod="openstack/ceilometer-0" Nov 25 07:36:12 crc kubenswrapper[5043]: I1125 07:36:12.998463 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl5tr\" (UniqueName: \"kubernetes.io/projected/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-kube-api-access-xl5tr\") pod \"ceilometer-0\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " pod="openstack/ceilometer-0" Nov 25 07:36:13 crc kubenswrapper[5043]: I1125 07:36:13.104747 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:36:13 crc kubenswrapper[5043]: I1125 07:36:13.600615 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:36:13 crc kubenswrapper[5043]: W1125 07:36:13.612144 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb2b4f4c_68b2_4f0d_9fd2_429148fa7bfb.slice/crio-3054ac41fa4920f7f671b20e3216cb00fa50e9765a2f76bdfbc19ac23c5e3dfe WatchSource:0}: Error finding container 3054ac41fa4920f7f671b20e3216cb00fa50e9765a2f76bdfbc19ac23c5e3dfe: Status 404 returned error can't find the container with id 3054ac41fa4920f7f671b20e3216cb00fa50e9765a2f76bdfbc19ac23c5e3dfe Nov 25 07:36:13 crc kubenswrapper[5043]: I1125 07:36:13.621364 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:36:13 crc kubenswrapper[5043]: I1125 07:36:13.694951 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb","Type":"ContainerStarted","Data":"3054ac41fa4920f7f671b20e3216cb00fa50e9765a2f76bdfbc19ac23c5e3dfe"} Nov 25 07:36:14 crc kubenswrapper[5043]: I1125 07:36:14.706723 5043 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb","Type":"ContainerStarted","Data":"b05b615b77b3cd5556c6b9ef83f2fa6a7d2b1f5597abc914d3e8056598677a4a"} Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.443700 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.526151 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb66a1de-9aa1-4585-9dd1-50632c46deed-config-data\") pod \"cb66a1de-9aa1-4585-9dd1-50632c46deed\" (UID: \"cb66a1de-9aa1-4585-9dd1-50632c46deed\") " Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.526236 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djzfz\" (UniqueName: \"kubernetes.io/projected/cb66a1de-9aa1-4585-9dd1-50632c46deed-kube-api-access-djzfz\") pod \"cb66a1de-9aa1-4585-9dd1-50632c46deed\" (UID: \"cb66a1de-9aa1-4585-9dd1-50632c46deed\") " Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.526338 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb66a1de-9aa1-4585-9dd1-50632c46deed-combined-ca-bundle\") pod \"cb66a1de-9aa1-4585-9dd1-50632c46deed\" (UID: \"cb66a1de-9aa1-4585-9dd1-50632c46deed\") " Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.526391 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb66a1de-9aa1-4585-9dd1-50632c46deed-logs\") pod \"cb66a1de-9aa1-4585-9dd1-50632c46deed\" (UID: \"cb66a1de-9aa1-4585-9dd1-50632c46deed\") " Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.527094 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/cb66a1de-9aa1-4585-9dd1-50632c46deed-logs" (OuterVolumeSpecName: "logs") pod "cb66a1de-9aa1-4585-9dd1-50632c46deed" (UID: "cb66a1de-9aa1-4585-9dd1-50632c46deed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.531027 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb66a1de-9aa1-4585-9dd1-50632c46deed-kube-api-access-djzfz" (OuterVolumeSpecName: "kube-api-access-djzfz") pod "cb66a1de-9aa1-4585-9dd1-50632c46deed" (UID: "cb66a1de-9aa1-4585-9dd1-50632c46deed"). InnerVolumeSpecName "kube-api-access-djzfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.552719 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb66a1de-9aa1-4585-9dd1-50632c46deed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb66a1de-9aa1-4585-9dd1-50632c46deed" (UID: "cb66a1de-9aa1-4585-9dd1-50632c46deed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.562689 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb66a1de-9aa1-4585-9dd1-50632c46deed-config-data" (OuterVolumeSpecName: "config-data") pod "cb66a1de-9aa1-4585-9dd1-50632c46deed" (UID: "cb66a1de-9aa1-4585-9dd1-50632c46deed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.628667 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb66a1de-9aa1-4585-9dd1-50632c46deed-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.628703 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djzfz\" (UniqueName: \"kubernetes.io/projected/cb66a1de-9aa1-4585-9dd1-50632c46deed-kube-api-access-djzfz\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.628715 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb66a1de-9aa1-4585-9dd1-50632c46deed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.628726 5043 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb66a1de-9aa1-4585-9dd1-50632c46deed-logs\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.715454 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb","Type":"ContainerStarted","Data":"12de91fb841a133bbc5663704847c14469dae1d857aae013d54fcaee658ce198"} Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.716898 5043 generic.go:334] "Generic (PLEG): container finished" podID="cb66a1de-9aa1-4585-9dd1-50632c46deed" containerID="09c16d9a311f90dc50642dac0426719f453e1f8a4261144242d7e6d890b0fc4f" exitCode=0 Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.716921 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cb66a1de-9aa1-4585-9dd1-50632c46deed","Type":"ContainerDied","Data":"09c16d9a311f90dc50642dac0426719f453e1f8a4261144242d7e6d890b0fc4f"} Nov 25 07:36:15 crc 
kubenswrapper[5043]: I1125 07:36:15.716937 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cb66a1de-9aa1-4585-9dd1-50632c46deed","Type":"ContainerDied","Data":"be08e02eec4e79408f90fee7db5381701e9d247d65360032684be13f340e0bb5"} Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.716953 5043 scope.go:117] "RemoveContainer" containerID="09c16d9a311f90dc50642dac0426719f453e1f8a4261144242d7e6d890b0fc4f" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.717057 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.749042 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.752057 5043 scope.go:117] "RemoveContainer" containerID="13e7ec15c79bfadd2a5326c197f3a6b107e0a2291ffd92c2c700f9cac0a091ef" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.756674 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.777811 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 07:36:15 crc kubenswrapper[5043]: E1125 07:36:15.778422 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb66a1de-9aa1-4585-9dd1-50632c46deed" containerName="nova-api-log" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.778449 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb66a1de-9aa1-4585-9dd1-50632c46deed" containerName="nova-api-log" Nov 25 07:36:15 crc kubenswrapper[5043]: E1125 07:36:15.778486 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb66a1de-9aa1-4585-9dd1-50632c46deed" containerName="nova-api-api" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.778495 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb66a1de-9aa1-4585-9dd1-50632c46deed" 
containerName="nova-api-api" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.778767 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb66a1de-9aa1-4585-9dd1-50632c46deed" containerName="nova-api-log" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.778800 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb66a1de-9aa1-4585-9dd1-50632c46deed" containerName="nova-api-api" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.781872 5043 scope.go:117] "RemoveContainer" containerID="09c16d9a311f90dc50642dac0426719f453e1f8a4261144242d7e6d890b0fc4f" Nov 25 07:36:15 crc kubenswrapper[5043]: E1125 07:36:15.786586 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09c16d9a311f90dc50642dac0426719f453e1f8a4261144242d7e6d890b0fc4f\": container with ID starting with 09c16d9a311f90dc50642dac0426719f453e1f8a4261144242d7e6d890b0fc4f not found: ID does not exist" containerID="09c16d9a311f90dc50642dac0426719f453e1f8a4261144242d7e6d890b0fc4f" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.786638 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09c16d9a311f90dc50642dac0426719f453e1f8a4261144242d7e6d890b0fc4f"} err="failed to get container status \"09c16d9a311f90dc50642dac0426719f453e1f8a4261144242d7e6d890b0fc4f\": rpc error: code = NotFound desc = could not find container \"09c16d9a311f90dc50642dac0426719f453e1f8a4261144242d7e6d890b0fc4f\": container with ID starting with 09c16d9a311f90dc50642dac0426719f453e1f8a4261144242d7e6d890b0fc4f not found: ID does not exist" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.786662 5043 scope.go:117] "RemoveContainer" containerID="13e7ec15c79bfadd2a5326c197f3a6b107e0a2291ffd92c2c700f9cac0a091ef" Nov 25 07:36:15 crc kubenswrapper[5043]: E1125 07:36:15.786895 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"13e7ec15c79bfadd2a5326c197f3a6b107e0a2291ffd92c2c700f9cac0a091ef\": container with ID starting with 13e7ec15c79bfadd2a5326c197f3a6b107e0a2291ffd92c2c700f9cac0a091ef not found: ID does not exist" containerID="13e7ec15c79bfadd2a5326c197f3a6b107e0a2291ffd92c2c700f9cac0a091ef" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.786917 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13e7ec15c79bfadd2a5326c197f3a6b107e0a2291ffd92c2c700f9cac0a091ef"} err="failed to get container status \"13e7ec15c79bfadd2a5326c197f3a6b107e0a2291ffd92c2c700f9cac0a091ef\": rpc error: code = NotFound desc = could not find container \"13e7ec15c79bfadd2a5326c197f3a6b107e0a2291ffd92c2c700f9cac0a091ef\": container with ID starting with 13e7ec15c79bfadd2a5326c197f3a6b107e0a2291ffd92c2c700f9cac0a091ef not found: ID does not exist" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.793936 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.794065 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.800953 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.801072 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.801167 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.933780 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/420a3043-443f-4d3b-97f5-6a0773e329b9-logs\") pod \"nova-api-0\" (UID: \"420a3043-443f-4d3b-97f5-6a0773e329b9\") " pod="openstack/nova-api-0" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.934273 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/420a3043-443f-4d3b-97f5-6a0773e329b9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"420a3043-443f-4d3b-97f5-6a0773e329b9\") " pod="openstack/nova-api-0" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.934333 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l52dx\" (UniqueName: \"kubernetes.io/projected/420a3043-443f-4d3b-97f5-6a0773e329b9-kube-api-access-l52dx\") pod \"nova-api-0\" (UID: \"420a3043-443f-4d3b-97f5-6a0773e329b9\") " pod="openstack/nova-api-0" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.934420 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/420a3043-443f-4d3b-97f5-6a0773e329b9-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"420a3043-443f-4d3b-97f5-6a0773e329b9\") " pod="openstack/nova-api-0" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.934455 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/420a3043-443f-4d3b-97f5-6a0773e329b9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"420a3043-443f-4d3b-97f5-6a0773e329b9\") " pod="openstack/nova-api-0" Nov 25 07:36:15 crc kubenswrapper[5043]: I1125 07:36:15.934546 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/420a3043-443f-4d3b-97f5-6a0773e329b9-config-data\") pod \"nova-api-0\" (UID: \"420a3043-443f-4d3b-97f5-6a0773e329b9\") " pod="openstack/nova-api-0" Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 07:36:16.036278 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/420a3043-443f-4d3b-97f5-6a0773e329b9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"420a3043-443f-4d3b-97f5-6a0773e329b9\") " pod="openstack/nova-api-0" Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 07:36:16.036348 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/420a3043-443f-4d3b-97f5-6a0773e329b9-config-data\") pod \"nova-api-0\" (UID: \"420a3043-443f-4d3b-97f5-6a0773e329b9\") " pod="openstack/nova-api-0" Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 07:36:16.036417 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/420a3043-443f-4d3b-97f5-6a0773e329b9-logs\") pod \"nova-api-0\" (UID: \"420a3043-443f-4d3b-97f5-6a0773e329b9\") " pod="openstack/nova-api-0" Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 07:36:16.036493 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/420a3043-443f-4d3b-97f5-6a0773e329b9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"420a3043-443f-4d3b-97f5-6a0773e329b9\") " pod="openstack/nova-api-0" Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 07:36:16.036525 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l52dx\" (UniqueName: \"kubernetes.io/projected/420a3043-443f-4d3b-97f5-6a0773e329b9-kube-api-access-l52dx\") pod \"nova-api-0\" (UID: \"420a3043-443f-4d3b-97f5-6a0773e329b9\") " pod="openstack/nova-api-0" Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 07:36:16.036561 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/420a3043-443f-4d3b-97f5-6a0773e329b9-public-tls-certs\") pod \"nova-api-0\" (UID: \"420a3043-443f-4d3b-97f5-6a0773e329b9\") " pod="openstack/nova-api-0" Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 07:36:16.037224 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/420a3043-443f-4d3b-97f5-6a0773e329b9-logs\") pod \"nova-api-0\" (UID: \"420a3043-443f-4d3b-97f5-6a0773e329b9\") " pod="openstack/nova-api-0" Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 07:36:16.040118 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/420a3043-443f-4d3b-97f5-6a0773e329b9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"420a3043-443f-4d3b-97f5-6a0773e329b9\") " pod="openstack/nova-api-0" Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 07:36:16.040141 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/420a3043-443f-4d3b-97f5-6a0773e329b9-config-data\") pod \"nova-api-0\" (UID: \"420a3043-443f-4d3b-97f5-6a0773e329b9\") " pod="openstack/nova-api-0" Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 
07:36:16.042022 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/420a3043-443f-4d3b-97f5-6a0773e329b9-public-tls-certs\") pod \"nova-api-0\" (UID: \"420a3043-443f-4d3b-97f5-6a0773e329b9\") " pod="openstack/nova-api-0" Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 07:36:16.042491 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/420a3043-443f-4d3b-97f5-6a0773e329b9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"420a3043-443f-4d3b-97f5-6a0773e329b9\") " pod="openstack/nova-api-0" Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 07:36:16.052061 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l52dx\" (UniqueName: \"kubernetes.io/projected/420a3043-443f-4d3b-97f5-6a0773e329b9-kube-api-access-l52dx\") pod \"nova-api-0\" (UID: \"420a3043-443f-4d3b-97f5-6a0773e329b9\") " pod="openstack/nova-api-0" Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 07:36:16.118182 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 07:36:16.306396 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 07:36:16.356594 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 07:36:16.588839 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 07:36:16.730233 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb","Type":"ContainerStarted","Data":"f6cc27c2c350894ebf43e8ec49cc712624850c354b8782c5c64d9414af77d155"} Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 07:36:16.732164 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"420a3043-443f-4d3b-97f5-6a0773e329b9","Type":"ContainerStarted","Data":"c9ff6b30abc4721a91be3abe0e1bf0fd1102ce399b979fc458f26b957c4133c3"} Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 07:36:16.765148 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 07:36:16.958908 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-wzdq9"] Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 07:36:16.962191 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wzdq9" Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 07:36:16.995016 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 07:36:16.995160 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb66a1de-9aa1-4585-9dd1-50632c46deed" path="/var/lib/kubelet/pods/cb66a1de-9aa1-4585-9dd1-50632c46deed/volumes" Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 07:36:16.995845 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 25 07:36:16 crc kubenswrapper[5043]: I1125 07:36:16.995887 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wzdq9"] Nov 25 07:36:17 crc kubenswrapper[5043]: I1125 07:36:17.063922 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9f9c54-9408-4021-b741-ffd7c2f49f60-scripts\") pod \"nova-cell1-cell-mapping-wzdq9\" (UID: \"bc9f9c54-9408-4021-b741-ffd7c2f49f60\") " pod="openstack/nova-cell1-cell-mapping-wzdq9" Nov 25 07:36:17 crc kubenswrapper[5043]: I1125 07:36:17.064052 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9f9c54-9408-4021-b741-ffd7c2f49f60-config-data\") pod \"nova-cell1-cell-mapping-wzdq9\" (UID: \"bc9f9c54-9408-4021-b741-ffd7c2f49f60\") " pod="openstack/nova-cell1-cell-mapping-wzdq9" Nov 25 07:36:17 crc kubenswrapper[5043]: I1125 07:36:17.064082 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw645\" (UniqueName: \"kubernetes.io/projected/bc9f9c54-9408-4021-b741-ffd7c2f49f60-kube-api-access-nw645\") pod \"nova-cell1-cell-mapping-wzdq9\" (UID: \"bc9f9c54-9408-4021-b741-ffd7c2f49f60\") 
" pod="openstack/nova-cell1-cell-mapping-wzdq9" Nov 25 07:36:17 crc kubenswrapper[5043]: I1125 07:36:17.064124 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc9f9c54-9408-4021-b741-ffd7c2f49f60-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wzdq9\" (UID: \"bc9f9c54-9408-4021-b741-ffd7c2f49f60\") " pod="openstack/nova-cell1-cell-mapping-wzdq9" Nov 25 07:36:17 crc kubenswrapper[5043]: I1125 07:36:17.166505 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9f9c54-9408-4021-b741-ffd7c2f49f60-config-data\") pod \"nova-cell1-cell-mapping-wzdq9\" (UID: \"bc9f9c54-9408-4021-b741-ffd7c2f49f60\") " pod="openstack/nova-cell1-cell-mapping-wzdq9" Nov 25 07:36:17 crc kubenswrapper[5043]: I1125 07:36:17.166562 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw645\" (UniqueName: \"kubernetes.io/projected/bc9f9c54-9408-4021-b741-ffd7c2f49f60-kube-api-access-nw645\") pod \"nova-cell1-cell-mapping-wzdq9\" (UID: \"bc9f9c54-9408-4021-b741-ffd7c2f49f60\") " pod="openstack/nova-cell1-cell-mapping-wzdq9" Nov 25 07:36:17 crc kubenswrapper[5043]: I1125 07:36:17.166622 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc9f9c54-9408-4021-b741-ffd7c2f49f60-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wzdq9\" (UID: \"bc9f9c54-9408-4021-b741-ffd7c2f49f60\") " pod="openstack/nova-cell1-cell-mapping-wzdq9" Nov 25 07:36:17 crc kubenswrapper[5043]: I1125 07:36:17.166661 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9f9c54-9408-4021-b741-ffd7c2f49f60-scripts\") pod \"nova-cell1-cell-mapping-wzdq9\" (UID: \"bc9f9c54-9408-4021-b741-ffd7c2f49f60\") " 
pod="openstack/nova-cell1-cell-mapping-wzdq9" Nov 25 07:36:17 crc kubenswrapper[5043]: I1125 07:36:17.175374 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc9f9c54-9408-4021-b741-ffd7c2f49f60-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wzdq9\" (UID: \"bc9f9c54-9408-4021-b741-ffd7c2f49f60\") " pod="openstack/nova-cell1-cell-mapping-wzdq9" Nov 25 07:36:17 crc kubenswrapper[5043]: I1125 07:36:17.185968 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9f9c54-9408-4021-b741-ffd7c2f49f60-scripts\") pod \"nova-cell1-cell-mapping-wzdq9\" (UID: \"bc9f9c54-9408-4021-b741-ffd7c2f49f60\") " pod="openstack/nova-cell1-cell-mapping-wzdq9" Nov 25 07:36:17 crc kubenswrapper[5043]: I1125 07:36:17.186104 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw645\" (UniqueName: \"kubernetes.io/projected/bc9f9c54-9408-4021-b741-ffd7c2f49f60-kube-api-access-nw645\") pod \"nova-cell1-cell-mapping-wzdq9\" (UID: \"bc9f9c54-9408-4021-b741-ffd7c2f49f60\") " pod="openstack/nova-cell1-cell-mapping-wzdq9" Nov 25 07:36:17 crc kubenswrapper[5043]: I1125 07:36:17.189458 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9f9c54-9408-4021-b741-ffd7c2f49f60-config-data\") pod \"nova-cell1-cell-mapping-wzdq9\" (UID: \"bc9f9c54-9408-4021-b741-ffd7c2f49f60\") " pod="openstack/nova-cell1-cell-mapping-wzdq9" Nov 25 07:36:17 crc kubenswrapper[5043]: I1125 07:36:17.316565 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wzdq9" Nov 25 07:36:17 crc kubenswrapper[5043]: I1125 07:36:17.753626 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb","Type":"ContainerStarted","Data":"2a351b676547773703c0423766f2051adf7c611ff79575f78ce34b932e02c720"} Nov 25 07:36:17 crc kubenswrapper[5043]: I1125 07:36:17.758890 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 07:36:17 crc kubenswrapper[5043]: I1125 07:36:17.758878 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" containerName="ceilometer-notification-agent" containerID="cri-o://12de91fb841a133bbc5663704847c14469dae1d857aae013d54fcaee658ce198" gracePeriod=30 Nov 25 07:36:17 crc kubenswrapper[5043]: I1125 07:36:17.758811 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" containerName="sg-core" containerID="cri-o://f6cc27c2c350894ebf43e8ec49cc712624850c354b8782c5c64d9414af77d155" gracePeriod=30 Nov 25 07:36:17 crc kubenswrapper[5043]: I1125 07:36:17.758795 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" containerName="proxy-httpd" containerID="cri-o://2a351b676547773703c0423766f2051adf7c611ff79575f78ce34b932e02c720" gracePeriod=30 Nov 25 07:36:17 crc kubenswrapper[5043]: I1125 07:36:17.758524 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" containerName="ceilometer-central-agent" containerID="cri-o://b05b615b77b3cd5556c6b9ef83f2fa6a7d2b1f5597abc914d3e8056598677a4a" gracePeriod=30 Nov 25 07:36:17 crc kubenswrapper[5043]: I1125 07:36:17.764016 5043 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"420a3043-443f-4d3b-97f5-6a0773e329b9","Type":"ContainerStarted","Data":"a88e38086329fb4d0f462f2414efa3b8bbf66ad55afb6e4418665e02e0b63bf9"} Nov 25 07:36:17 crc kubenswrapper[5043]: I1125 07:36:17.764050 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"420a3043-443f-4d3b-97f5-6a0773e329b9","Type":"ContainerStarted","Data":"754ed0c4b8a49e3c391f5a8d06de9a7ebd3f0c4bfcb677d73964a5a13170cca7"} Nov 25 07:36:17 crc kubenswrapper[5043]: I1125 07:36:17.784705 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.053851787 podStartE2EDuration="5.784689404s" podCreationTimestamp="2025-11-25 07:36:12 +0000 UTC" firstStartedPulling="2025-11-25 07:36:13.614315419 +0000 UTC m=+1237.782511140" lastFinishedPulling="2025-11-25 07:36:17.345153026 +0000 UTC m=+1241.513348757" observedRunningTime="2025-11-25 07:36:17.78194058 +0000 UTC m=+1241.950136301" watchObservedRunningTime="2025-11-25 07:36:17.784689404 +0000 UTC m=+1241.952885125" Nov 25 07:36:17 crc kubenswrapper[5043]: I1125 07:36:17.806130 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.806100041 podStartE2EDuration="2.806100041s" podCreationTimestamp="2025-11-25 07:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:36:17.79790876 +0000 UTC m=+1241.966104491" watchObservedRunningTime="2025-11-25 07:36:17.806100041 +0000 UTC m=+1241.974295772" Nov 25 07:36:17 crc kubenswrapper[5043]: I1125 07:36:17.850969 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wzdq9"] Nov 25 07:36:17 crc kubenswrapper[5043]: W1125 07:36:17.851879 5043 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc9f9c54_9408_4021_b741_ffd7c2f49f60.slice/crio-5d299ca6576839ee960f6f608ab664cfe16824440ea983cf57bf49d62df251c7 WatchSource:0}: Error finding container 5d299ca6576839ee960f6f608ab664cfe16824440ea983cf57bf49d62df251c7: Status 404 returned error can't find the container with id 5d299ca6576839ee960f6f608ab664cfe16824440ea983cf57bf49d62df251c7 Nov 25 07:36:18 crc kubenswrapper[5043]: I1125 07:36:18.778080 5043 generic.go:334] "Generic (PLEG): container finished" podID="fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" containerID="f6cc27c2c350894ebf43e8ec49cc712624850c354b8782c5c64d9414af77d155" exitCode=2 Nov 25 07:36:18 crc kubenswrapper[5043]: I1125 07:36:18.778343 5043 generic.go:334] "Generic (PLEG): container finished" podID="fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" containerID="12de91fb841a133bbc5663704847c14469dae1d857aae013d54fcaee658ce198" exitCode=0 Nov 25 07:36:18 crc kubenswrapper[5043]: I1125 07:36:18.778155 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb","Type":"ContainerDied","Data":"f6cc27c2c350894ebf43e8ec49cc712624850c354b8782c5c64d9414af77d155"} Nov 25 07:36:18 crc kubenswrapper[5043]: I1125 07:36:18.778412 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb","Type":"ContainerDied","Data":"12de91fb841a133bbc5663704847c14469dae1d857aae013d54fcaee658ce198"} Nov 25 07:36:18 crc kubenswrapper[5043]: I1125 07:36:18.782014 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wzdq9" event={"ID":"bc9f9c54-9408-4021-b741-ffd7c2f49f60","Type":"ContainerStarted","Data":"09a4616f6b79a50e29307112f9cf545b0bca42c4532b28686880865375b7296d"} Nov 25 07:36:18 crc kubenswrapper[5043]: I1125 07:36:18.782079 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wzdq9" 
event={"ID":"bc9f9c54-9408-4021-b741-ffd7c2f49f60","Type":"ContainerStarted","Data":"5d299ca6576839ee960f6f608ab664cfe16824440ea983cf57bf49d62df251c7"} Nov 25 07:36:18 crc kubenswrapper[5043]: I1125 07:36:18.800943 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-wzdq9" podStartSLOduration=2.800925577 podStartE2EDuration="2.800925577s" podCreationTimestamp="2025-11-25 07:36:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:36:18.799004885 +0000 UTC m=+1242.967200626" watchObservedRunningTime="2025-11-25 07:36:18.800925577 +0000 UTC m=+1242.969121288" Nov 25 07:36:19 crc kubenswrapper[5043]: I1125 07:36:19.143804 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f95c456cf-t8252" Nov 25 07:36:19 crc kubenswrapper[5043]: I1125 07:36:19.210090 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f7bbc55bc-5zq72"] Nov 25 07:36:19 crc kubenswrapper[5043]: I1125 07:36:19.210285 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" podUID="e8ebbe31-e042-46ed-9cdf-d58522420f63" containerName="dnsmasq-dns" containerID="cri-o://23617065064bfc266cb7f42685bbdd1a717ebffb5ee876a3887c96002de3e4eb" gracePeriod=10 Nov 25 07:36:19 crc kubenswrapper[5043]: I1125 07:36:19.803932 5043 generic.go:334] "Generic (PLEG): container finished" podID="e8ebbe31-e042-46ed-9cdf-d58522420f63" containerID="23617065064bfc266cb7f42685bbdd1a717ebffb5ee876a3887c96002de3e4eb" exitCode=0 Nov 25 07:36:19 crc kubenswrapper[5043]: I1125 07:36:19.808328 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" event={"ID":"e8ebbe31-e042-46ed-9cdf-d58522420f63","Type":"ContainerDied","Data":"23617065064bfc266cb7f42685bbdd1a717ebffb5ee876a3887c96002de3e4eb"} Nov 25 
07:36:19 crc kubenswrapper[5043]: I1125 07:36:19.939974 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72"
Nov 25 07:36:20 crc kubenswrapper[5043]: I1125 07:36:20.021963 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lhtn\" (UniqueName: \"kubernetes.io/projected/e8ebbe31-e042-46ed-9cdf-d58522420f63-kube-api-access-2lhtn\") pod \"e8ebbe31-e042-46ed-9cdf-d58522420f63\" (UID: \"e8ebbe31-e042-46ed-9cdf-d58522420f63\") "
Nov 25 07:36:20 crc kubenswrapper[5043]: I1125 07:36:20.022092 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8ebbe31-e042-46ed-9cdf-d58522420f63-ovsdbserver-sb\") pod \"e8ebbe31-e042-46ed-9cdf-d58522420f63\" (UID: \"e8ebbe31-e042-46ed-9cdf-d58522420f63\") "
Nov 25 07:36:20 crc kubenswrapper[5043]: I1125 07:36:20.022193 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ebbe31-e042-46ed-9cdf-d58522420f63-config\") pod \"e8ebbe31-e042-46ed-9cdf-d58522420f63\" (UID: \"e8ebbe31-e042-46ed-9cdf-d58522420f63\") "
Nov 25 07:36:20 crc kubenswrapper[5043]: I1125 07:36:20.022278 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8ebbe31-e042-46ed-9cdf-d58522420f63-ovsdbserver-nb\") pod \"e8ebbe31-e042-46ed-9cdf-d58522420f63\" (UID: \"e8ebbe31-e042-46ed-9cdf-d58522420f63\") "
Nov 25 07:36:20 crc kubenswrapper[5043]: I1125 07:36:20.022340 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8ebbe31-e042-46ed-9cdf-d58522420f63-dns-svc\") pod \"e8ebbe31-e042-46ed-9cdf-d58522420f63\" (UID: \"e8ebbe31-e042-46ed-9cdf-d58522420f63\") "
Nov 25 07:36:20 crc kubenswrapper[5043]: I1125 07:36:20.046682 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8ebbe31-e042-46ed-9cdf-d58522420f63-kube-api-access-2lhtn" (OuterVolumeSpecName: "kube-api-access-2lhtn") pod "e8ebbe31-e042-46ed-9cdf-d58522420f63" (UID: "e8ebbe31-e042-46ed-9cdf-d58522420f63"). InnerVolumeSpecName "kube-api-access-2lhtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:36:20 crc kubenswrapper[5043]: I1125 07:36:20.079319 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8ebbe31-e042-46ed-9cdf-d58522420f63-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e8ebbe31-e042-46ed-9cdf-d58522420f63" (UID: "e8ebbe31-e042-46ed-9cdf-d58522420f63"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:36:20 crc kubenswrapper[5043]: I1125 07:36:20.085521 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8ebbe31-e042-46ed-9cdf-d58522420f63-config" (OuterVolumeSpecName: "config") pod "e8ebbe31-e042-46ed-9cdf-d58522420f63" (UID: "e8ebbe31-e042-46ed-9cdf-d58522420f63"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:36:20 crc kubenswrapper[5043]: I1125 07:36:20.090471 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8ebbe31-e042-46ed-9cdf-d58522420f63-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e8ebbe31-e042-46ed-9cdf-d58522420f63" (UID: "e8ebbe31-e042-46ed-9cdf-d58522420f63"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:36:20 crc kubenswrapper[5043]: I1125 07:36:20.095693 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8ebbe31-e042-46ed-9cdf-d58522420f63-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e8ebbe31-e042-46ed-9cdf-d58522420f63" (UID: "e8ebbe31-e042-46ed-9cdf-d58522420f63"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 07:36:20 crc kubenswrapper[5043]: I1125 07:36:20.125680 5043 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8ebbe31-e042-46ed-9cdf-d58522420f63-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 25 07:36:20 crc kubenswrapper[5043]: I1125 07:36:20.126019 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lhtn\" (UniqueName: \"kubernetes.io/projected/e8ebbe31-e042-46ed-9cdf-d58522420f63-kube-api-access-2lhtn\") on node \"crc\" DevicePath \"\""
Nov 25 07:36:20 crc kubenswrapper[5043]: I1125 07:36:20.126104 5043 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8ebbe31-e042-46ed-9cdf-d58522420f63-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 25 07:36:20 crc kubenswrapper[5043]: I1125 07:36:20.126176 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8ebbe31-e042-46ed-9cdf-d58522420f63-config\") on node \"crc\" DevicePath \"\""
Nov 25 07:36:20 crc kubenswrapper[5043]: I1125 07:36:20.126242 5043 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8ebbe31-e042-46ed-9cdf-d58522420f63-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 25 07:36:20 crc kubenswrapper[5043]: I1125 07:36:20.821275 5043 generic.go:334] "Generic (PLEG): container finished" podID="fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" containerID="b05b615b77b3cd5556c6b9ef83f2fa6a7d2b1f5597abc914d3e8056598677a4a" exitCode=0
Nov 25 07:36:20 crc kubenswrapper[5043]: I1125 07:36:20.821383 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb","Type":"ContainerDied","Data":"b05b615b77b3cd5556c6b9ef83f2fa6a7d2b1f5597abc914d3e8056598677a4a"}
Nov 25 07:36:20 crc kubenswrapper[5043]: I1125 07:36:20.823452 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72" event={"ID":"e8ebbe31-e042-46ed-9cdf-d58522420f63","Type":"ContainerDied","Data":"ce7e5771bf331392a82a9d90342218b39353d724176400ff49b2ca9676772cc8"}
Nov 25 07:36:20 crc kubenswrapper[5043]: I1125 07:36:20.823492 5043 scope.go:117] "RemoveContainer" containerID="23617065064bfc266cb7f42685bbdd1a717ebffb5ee876a3887c96002de3e4eb"
Nov 25 07:36:20 crc kubenswrapper[5043]: I1125 07:36:20.823560 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f7bbc55bc-5zq72"
Nov 25 07:36:20 crc kubenswrapper[5043]: I1125 07:36:20.847829 5043 scope.go:117] "RemoveContainer" containerID="d2ae87f6558827b3bbaad39b409090873a8c7fedb8a5a0e7a8ee099dcdb91145"
Nov 25 07:36:20 crc kubenswrapper[5043]: I1125 07:36:20.865870 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f7bbc55bc-5zq72"]
Nov 25 07:36:20 crc kubenswrapper[5043]: I1125 07:36:20.873173 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f7bbc55bc-5zq72"]
Nov 25 07:36:20 crc kubenswrapper[5043]: I1125 07:36:20.980213 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8ebbe31-e042-46ed-9cdf-d58522420f63" path="/var/lib/kubelet/pods/e8ebbe31-e042-46ed-9cdf-d58522420f63/volumes"
Nov 25 07:36:23 crc kubenswrapper[5043]: I1125 07:36:23.858089 5043 generic.go:334] "Generic (PLEG): container finished" podID="bc9f9c54-9408-4021-b741-ffd7c2f49f60" containerID="09a4616f6b79a50e29307112f9cf545b0bca42c4532b28686880865375b7296d" exitCode=0
Nov 25 07:36:23 crc kubenswrapper[5043]: I1125 07:36:23.858216 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wzdq9" event={"ID":"bc9f9c54-9408-4021-b741-ffd7c2f49f60","Type":"ContainerDied","Data":"09a4616f6b79a50e29307112f9cf545b0bca42c4532b28686880865375b7296d"}
Nov 25 07:36:25 crc kubenswrapper[5043]: I1125 07:36:25.217829 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wzdq9"
Nov 25 07:36:25 crc kubenswrapper[5043]: I1125 07:36:25.325907 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9f9c54-9408-4021-b741-ffd7c2f49f60-config-data\") pod \"bc9f9c54-9408-4021-b741-ffd7c2f49f60\" (UID: \"bc9f9c54-9408-4021-b741-ffd7c2f49f60\") "
Nov 25 07:36:25 crc kubenswrapper[5043]: I1125 07:36:25.326021 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc9f9c54-9408-4021-b741-ffd7c2f49f60-combined-ca-bundle\") pod \"bc9f9c54-9408-4021-b741-ffd7c2f49f60\" (UID: \"bc9f9c54-9408-4021-b741-ffd7c2f49f60\") "
Nov 25 07:36:25 crc kubenswrapper[5043]: I1125 07:36:25.326047 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9f9c54-9408-4021-b741-ffd7c2f49f60-scripts\") pod \"bc9f9c54-9408-4021-b741-ffd7c2f49f60\" (UID: \"bc9f9c54-9408-4021-b741-ffd7c2f49f60\") "
Nov 25 07:36:25 crc kubenswrapper[5043]: I1125 07:36:25.326095 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw645\" (UniqueName: \"kubernetes.io/projected/bc9f9c54-9408-4021-b741-ffd7c2f49f60-kube-api-access-nw645\") pod \"bc9f9c54-9408-4021-b741-ffd7c2f49f60\" (UID: \"bc9f9c54-9408-4021-b741-ffd7c2f49f60\") "
Nov 25 07:36:25 crc kubenswrapper[5043]: I1125 07:36:25.331964 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc9f9c54-9408-4021-b741-ffd7c2f49f60-scripts" (OuterVolumeSpecName: "scripts") pod "bc9f9c54-9408-4021-b741-ffd7c2f49f60" (UID: "bc9f9c54-9408-4021-b741-ffd7c2f49f60"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:36:25 crc kubenswrapper[5043]: I1125 07:36:25.332378 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc9f9c54-9408-4021-b741-ffd7c2f49f60-kube-api-access-nw645" (OuterVolumeSpecName: "kube-api-access-nw645") pod "bc9f9c54-9408-4021-b741-ffd7c2f49f60" (UID: "bc9f9c54-9408-4021-b741-ffd7c2f49f60"). InnerVolumeSpecName "kube-api-access-nw645". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:36:25 crc kubenswrapper[5043]: I1125 07:36:25.360995 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc9f9c54-9408-4021-b741-ffd7c2f49f60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc9f9c54-9408-4021-b741-ffd7c2f49f60" (UID: "bc9f9c54-9408-4021-b741-ffd7c2f49f60"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:36:25 crc kubenswrapper[5043]: I1125 07:36:25.364245 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc9f9c54-9408-4021-b741-ffd7c2f49f60-config-data" (OuterVolumeSpecName: "config-data") pod "bc9f9c54-9408-4021-b741-ffd7c2f49f60" (UID: "bc9f9c54-9408-4021-b741-ffd7c2f49f60"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:36:25 crc kubenswrapper[5043]: I1125 07:36:25.428373 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc9f9c54-9408-4021-b741-ffd7c2f49f60-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 07:36:25 crc kubenswrapper[5043]: I1125 07:36:25.428408 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9f9c54-9408-4021-b741-ffd7c2f49f60-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 07:36:25 crc kubenswrapper[5043]: I1125 07:36:25.428416 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw645\" (UniqueName: \"kubernetes.io/projected/bc9f9c54-9408-4021-b741-ffd7c2f49f60-kube-api-access-nw645\") on node \"crc\" DevicePath \"\""
Nov 25 07:36:25 crc kubenswrapper[5043]: I1125 07:36:25.428427 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9f9c54-9408-4021-b741-ffd7c2f49f60-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 07:36:25 crc kubenswrapper[5043]: I1125 07:36:25.896921 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wzdq9" event={"ID":"bc9f9c54-9408-4021-b741-ffd7c2f49f60","Type":"ContainerDied","Data":"5d299ca6576839ee960f6f608ab664cfe16824440ea983cf57bf49d62df251c7"}
Nov 25 07:36:25 crc kubenswrapper[5043]: I1125 07:36:25.896976 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d299ca6576839ee960f6f608ab664cfe16824440ea983cf57bf49d62df251c7"
Nov 25 07:36:25 crc kubenswrapper[5043]: I1125 07:36:25.897051 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wzdq9"
Nov 25 07:36:26 crc kubenswrapper[5043]: I1125 07:36:26.076632 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 25 07:36:26 crc kubenswrapper[5043]: I1125 07:36:26.077140 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="420a3043-443f-4d3b-97f5-6a0773e329b9" containerName="nova-api-log" containerID="cri-o://754ed0c4b8a49e3c391f5a8d06de9a7ebd3f0c4bfcb677d73964a5a13170cca7" gracePeriod=30
Nov 25 07:36:26 crc kubenswrapper[5043]: I1125 07:36:26.077242 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="420a3043-443f-4d3b-97f5-6a0773e329b9" containerName="nova-api-api" containerID="cri-o://a88e38086329fb4d0f462f2414efa3b8bbf66ad55afb6e4418665e02e0b63bf9" gracePeriod=30
Nov 25 07:36:26 crc kubenswrapper[5043]: I1125 07:36:26.102191 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 25 07:36:26 crc kubenswrapper[5043]: I1125 07:36:26.102400 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6aee8182-b123-4afd-aa11-7b987f0e1213" containerName="nova-scheduler-scheduler" containerID="cri-o://574e2f61fe6c28efcc1629927dd0f5849572bffb8d25c5e7c313e5a3f70c5fdb" gracePeriod=30
Nov 25 07:36:26 crc kubenswrapper[5043]: I1125 07:36:26.140305 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 07:36:26 crc kubenswrapper[5043]: I1125 07:36:26.140537 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6f863d4a-cbec-4cc9-b0e8-968685a1d72a" containerName="nova-metadata-log" containerID="cri-o://4493280d0f8032617e96b938485345ddf242e2e4825000068214fe95f7376a2c" gracePeriod=30
Nov 25 07:36:26 crc kubenswrapper[5043]: I1125 07:36:26.140639 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6f863d4a-cbec-4cc9-b0e8-968685a1d72a" containerName="nova-metadata-metadata" containerID="cri-o://822e97f8b985c23cfd845805dff56cc6e65255c7dd660b12a761f110c914e012" gracePeriod=30
Nov 25 07:36:26 crc kubenswrapper[5043]: I1125 07:36:26.907490 5043 generic.go:334] "Generic (PLEG): container finished" podID="420a3043-443f-4d3b-97f5-6a0773e329b9" containerID="a88e38086329fb4d0f462f2414efa3b8bbf66ad55afb6e4418665e02e0b63bf9" exitCode=0
Nov 25 07:36:26 crc kubenswrapper[5043]: I1125 07:36:26.908618 5043 generic.go:334] "Generic (PLEG): container finished" podID="420a3043-443f-4d3b-97f5-6a0773e329b9" containerID="754ed0c4b8a49e3c391f5a8d06de9a7ebd3f0c4bfcb677d73964a5a13170cca7" exitCode=143
Nov 25 07:36:26 crc kubenswrapper[5043]: I1125 07:36:26.907684 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"420a3043-443f-4d3b-97f5-6a0773e329b9","Type":"ContainerDied","Data":"a88e38086329fb4d0f462f2414efa3b8bbf66ad55afb6e4418665e02e0b63bf9"}
Nov 25 07:36:26 crc kubenswrapper[5043]: I1125 07:36:26.908793 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"420a3043-443f-4d3b-97f5-6a0773e329b9","Type":"ContainerDied","Data":"754ed0c4b8a49e3c391f5a8d06de9a7ebd3f0c4bfcb677d73964a5a13170cca7"}
Nov 25 07:36:26 crc kubenswrapper[5043]: I1125 07:36:26.914271 5043 generic.go:334] "Generic (PLEG): container finished" podID="6aee8182-b123-4afd-aa11-7b987f0e1213" containerID="574e2f61fe6c28efcc1629927dd0f5849572bffb8d25c5e7c313e5a3f70c5fdb" exitCode=0
Nov 25 07:36:26 crc kubenswrapper[5043]: I1125 07:36:26.914326 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6aee8182-b123-4afd-aa11-7b987f0e1213","Type":"ContainerDied","Data":"574e2f61fe6c28efcc1629927dd0f5849572bffb8d25c5e7c313e5a3f70c5fdb"}
Nov 25 07:36:26 crc kubenswrapper[5043]: I1125 07:36:26.916357 5043 generic.go:334] "Generic (PLEG): container finished" podID="6f863d4a-cbec-4cc9-b0e8-968685a1d72a" containerID="4493280d0f8032617e96b938485345ddf242e2e4825000068214fe95f7376a2c" exitCode=143
Nov 25 07:36:26 crc kubenswrapper[5043]: I1125 07:36:26.916383 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6f863d4a-cbec-4cc9-b0e8-968685a1d72a","Type":"ContainerDied","Data":"4493280d0f8032617e96b938485345ddf242e2e4825000068214fe95f7376a2c"}
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.067639 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.166477 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l52dx\" (UniqueName: \"kubernetes.io/projected/420a3043-443f-4d3b-97f5-6a0773e329b9-kube-api-access-l52dx\") pod \"420a3043-443f-4d3b-97f5-6a0773e329b9\" (UID: \"420a3043-443f-4d3b-97f5-6a0773e329b9\") "
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.166572 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/420a3043-443f-4d3b-97f5-6a0773e329b9-config-data\") pod \"420a3043-443f-4d3b-97f5-6a0773e329b9\" (UID: \"420a3043-443f-4d3b-97f5-6a0773e329b9\") "
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.166630 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/420a3043-443f-4d3b-97f5-6a0773e329b9-logs\") pod \"420a3043-443f-4d3b-97f5-6a0773e329b9\" (UID: \"420a3043-443f-4d3b-97f5-6a0773e329b9\") "
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.166653 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/420a3043-443f-4d3b-97f5-6a0773e329b9-combined-ca-bundle\") pod \"420a3043-443f-4d3b-97f5-6a0773e329b9\" (UID: \"420a3043-443f-4d3b-97f5-6a0773e329b9\") "
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.166682 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/420a3043-443f-4d3b-97f5-6a0773e329b9-internal-tls-certs\") pod \"420a3043-443f-4d3b-97f5-6a0773e329b9\" (UID: \"420a3043-443f-4d3b-97f5-6a0773e329b9\") "
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.166726 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/420a3043-443f-4d3b-97f5-6a0773e329b9-public-tls-certs\") pod \"420a3043-443f-4d3b-97f5-6a0773e329b9\" (UID: \"420a3043-443f-4d3b-97f5-6a0773e329b9\") "
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.167053 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/420a3043-443f-4d3b-97f5-6a0773e329b9-logs" (OuterVolumeSpecName: "logs") pod "420a3043-443f-4d3b-97f5-6a0773e329b9" (UID: "420a3043-443f-4d3b-97f5-6a0773e329b9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.167459 5043 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/420a3043-443f-4d3b-97f5-6a0773e329b9-logs\") on node \"crc\" DevicePath \"\""
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.175042 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/420a3043-443f-4d3b-97f5-6a0773e329b9-kube-api-access-l52dx" (OuterVolumeSpecName: "kube-api-access-l52dx") pod "420a3043-443f-4d3b-97f5-6a0773e329b9" (UID: "420a3043-443f-4d3b-97f5-6a0773e329b9"). InnerVolumeSpecName "kube-api-access-l52dx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.175091 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.194475 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420a3043-443f-4d3b-97f5-6a0773e329b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "420a3043-443f-4d3b-97f5-6a0773e329b9" (UID: "420a3043-443f-4d3b-97f5-6a0773e329b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.231588 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420a3043-443f-4d3b-97f5-6a0773e329b9-config-data" (OuterVolumeSpecName: "config-data") pod "420a3043-443f-4d3b-97f5-6a0773e329b9" (UID: "420a3043-443f-4d3b-97f5-6a0773e329b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.251202 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420a3043-443f-4d3b-97f5-6a0773e329b9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "420a3043-443f-4d3b-97f5-6a0773e329b9" (UID: "420a3043-443f-4d3b-97f5-6a0773e329b9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.255657 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420a3043-443f-4d3b-97f5-6a0773e329b9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "420a3043-443f-4d3b-97f5-6a0773e329b9" (UID: "420a3043-443f-4d3b-97f5-6a0773e329b9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.268307 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aee8182-b123-4afd-aa11-7b987f0e1213-combined-ca-bundle\") pod \"6aee8182-b123-4afd-aa11-7b987f0e1213\" (UID: \"6aee8182-b123-4afd-aa11-7b987f0e1213\") "
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.268427 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmscm\" (UniqueName: \"kubernetes.io/projected/6aee8182-b123-4afd-aa11-7b987f0e1213-kube-api-access-lmscm\") pod \"6aee8182-b123-4afd-aa11-7b987f0e1213\" (UID: \"6aee8182-b123-4afd-aa11-7b987f0e1213\") "
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.268584 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aee8182-b123-4afd-aa11-7b987f0e1213-config-data\") pod \"6aee8182-b123-4afd-aa11-7b987f0e1213\" (UID: \"6aee8182-b123-4afd-aa11-7b987f0e1213\") "
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.269359 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/420a3043-443f-4d3b-97f5-6a0773e329b9-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.269390 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/420a3043-443f-4d3b-97f5-6a0773e329b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.269399 5043 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/420a3043-443f-4d3b-97f5-6a0773e329b9-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.269407 5043 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/420a3043-443f-4d3b-97f5-6a0773e329b9-public-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.269418 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l52dx\" (UniqueName: \"kubernetes.io/projected/420a3043-443f-4d3b-97f5-6a0773e329b9-kube-api-access-l52dx\") on node \"crc\" DevicePath \"\""
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.272554 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aee8182-b123-4afd-aa11-7b987f0e1213-kube-api-access-lmscm" (OuterVolumeSpecName: "kube-api-access-lmscm") pod "6aee8182-b123-4afd-aa11-7b987f0e1213" (UID: "6aee8182-b123-4afd-aa11-7b987f0e1213"). InnerVolumeSpecName "kube-api-access-lmscm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.291589 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aee8182-b123-4afd-aa11-7b987f0e1213-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6aee8182-b123-4afd-aa11-7b987f0e1213" (UID: "6aee8182-b123-4afd-aa11-7b987f0e1213"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.291642 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aee8182-b123-4afd-aa11-7b987f0e1213-config-data" (OuterVolumeSpecName: "config-data") pod "6aee8182-b123-4afd-aa11-7b987f0e1213" (UID: "6aee8182-b123-4afd-aa11-7b987f0e1213"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.371188 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aee8182-b123-4afd-aa11-7b987f0e1213-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.371214 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aee8182-b123-4afd-aa11-7b987f0e1213-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.371225 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmscm\" (UniqueName: \"kubernetes.io/projected/6aee8182-b123-4afd-aa11-7b987f0e1213-kube-api-access-lmscm\") on node \"crc\" DevicePath \"\""
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.932279 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"420a3043-443f-4d3b-97f5-6a0773e329b9","Type":"ContainerDied","Data":"c9ff6b30abc4721a91be3abe0e1bf0fd1102ce399b979fc458f26b957c4133c3"}
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.932743 5043 scope.go:117] "RemoveContainer" containerID="a88e38086329fb4d0f462f2414efa3b8bbf66ad55afb6e4418665e02e0b63bf9"
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.932302 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.942569 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6aee8182-b123-4afd-aa11-7b987f0e1213","Type":"ContainerDied","Data":"c67bae05faec8fcb5aa8707535dedf1c5460a9be5afa3b93dcf4ab48843b22e6"}
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.942690 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.968975 5043 scope.go:117] "RemoveContainer" containerID="754ed0c4b8a49e3c391f5a8d06de9a7ebd3f0c4bfcb677d73964a5a13170cca7"
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.986516 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 25 07:36:27 crc kubenswrapper[5043]: I1125 07:36:27.999111 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.011118 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.012373 5043 scope.go:117] "RemoveContainer" containerID="574e2f61fe6c28efcc1629927dd0f5849572bffb8d25c5e7c313e5a3f70c5fdb"
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.024878 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Nov 25 07:36:28 crc kubenswrapper[5043]: E1125 07:36:28.025339 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ebbe31-e042-46ed-9cdf-d58522420f63" containerName="dnsmasq-dns"
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.025354 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ebbe31-e042-46ed-9cdf-d58522420f63" containerName="dnsmasq-dns"
Nov 25 07:36:28 crc kubenswrapper[5043]: E1125 07:36:28.025375 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="420a3043-443f-4d3b-97f5-6a0773e329b9" containerName="nova-api-log"
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.025383 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="420a3043-443f-4d3b-97f5-6a0773e329b9" containerName="nova-api-log"
Nov 25 07:36:28 crc kubenswrapper[5043]: E1125 07:36:28.025395 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="420a3043-443f-4d3b-97f5-6a0773e329b9" containerName="nova-api-api"
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.025403 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="420a3043-443f-4d3b-97f5-6a0773e329b9" containerName="nova-api-api"
Nov 25 07:36:28 crc kubenswrapper[5043]: E1125 07:36:28.025415 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9f9c54-9408-4021-b741-ffd7c2f49f60" containerName="nova-manage"
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.025422 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9f9c54-9408-4021-b741-ffd7c2f49f60" containerName="nova-manage"
Nov 25 07:36:28 crc kubenswrapper[5043]: E1125 07:36:28.025446 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aee8182-b123-4afd-aa11-7b987f0e1213" containerName="nova-scheduler-scheduler"
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.025453 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aee8182-b123-4afd-aa11-7b987f0e1213" containerName="nova-scheduler-scheduler"
Nov 25 07:36:28 crc kubenswrapper[5043]: E1125 07:36:28.025462 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8ebbe31-e042-46ed-9cdf-d58522420f63" containerName="init"
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.025470 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8ebbe31-e042-46ed-9cdf-d58522420f63" containerName="init"
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.025705 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aee8182-b123-4afd-aa11-7b987f0e1213" containerName="nova-scheduler-scheduler"
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.025723 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc9f9c54-9408-4021-b741-ffd7c2f49f60" containerName="nova-manage"
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.025739 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="420a3043-443f-4d3b-97f5-6a0773e329b9" containerName="nova-api-api"
Nov 25 crc kubenswrapper[5043]: I1125 07:36:28.025752 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="420a3043-443f-4d3b-97f5-6a0773e329b9" containerName="nova-api-log"
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.025763 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8ebbe31-e042-46ed-9cdf-d58522420f63" containerName="dnsmasq-dns"
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.026935 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.029880 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.030183 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.030638 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.032240 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.057192 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.060505 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.063537 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.075510 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.083439 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.084805 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0bd148f-caab-423f-88d5-45392e63775d-logs\") pod \"nova-api-0\" (UID: \"e0bd148f-caab-423f-88d5-45392e63775d\") " pod="openstack/nova-api-0"
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.084856 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0bd148f-caab-423f-88d5-45392e63775d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e0bd148f-caab-423f-88d5-45392e63775d\") " pod="openstack/nova-api-0"
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.084958 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0bd148f-caab-423f-88d5-45392e63775d-config-data\") pod \"nova-api-0\" (UID: \"e0bd148f-caab-423f-88d5-45392e63775d\") " pod="openstack/nova-api-0"
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.085037 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0bd148f-caab-423f-88d5-45392e63775d-public-tls-certs\") pod \"nova-api-0\" (UID: \"e0bd148f-caab-423f-88d5-45392e63775d\") " pod="openstack/nova-api-0"
Nov 25
07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.085182 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0bd148f-caab-423f-88d5-45392e63775d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e0bd148f-caab-423f-88d5-45392e63775d\") " pod="openstack/nova-api-0" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.085237 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7k9z\" (UniqueName: \"kubernetes.io/projected/e0bd148f-caab-423f-88d5-45392e63775d-kube-api-access-b7k9z\") pod \"nova-api-0\" (UID: \"e0bd148f-caab-423f-88d5-45392e63775d\") " pod="openstack/nova-api-0" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.187997 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0bd148f-caab-423f-88d5-45392e63775d-logs\") pod \"nova-api-0\" (UID: \"e0bd148f-caab-423f-88d5-45392e63775d\") " pod="openstack/nova-api-0" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.188039 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0bd148f-caab-423f-88d5-45392e63775d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e0bd148f-caab-423f-88d5-45392e63775d\") " pod="openstack/nova-api-0" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.188104 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be59e894-a929-4498-bee2-cf852ca1ae67-config-data\") pod \"nova-scheduler-0\" (UID: \"be59e894-a929-4498-bee2-cf852ca1ae67\") " pod="openstack/nova-scheduler-0" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.188135 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e0bd148f-caab-423f-88d5-45392e63775d-config-data\") pod \"nova-api-0\" (UID: \"e0bd148f-caab-423f-88d5-45392e63775d\") " pod="openstack/nova-api-0" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.188167 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be59e894-a929-4498-bee2-cf852ca1ae67-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"be59e894-a929-4498-bee2-cf852ca1ae67\") " pod="openstack/nova-scheduler-0" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.188187 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk6nb\" (UniqueName: \"kubernetes.io/projected/be59e894-a929-4498-bee2-cf852ca1ae67-kube-api-access-nk6nb\") pod \"nova-scheduler-0\" (UID: \"be59e894-a929-4498-bee2-cf852ca1ae67\") " pod="openstack/nova-scheduler-0" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.188225 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0bd148f-caab-423f-88d5-45392e63775d-public-tls-certs\") pod \"nova-api-0\" (UID: \"e0bd148f-caab-423f-88d5-45392e63775d\") " pod="openstack/nova-api-0" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.188253 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0bd148f-caab-423f-88d5-45392e63775d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e0bd148f-caab-423f-88d5-45392e63775d\") " pod="openstack/nova-api-0" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.188274 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7k9z\" (UniqueName: \"kubernetes.io/projected/e0bd148f-caab-423f-88d5-45392e63775d-kube-api-access-b7k9z\") pod \"nova-api-0\" (UID: 
\"e0bd148f-caab-423f-88d5-45392e63775d\") " pod="openstack/nova-api-0" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.188874 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0bd148f-caab-423f-88d5-45392e63775d-logs\") pod \"nova-api-0\" (UID: \"e0bd148f-caab-423f-88d5-45392e63775d\") " pod="openstack/nova-api-0" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.193469 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0bd148f-caab-423f-88d5-45392e63775d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e0bd148f-caab-423f-88d5-45392e63775d\") " pod="openstack/nova-api-0" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.194067 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0bd148f-caab-423f-88d5-45392e63775d-config-data\") pod \"nova-api-0\" (UID: \"e0bd148f-caab-423f-88d5-45392e63775d\") " pod="openstack/nova-api-0" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.194419 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0bd148f-caab-423f-88d5-45392e63775d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e0bd148f-caab-423f-88d5-45392e63775d\") " pod="openstack/nova-api-0" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.195620 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0bd148f-caab-423f-88d5-45392e63775d-public-tls-certs\") pod \"nova-api-0\" (UID: \"e0bd148f-caab-423f-88d5-45392e63775d\") " pod="openstack/nova-api-0" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.203461 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7k9z\" (UniqueName: 
\"kubernetes.io/projected/e0bd148f-caab-423f-88d5-45392e63775d-kube-api-access-b7k9z\") pod \"nova-api-0\" (UID: \"e0bd148f-caab-423f-88d5-45392e63775d\") " pod="openstack/nova-api-0" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.290959 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be59e894-a929-4498-bee2-cf852ca1ae67-config-data\") pod \"nova-scheduler-0\" (UID: \"be59e894-a929-4498-bee2-cf852ca1ae67\") " pod="openstack/nova-scheduler-0" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.291185 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be59e894-a929-4498-bee2-cf852ca1ae67-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"be59e894-a929-4498-bee2-cf852ca1ae67\") " pod="openstack/nova-scheduler-0" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.291255 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk6nb\" (UniqueName: \"kubernetes.io/projected/be59e894-a929-4498-bee2-cf852ca1ae67-kube-api-access-nk6nb\") pod \"nova-scheduler-0\" (UID: \"be59e894-a929-4498-bee2-cf852ca1ae67\") " pod="openstack/nova-scheduler-0" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.296012 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be59e894-a929-4498-bee2-cf852ca1ae67-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"be59e894-a929-4498-bee2-cf852ca1ae67\") " pod="openstack/nova-scheduler-0" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.296240 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be59e894-a929-4498-bee2-cf852ca1ae67-config-data\") pod \"nova-scheduler-0\" (UID: \"be59e894-a929-4498-bee2-cf852ca1ae67\") " pod="openstack/nova-scheduler-0" 
Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.320414 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk6nb\" (UniqueName: \"kubernetes.io/projected/be59e894-a929-4498-bee2-cf852ca1ae67-kube-api-access-nk6nb\") pod \"nova-scheduler-0\" (UID: \"be59e894-a929-4498-bee2-cf852ca1ae67\") " pod="openstack/nova-scheduler-0" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.355838 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.385774 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.802498 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 07:36:28 crc kubenswrapper[5043]: W1125 07:36:28.810364 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0bd148f_caab_423f_88d5_45392e63775d.slice/crio-fb83e851492a6fa4e98cfb5abbdf8f08d6e78f2eadc6aa63a7d25d4aea37f792 WatchSource:0}: Error finding container fb83e851492a6fa4e98cfb5abbdf8f08d6e78f2eadc6aa63a7d25d4aea37f792: Status 404 returned error can't find the container with id fb83e851492a6fa4e98cfb5abbdf8f08d6e78f2eadc6aa63a7d25d4aea37f792 Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.870234 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 07:36:28 crc kubenswrapper[5043]: W1125 07:36:28.885853 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe59e894_a929_4498_bee2_cf852ca1ae67.slice/crio-33a2ee88d666b5c910536c91f5e3b64162802973e110dc82691242bf8b74fab9 WatchSource:0}: Error finding container 33a2ee88d666b5c910536c91f5e3b64162802973e110dc82691242bf8b74fab9: Status 404 returned error can't 
find the container with id 33a2ee88d666b5c910536c91f5e3b64162802973e110dc82691242bf8b74fab9 Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.954083 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"be59e894-a929-4498-bee2-cf852ca1ae67","Type":"ContainerStarted","Data":"33a2ee88d666b5c910536c91f5e3b64162802973e110dc82691242bf8b74fab9"} Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.959934 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0bd148f-caab-423f-88d5-45392e63775d","Type":"ContainerStarted","Data":"fb83e851492a6fa4e98cfb5abbdf8f08d6e78f2eadc6aa63a7d25d4aea37f792"} Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.981840 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="420a3043-443f-4d3b-97f5-6a0773e329b9" path="/var/lib/kubelet/pods/420a3043-443f-4d3b-97f5-6a0773e329b9/volumes" Nov 25 07:36:28 crc kubenswrapper[5043]: I1125 07:36:28.983407 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aee8182-b123-4afd-aa11-7b987f0e1213" path="/var/lib/kubelet/pods/6aee8182-b123-4afd-aa11-7b987f0e1213/volumes" Nov 25 07:36:29 crc kubenswrapper[5043]: I1125 07:36:29.553211 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6f863d4a-cbec-4cc9-b0e8-968685a1d72a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.183:8775/\": read tcp 10.217.0.2:57282->10.217.0.183:8775: read: connection reset by peer" Nov 25 07:36:29 crc kubenswrapper[5043]: I1125 07:36:29.553229 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6f863d4a-cbec-4cc9-b0e8-968685a1d72a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.183:8775/\": read tcp 10.217.0.2:57284->10.217.0.183:8775: read: connection reset by peer" Nov 25 07:36:29 crc kubenswrapper[5043]: I1125 
07:36:29.975277 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0bd148f-caab-423f-88d5-45392e63775d","Type":"ContainerStarted","Data":"ecb6410c4895e9eceeac7e1ecac8b6c1f6cc11ca2b292d618f5f3bf57d453cd2"} Nov 25 07:36:29 crc kubenswrapper[5043]: I1125 07:36:29.975624 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e0bd148f-caab-423f-88d5-45392e63775d","Type":"ContainerStarted","Data":"0a4e770b43ceeeee56415e75517c8ca2ca99e4ee420d859b9fd087f22db6a768"} Nov 25 07:36:29 crc kubenswrapper[5043]: I1125 07:36:29.977753 5043 generic.go:334] "Generic (PLEG): container finished" podID="6f863d4a-cbec-4cc9-b0e8-968685a1d72a" containerID="822e97f8b985c23cfd845805dff56cc6e65255c7dd660b12a761f110c914e012" exitCode=0 Nov 25 07:36:29 crc kubenswrapper[5043]: I1125 07:36:29.977797 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6f863d4a-cbec-4cc9-b0e8-968685a1d72a","Type":"ContainerDied","Data":"822e97f8b985c23cfd845805dff56cc6e65255c7dd660b12a761f110c914e012"} Nov 25 07:36:29 crc kubenswrapper[5043]: I1125 07:36:29.978877 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"be59e894-a929-4498-bee2-cf852ca1ae67","Type":"ContainerStarted","Data":"650a556633da5eb4b7683f76f258d374d43da7b1459bba20876f6f9c573a5c88"} Nov 25 07:36:30 crc kubenswrapper[5043]: I1125 07:36:30.002132 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.00211438 podStartE2EDuration="3.00211438s" podCreationTimestamp="2025-11-25 07:36:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:36:29.996783406 +0000 UTC m=+1254.164979137" watchObservedRunningTime="2025-11-25 07:36:30.00211438 +0000 UTC m=+1254.170310111" Nov 25 07:36:30 crc kubenswrapper[5043]: 
I1125 07:36:30.019270 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.019252802 podStartE2EDuration="2.019252802s" podCreationTimestamp="2025-11-25 07:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:36:30.016177839 +0000 UTC m=+1254.184373570" watchObservedRunningTime="2025-11-25 07:36:30.019252802 +0000 UTC m=+1254.187448533" Nov 25 07:36:30 crc kubenswrapper[5043]: I1125 07:36:30.074761 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 07:36:30 crc kubenswrapper[5043]: I1125 07:36:30.139511 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-nova-metadata-tls-certs\") pod \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\" (UID: \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\") " Nov 25 07:36:30 crc kubenswrapper[5043]: I1125 07:36:30.139574 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-config-data\") pod \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\" (UID: \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\") " Nov 25 07:36:30 crc kubenswrapper[5043]: I1125 07:36:30.139622 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-combined-ca-bundle\") pod \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\" (UID: \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\") " Nov 25 07:36:30 crc kubenswrapper[5043]: I1125 07:36:30.139782 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-logs\") pod \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\" (UID: \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\") " Nov 25 07:36:30 crc kubenswrapper[5043]: I1125 07:36:30.139809 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpljz\" (UniqueName: \"kubernetes.io/projected/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-kube-api-access-jpljz\") pod \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\" (UID: \"6f863d4a-cbec-4cc9-b0e8-968685a1d72a\") " Nov 25 07:36:30 crc kubenswrapper[5043]: I1125 07:36:30.140166 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-logs" (OuterVolumeSpecName: "logs") pod "6f863d4a-cbec-4cc9-b0e8-968685a1d72a" (UID: "6f863d4a-cbec-4cc9-b0e8-968685a1d72a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:36:30 crc kubenswrapper[5043]: I1125 07:36:30.140268 5043 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-logs\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:30 crc kubenswrapper[5043]: I1125 07:36:30.145237 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-kube-api-access-jpljz" (OuterVolumeSpecName: "kube-api-access-jpljz") pod "6f863d4a-cbec-4cc9-b0e8-968685a1d72a" (UID: "6f863d4a-cbec-4cc9-b0e8-968685a1d72a"). InnerVolumeSpecName "kube-api-access-jpljz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:36:30 crc kubenswrapper[5043]: I1125 07:36:30.183707 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-config-data" (OuterVolumeSpecName: "config-data") pod "6f863d4a-cbec-4cc9-b0e8-968685a1d72a" (UID: "6f863d4a-cbec-4cc9-b0e8-968685a1d72a"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:36:30 crc kubenswrapper[5043]: I1125 07:36:30.184049 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f863d4a-cbec-4cc9-b0e8-968685a1d72a" (UID: "6f863d4a-cbec-4cc9-b0e8-968685a1d72a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:36:30 crc kubenswrapper[5043]: I1125 07:36:30.208823 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "6f863d4a-cbec-4cc9-b0e8-968685a1d72a" (UID: "6f863d4a-cbec-4cc9-b0e8-968685a1d72a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:36:30 crc kubenswrapper[5043]: I1125 07:36:30.242820 5043 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:30 crc kubenswrapper[5043]: I1125 07:36:30.242854 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:30 crc kubenswrapper[5043]: I1125 07:36:30.242865 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:30 crc kubenswrapper[5043]: I1125 07:36:30.242874 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpljz\" (UniqueName: 
\"kubernetes.io/projected/6f863d4a-cbec-4cc9-b0e8-968685a1d72a-kube-api-access-jpljz\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.014720 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6f863d4a-cbec-4cc9-b0e8-968685a1d72a","Type":"ContainerDied","Data":"bd49e4899a491f3eb5e7dc78a53ba6974f4da0c54e3641a6d14d2ed768435fe7"} Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.015387 5043 scope.go:117] "RemoveContainer" containerID="822e97f8b985c23cfd845805dff56cc6e65255c7dd660b12a761f110c914e012" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.014807 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.053789 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.065710 5043 scope.go:117] "RemoveContainer" containerID="4493280d0f8032617e96b938485345ddf242e2e4825000068214fe95f7376a2c" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.074662 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.092141 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:36:31 crc kubenswrapper[5043]: E1125 07:36:31.092804 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f863d4a-cbec-4cc9-b0e8-968685a1d72a" containerName="nova-metadata-log" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.092841 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f863d4a-cbec-4cc9-b0e8-968685a1d72a" containerName="nova-metadata-log" Nov 25 07:36:31 crc kubenswrapper[5043]: E1125 07:36:31.092873 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f863d4a-cbec-4cc9-b0e8-968685a1d72a" 
containerName="nova-metadata-metadata" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.092887 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f863d4a-cbec-4cc9-b0e8-968685a1d72a" containerName="nova-metadata-metadata" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.093216 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f863d4a-cbec-4cc9-b0e8-968685a1d72a" containerName="nova-metadata-metadata" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.093266 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f863d4a-cbec-4cc9-b0e8-968685a1d72a" containerName="nova-metadata-log" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.094819 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.096968 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.100311 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.100792 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.156745 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/990680d0-bb9d-44b9-a67a-2af274498f7c-config-data\") pod \"nova-metadata-0\" (UID: \"990680d0-bb9d-44b9-a67a-2af274498f7c\") " pod="openstack/nova-metadata-0" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.156859 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/990680d0-bb9d-44b9-a67a-2af274498f7c-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"990680d0-bb9d-44b9-a67a-2af274498f7c\") " pod="openstack/nova-metadata-0" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.156892 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990680d0-bb9d-44b9-a67a-2af274498f7c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"990680d0-bb9d-44b9-a67a-2af274498f7c\") " pod="openstack/nova-metadata-0" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.156998 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/990680d0-bb9d-44b9-a67a-2af274498f7c-logs\") pod \"nova-metadata-0\" (UID: \"990680d0-bb9d-44b9-a67a-2af274498f7c\") " pod="openstack/nova-metadata-0" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.157044 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr9bc\" (UniqueName: \"kubernetes.io/projected/990680d0-bb9d-44b9-a67a-2af274498f7c-kube-api-access-lr9bc\") pod \"nova-metadata-0\" (UID: \"990680d0-bb9d-44b9-a67a-2af274498f7c\") " pod="openstack/nova-metadata-0" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.258010 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr9bc\" (UniqueName: \"kubernetes.io/projected/990680d0-bb9d-44b9-a67a-2af274498f7c-kube-api-access-lr9bc\") pod \"nova-metadata-0\" (UID: \"990680d0-bb9d-44b9-a67a-2af274498f7c\") " pod="openstack/nova-metadata-0" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.258346 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/990680d0-bb9d-44b9-a67a-2af274498f7c-config-data\") pod \"nova-metadata-0\" (UID: \"990680d0-bb9d-44b9-a67a-2af274498f7c\") " pod="openstack/nova-metadata-0" Nov 25 07:36:31 crc 
kubenswrapper[5043]: I1125 07:36:31.258465 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/990680d0-bb9d-44b9-a67a-2af274498f7c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"990680d0-bb9d-44b9-a67a-2af274498f7c\") " pod="openstack/nova-metadata-0" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.258538 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990680d0-bb9d-44b9-a67a-2af274498f7c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"990680d0-bb9d-44b9-a67a-2af274498f7c\") " pod="openstack/nova-metadata-0" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.258674 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/990680d0-bb9d-44b9-a67a-2af274498f7c-logs\") pod \"nova-metadata-0\" (UID: \"990680d0-bb9d-44b9-a67a-2af274498f7c\") " pod="openstack/nova-metadata-0" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.259171 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/990680d0-bb9d-44b9-a67a-2af274498f7c-logs\") pod \"nova-metadata-0\" (UID: \"990680d0-bb9d-44b9-a67a-2af274498f7c\") " pod="openstack/nova-metadata-0" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.262597 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/990680d0-bb9d-44b9-a67a-2af274498f7c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"990680d0-bb9d-44b9-a67a-2af274498f7c\") " pod="openstack/nova-metadata-0" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.263474 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/990680d0-bb9d-44b9-a67a-2af274498f7c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"990680d0-bb9d-44b9-a67a-2af274498f7c\") " pod="openstack/nova-metadata-0" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.285459 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/990680d0-bb9d-44b9-a67a-2af274498f7c-config-data\") pod \"nova-metadata-0\" (UID: \"990680d0-bb9d-44b9-a67a-2af274498f7c\") " pod="openstack/nova-metadata-0" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.288056 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr9bc\" (UniqueName: \"kubernetes.io/projected/990680d0-bb9d-44b9-a67a-2af274498f7c-kube-api-access-lr9bc\") pod \"nova-metadata-0\" (UID: \"990680d0-bb9d-44b9-a67a-2af274498f7c\") " pod="openstack/nova-metadata-0" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.418489 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 07:36:31 crc kubenswrapper[5043]: I1125 07:36:31.889009 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 07:36:32 crc kubenswrapper[5043]: I1125 07:36:32.026103 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"990680d0-bb9d-44b9-a67a-2af274498f7c","Type":"ContainerStarted","Data":"412e2c204eba2064007eae665fec752f518716197a3e9b8f1a9aeee9ce25c531"} Nov 25 07:36:32 crc kubenswrapper[5043]: I1125 07:36:32.987565 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f863d4a-cbec-4cc9-b0e8-968685a1d72a" path="/var/lib/kubelet/pods/6f863d4a-cbec-4cc9-b0e8-968685a1d72a/volumes" Nov 25 07:36:33 crc kubenswrapper[5043]: I1125 07:36:33.037211 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"990680d0-bb9d-44b9-a67a-2af274498f7c","Type":"ContainerStarted","Data":"0b7f71688c8e3baa7e59232c3d2378ff46ab4e5ad7bcd32a06275bec974ad4c6"} Nov 25 07:36:33 crc kubenswrapper[5043]: I1125 07:36:33.037253 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"990680d0-bb9d-44b9-a67a-2af274498f7c","Type":"ContainerStarted","Data":"571d688443c4adb3b8d8d776c9449b775bc1b8f7fe6d81fb5690d8b1b376fb7d"} Nov 25 07:36:33 crc kubenswrapper[5043]: I1125 07:36:33.386378 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 07:36:36 crc kubenswrapper[5043]: I1125 07:36:36.419044 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 07:36:36 crc kubenswrapper[5043]: I1125 07:36:36.419621 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 07:36:38 crc kubenswrapper[5043]: I1125 07:36:38.356284 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Nov 25 07:36:38 crc kubenswrapper[5043]: I1125 07:36:38.356679 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 07:36:38 crc kubenswrapper[5043]: I1125 07:36:38.386042 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 07:36:38 crc kubenswrapper[5043]: I1125 07:36:38.427471 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 07:36:38 crc kubenswrapper[5043]: I1125 07:36:38.453378 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=7.453320156 podStartE2EDuration="7.453320156s" podCreationTimestamp="2025-11-25 07:36:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:36:33.080089498 +0000 UTC m=+1257.248285219" watchObservedRunningTime="2025-11-25 07:36:38.453320156 +0000 UTC m=+1262.621515907" Nov 25 07:36:39 crc kubenswrapper[5043]: I1125 07:36:39.158407 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 07:36:39 crc kubenswrapper[5043]: I1125 07:36:39.372740 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e0bd148f-caab-423f-88d5-45392e63775d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.192:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 07:36:39 crc kubenswrapper[5043]: I1125 07:36:39.372822 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e0bd148f-caab-423f-88d5-45392e63775d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.192:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 
07:36:41 crc kubenswrapper[5043]: I1125 07:36:41.419745 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 07:36:41 crc kubenswrapper[5043]: I1125 07:36:41.420178 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 07:36:42 crc kubenswrapper[5043]: I1125 07:36:42.435768 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="990680d0-bb9d-44b9-a67a-2af274498f7c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 07:36:42 crc kubenswrapper[5043]: I1125 07:36:42.435763 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="990680d0-bb9d-44b9-a67a-2af274498f7c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 07:36:43 crc kubenswrapper[5043]: I1125 07:36:43.114519 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.206385 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.224108 5043 generic.go:334] "Generic (PLEG): container finished" podID="fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" containerID="2a351b676547773703c0423766f2051adf7c611ff79575f78ce34b932e02c720" exitCode=137 Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.224147 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb","Type":"ContainerDied","Data":"2a351b676547773703c0423766f2051adf7c611ff79575f78ce34b932e02c720"} Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.224175 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb","Type":"ContainerDied","Data":"3054ac41fa4920f7f671b20e3216cb00fa50e9765a2f76bdfbc19ac23c5e3dfe"} Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.224191 5043 scope.go:117] "RemoveContainer" containerID="2a351b676547773703c0423766f2051adf7c611ff79575f78ce34b932e02c720" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.224190 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.268621 5043 scope.go:117] "RemoveContainer" containerID="f6cc27c2c350894ebf43e8ec49cc712624850c354b8782c5c64d9414af77d155" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.293163 5043 scope.go:117] "RemoveContainer" containerID="12de91fb841a133bbc5663704847c14469dae1d857aae013d54fcaee658ce198" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.301184 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-combined-ca-bundle\") pod \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.301341 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-ceilometer-tls-certs\") pod \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.301396 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-config-data\") pod \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.301815 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-sg-core-conf-yaml\") pod \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.301934 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-scripts\") pod \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.302046 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-run-httpd\") pod \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.302072 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl5tr\" (UniqueName: \"kubernetes.io/projected/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-kube-api-access-xl5tr\") pod \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.302089 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-log-httpd\") pod \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\" (UID: \"fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb\") " Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.305662 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" (UID: "fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.308300 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" (UID: "fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.315111 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-kube-api-access-xl5tr" (OuterVolumeSpecName: "kube-api-access-xl5tr") pod "fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" (UID: "fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb"). InnerVolumeSpecName "kube-api-access-xl5tr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.316343 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-scripts" (OuterVolumeSpecName: "scripts") pod "fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" (UID: "fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.325034 5043 scope.go:117] "RemoveContainer" containerID="b05b615b77b3cd5556c6b9ef83f2fa6a7d2b1f5597abc914d3e8056598677a4a" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.342597 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" (UID: "fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.351306 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" (UID: "fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.365125 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.366469 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.372899 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.375825 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.389091 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" (UID: "fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.404228 5043 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.404363 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xl5tr\" (UniqueName: \"kubernetes.io/projected/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-kube-api-access-xl5tr\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.404672 5043 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.404755 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.404832 5043 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.404909 5043 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.404991 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.412749 5043 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-config-data" (OuterVolumeSpecName: "config-data") pod "fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" (UID: "fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.446310 5043 scope.go:117] "RemoveContainer" containerID="2a351b676547773703c0423766f2051adf7c611ff79575f78ce34b932e02c720" Nov 25 07:36:48 crc kubenswrapper[5043]: E1125 07:36:48.448404 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a351b676547773703c0423766f2051adf7c611ff79575f78ce34b932e02c720\": container with ID starting with 2a351b676547773703c0423766f2051adf7c611ff79575f78ce34b932e02c720 not found: ID does not exist" containerID="2a351b676547773703c0423766f2051adf7c611ff79575f78ce34b932e02c720" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.448566 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a351b676547773703c0423766f2051adf7c611ff79575f78ce34b932e02c720"} err="failed to get container status \"2a351b676547773703c0423766f2051adf7c611ff79575f78ce34b932e02c720\": rpc error: code = NotFound desc = could not find container \"2a351b676547773703c0423766f2051adf7c611ff79575f78ce34b932e02c720\": container with ID starting with 2a351b676547773703c0423766f2051adf7c611ff79575f78ce34b932e02c720 not found: ID does not exist" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.448634 5043 scope.go:117] "RemoveContainer" containerID="f6cc27c2c350894ebf43e8ec49cc712624850c354b8782c5c64d9414af77d155" Nov 25 07:36:48 crc kubenswrapper[5043]: E1125 07:36:48.449051 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f6cc27c2c350894ebf43e8ec49cc712624850c354b8782c5c64d9414af77d155\": container with ID starting with f6cc27c2c350894ebf43e8ec49cc712624850c354b8782c5c64d9414af77d155 not found: ID does not exist" containerID="f6cc27c2c350894ebf43e8ec49cc712624850c354b8782c5c64d9414af77d155" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.449177 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6cc27c2c350894ebf43e8ec49cc712624850c354b8782c5c64d9414af77d155"} err="failed to get container status \"f6cc27c2c350894ebf43e8ec49cc712624850c354b8782c5c64d9414af77d155\": rpc error: code = NotFound desc = could not find container \"f6cc27c2c350894ebf43e8ec49cc712624850c354b8782c5c64d9414af77d155\": container with ID starting with f6cc27c2c350894ebf43e8ec49cc712624850c354b8782c5c64d9414af77d155 not found: ID does not exist" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.449198 5043 scope.go:117] "RemoveContainer" containerID="12de91fb841a133bbc5663704847c14469dae1d857aae013d54fcaee658ce198" Nov 25 07:36:48 crc kubenswrapper[5043]: E1125 07:36:48.449572 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12de91fb841a133bbc5663704847c14469dae1d857aae013d54fcaee658ce198\": container with ID starting with 12de91fb841a133bbc5663704847c14469dae1d857aae013d54fcaee658ce198 not found: ID does not exist" containerID="12de91fb841a133bbc5663704847c14469dae1d857aae013d54fcaee658ce198" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.449717 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12de91fb841a133bbc5663704847c14469dae1d857aae013d54fcaee658ce198"} err="failed to get container status \"12de91fb841a133bbc5663704847c14469dae1d857aae013d54fcaee658ce198\": rpc error: code = NotFound desc = could not find container \"12de91fb841a133bbc5663704847c14469dae1d857aae013d54fcaee658ce198\": container with ID 
starting with 12de91fb841a133bbc5663704847c14469dae1d857aae013d54fcaee658ce198 not found: ID does not exist" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.449740 5043 scope.go:117] "RemoveContainer" containerID="b05b615b77b3cd5556c6b9ef83f2fa6a7d2b1f5597abc914d3e8056598677a4a" Nov 25 07:36:48 crc kubenswrapper[5043]: E1125 07:36:48.450062 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b05b615b77b3cd5556c6b9ef83f2fa6a7d2b1f5597abc914d3e8056598677a4a\": container with ID starting with b05b615b77b3cd5556c6b9ef83f2fa6a7d2b1f5597abc914d3e8056598677a4a not found: ID does not exist" containerID="b05b615b77b3cd5556c6b9ef83f2fa6a7d2b1f5597abc914d3e8056598677a4a" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.450111 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b05b615b77b3cd5556c6b9ef83f2fa6a7d2b1f5597abc914d3e8056598677a4a"} err="failed to get container status \"b05b615b77b3cd5556c6b9ef83f2fa6a7d2b1f5597abc914d3e8056598677a4a\": rpc error: code = NotFound desc = could not find container \"b05b615b77b3cd5556c6b9ef83f2fa6a7d2b1f5597abc914d3e8056598677a4a\": container with ID starting with b05b615b77b3cd5556c6b9ef83f2fa6a7d2b1f5597abc914d3e8056598677a4a not found: ID does not exist" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.507019 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.560823 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.571530 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.586577 5043 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:36:48 crc kubenswrapper[5043]: E1125 07:36:48.587203 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" containerName="ceilometer-central-agent" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.587274 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" containerName="ceilometer-central-agent" Nov 25 07:36:48 crc kubenswrapper[5043]: E1125 07:36:48.587345 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" containerName="sg-core" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.587406 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" containerName="sg-core" Nov 25 07:36:48 crc kubenswrapper[5043]: E1125 07:36:48.587470 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" containerName="ceilometer-notification-agent" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.587527 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" containerName="ceilometer-notification-agent" Nov 25 07:36:48 crc kubenswrapper[5043]: E1125 07:36:48.587616 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" containerName="proxy-httpd" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.587694 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" containerName="proxy-httpd" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.587945 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" containerName="proxy-httpd" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.588014 5043 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" containerName="sg-core" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.588074 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" containerName="ceilometer-notification-agent" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.588138 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" containerName="ceilometer-central-agent" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.589933 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.592806 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.593250 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.594148 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.651947 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.753852 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.754416 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.754511 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.754578 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08930914-949f-403c-800a-88f0cda8fbd4-log-httpd\") pod \"ceilometer-0\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.754628 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-456zd\" (UniqueName: \"kubernetes.io/projected/08930914-949f-403c-800a-88f0cda8fbd4-kube-api-access-456zd\") pod \"ceilometer-0\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.754654 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-config-data\") pod \"ceilometer-0\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.754782 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-scripts\") pod \"ceilometer-0\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: 
I1125 07:36:48.754829 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08930914-949f-403c-800a-88f0cda8fbd4-run-httpd\") pod \"ceilometer-0\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.857529 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.857642 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.857686 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.857742 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08930914-949f-403c-800a-88f0cda8fbd4-log-httpd\") pod \"ceilometer-0\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.857782 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-456zd\" (UniqueName: 
\"kubernetes.io/projected/08930914-949f-403c-800a-88f0cda8fbd4-kube-api-access-456zd\") pod \"ceilometer-0\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.857817 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-config-data\") pod \"ceilometer-0\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.857908 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-scripts\") pod \"ceilometer-0\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.857955 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08930914-949f-403c-800a-88f0cda8fbd4-run-httpd\") pod \"ceilometer-0\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.858833 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08930914-949f-403c-800a-88f0cda8fbd4-run-httpd\") pod \"ceilometer-0\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.865072 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.865256 5043 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.866485 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08930914-949f-403c-800a-88f0cda8fbd4-log-httpd\") pod \"ceilometer-0\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.869385 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.870820 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-config-data\") pod \"ceilometer-0\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.873920 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-scripts\") pod \"ceilometer-0\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.894784 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-456zd\" (UniqueName: \"kubernetes.io/projected/08930914-949f-403c-800a-88f0cda8fbd4-kube-api-access-456zd\") pod \"ceilometer-0\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " 
pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.962193 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 07:36:48 crc kubenswrapper[5043]: I1125 07:36:48.978090 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb" path="/var/lib/kubelet/pods/fb2b4f4c-68b2-4f0d-9fd2-429148fa7bfb/volumes" Nov 25 07:36:49 crc kubenswrapper[5043]: I1125 07:36:49.234201 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 07:36:49 crc kubenswrapper[5043]: I1125 07:36:49.246623 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 07:36:49 crc kubenswrapper[5043]: I1125 07:36:49.501947 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 07:36:49 crc kubenswrapper[5043]: I1125 07:36:49.505016 5043 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 07:36:50 crc kubenswrapper[5043]: I1125 07:36:50.249056 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08930914-949f-403c-800a-88f0cda8fbd4","Type":"ContainerStarted","Data":"c7c53b92abb24b746a4764737cd42c07529490a9ecb8d26daab5c5c2d8a2a9dd"} Nov 25 07:36:50 crc kubenswrapper[5043]: I1125 07:36:50.249478 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08930914-949f-403c-800a-88f0cda8fbd4","Type":"ContainerStarted","Data":"609b9ded574cc611428bd67fb02b17c90f7e513c08292239f499433887f6a940"} Nov 25 07:36:51 crc kubenswrapper[5043]: I1125 07:36:51.432969 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 07:36:51 crc kubenswrapper[5043]: I1125 07:36:51.434224 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-metadata-0" Nov 25 07:36:51 crc kubenswrapper[5043]: I1125 07:36:51.441568 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 07:36:51 crc kubenswrapper[5043]: I1125 07:36:51.442109 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 07:36:52 crc kubenswrapper[5043]: I1125 07:36:52.277490 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08930914-949f-403c-800a-88f0cda8fbd4","Type":"ContainerStarted","Data":"74fd965156c8c4f06579aeed4b010088dc6cfc2d206aaab314ce30a99a60c818"} Nov 25 07:36:52 crc kubenswrapper[5043]: I1125 07:36:52.277822 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08930914-949f-403c-800a-88f0cda8fbd4","Type":"ContainerStarted","Data":"01dd3438fa449687359359e2d37728d77ef801d1f9dcdc9aeed28fe86dc69936"} Nov 25 07:36:54 crc kubenswrapper[5043]: I1125 07:36:54.303924 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08930914-949f-403c-800a-88f0cda8fbd4","Type":"ContainerStarted","Data":"254b1c90ca8a6ffad4232f3f172e63847e0aa419554d01103332693d71a3b059"} Nov 25 07:36:54 crc kubenswrapper[5043]: I1125 07:36:54.304374 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 07:36:54 crc kubenswrapper[5043]: I1125 07:36:54.359385 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.418935887 podStartE2EDuration="6.359357302s" podCreationTimestamp="2025-11-25 07:36:48 +0000 UTC" firstStartedPulling="2025-11-25 07:36:49.504740734 +0000 UTC m=+1273.672936455" lastFinishedPulling="2025-11-25 07:36:53.445162139 +0000 UTC m=+1277.613357870" observedRunningTime="2025-11-25 07:36:54.333451983 +0000 UTC m=+1278.501647734" watchObservedRunningTime="2025-11-25 
07:36:54.359357302 +0000 UTC m=+1278.527553053" Nov 25 07:37:17 crc kubenswrapper[5043]: I1125 07:37:17.276114 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:37:17 crc kubenswrapper[5043]: I1125 07:37:17.277071 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:37:18 crc kubenswrapper[5043]: I1125 07:37:18.976655 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 07:37:28 crc kubenswrapper[5043]: I1125 07:37:28.602984 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 07:37:30 crc kubenswrapper[5043]: I1125 07:37:30.210025 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 07:37:32 crc kubenswrapper[5043]: I1125 07:37:32.820385 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="5f4796f0-ec1b-4f62-bdad-9927841c80db" containerName="rabbitmq" containerID="cri-o://f5f36721d34be995bcec3b093c487cabebec92fb14219129f9b74345b3956dcf" gracePeriod=604796 Nov 25 07:37:34 crc kubenswrapper[5043]: I1125 07:37:34.979541 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="d61213dd-2002-44b6-8904-21c0a754ae66" containerName="rabbitmq" containerID="cri-o://c86d1981443974107f4b4f467115364e3967b5166cbd2571544390fa87322973" gracePeriod=604796 Nov 25 07:37:35 crc 
kubenswrapper[5043]: I1125 07:37:35.356144 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="5f4796f0-ec1b-4f62-bdad-9927841c80db" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Nov 25 07:37:35 crc kubenswrapper[5043]: I1125 07:37:35.645042 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d61213dd-2002-44b6-8904-21c0a754ae66" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.431077 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.575980 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5f4796f0-ec1b-4f62-bdad-9927841c80db-rabbitmq-tls\") pod \"5f4796f0-ec1b-4f62-bdad-9927841c80db\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.576433 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwh84\" (UniqueName: \"kubernetes.io/projected/5f4796f0-ec1b-4f62-bdad-9927841c80db-kube-api-access-mwh84\") pod \"5f4796f0-ec1b-4f62-bdad-9927841c80db\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.576488 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5f4796f0-ec1b-4f62-bdad-9927841c80db-pod-info\") pod \"5f4796f0-ec1b-4f62-bdad-9927841c80db\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.576596 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5f4796f0-ec1b-4f62-bdad-9927841c80db-plugins-conf\") pod \"5f4796f0-ec1b-4f62-bdad-9927841c80db\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.576675 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5f4796f0-ec1b-4f62-bdad-9927841c80db-rabbitmq-plugins\") pod \"5f4796f0-ec1b-4f62-bdad-9927841c80db\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.576715 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5f4796f0-ec1b-4f62-bdad-9927841c80db-rabbitmq-erlang-cookie\") pod \"5f4796f0-ec1b-4f62-bdad-9927841c80db\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.576805 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5f4796f0-ec1b-4f62-bdad-9927841c80db-erlang-cookie-secret\") pod \"5f4796f0-ec1b-4f62-bdad-9927841c80db\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.576843 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5f4796f0-ec1b-4f62-bdad-9927841c80db-rabbitmq-confd\") pod \"5f4796f0-ec1b-4f62-bdad-9927841c80db\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.576876 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5f4796f0-ec1b-4f62-bdad-9927841c80db-server-conf\") pod \"5f4796f0-ec1b-4f62-bdad-9927841c80db\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " 
Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.576906 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f4796f0-ec1b-4f62-bdad-9927841c80db-config-data\") pod \"5f4796f0-ec1b-4f62-bdad-9927841c80db\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.577079 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"5f4796f0-ec1b-4f62-bdad-9927841c80db\" (UID: \"5f4796f0-ec1b-4f62-bdad-9927841c80db\") " Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.577221 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f4796f0-ec1b-4f62-bdad-9927841c80db-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "5f4796f0-ec1b-4f62-bdad-9927841c80db" (UID: "5f4796f0-ec1b-4f62-bdad-9927841c80db"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.577384 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f4796f0-ec1b-4f62-bdad-9927841c80db-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "5f4796f0-ec1b-4f62-bdad-9927841c80db" (UID: "5f4796f0-ec1b-4f62-bdad-9927841c80db"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.577523 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f4796f0-ec1b-4f62-bdad-9927841c80db-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "5f4796f0-ec1b-4f62-bdad-9927841c80db" (UID: "5f4796f0-ec1b-4f62-bdad-9927841c80db"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.578150 5043 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5f4796f0-ec1b-4f62-bdad-9927841c80db-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.578175 5043 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5f4796f0-ec1b-4f62-bdad-9927841c80db-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.578192 5043 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5f4796f0-ec1b-4f62-bdad-9927841c80db-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.582457 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/5f4796f0-ec1b-4f62-bdad-9927841c80db-pod-info" (OuterVolumeSpecName: "pod-info") pod "5f4796f0-ec1b-4f62-bdad-9927841c80db" (UID: "5f4796f0-ec1b-4f62-bdad-9927841c80db"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.583003 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "5f4796f0-ec1b-4f62-bdad-9927841c80db" (UID: "5f4796f0-ec1b-4f62-bdad-9927841c80db"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.583779 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f4796f0-ec1b-4f62-bdad-9927841c80db-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "5f4796f0-ec1b-4f62-bdad-9927841c80db" (UID: "5f4796f0-ec1b-4f62-bdad-9927841c80db"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.595322 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f4796f0-ec1b-4f62-bdad-9927841c80db-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "5f4796f0-ec1b-4f62-bdad-9927841c80db" (UID: "5f4796f0-ec1b-4f62-bdad-9927841c80db"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.595585 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f4796f0-ec1b-4f62-bdad-9927841c80db-kube-api-access-mwh84" (OuterVolumeSpecName: "kube-api-access-mwh84") pod "5f4796f0-ec1b-4f62-bdad-9927841c80db" (UID: "5f4796f0-ec1b-4f62-bdad-9927841c80db"). InnerVolumeSpecName "kube-api-access-mwh84". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.605955 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f4796f0-ec1b-4f62-bdad-9927841c80db-config-data" (OuterVolumeSpecName: "config-data") pod "5f4796f0-ec1b-4f62-bdad-9927841c80db" (UID: "5f4796f0-ec1b-4f62-bdad-9927841c80db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.651374 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f4796f0-ec1b-4f62-bdad-9927841c80db-server-conf" (OuterVolumeSpecName: "server-conf") pod "5f4796f0-ec1b-4f62-bdad-9927841c80db" (UID: "5f4796f0-ec1b-4f62-bdad-9927841c80db"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.680111 5043 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.680142 5043 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5f4796f0-ec1b-4f62-bdad-9927841c80db-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.680153 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwh84\" (UniqueName: \"kubernetes.io/projected/5f4796f0-ec1b-4f62-bdad-9927841c80db-kube-api-access-mwh84\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.680163 5043 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5f4796f0-ec1b-4f62-bdad-9927841c80db-pod-info\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.680174 5043 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5f4796f0-ec1b-4f62-bdad-9927841c80db-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.680184 5043 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/5f4796f0-ec1b-4f62-bdad-9927841c80db-server-conf\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.680194 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f4796f0-ec1b-4f62-bdad-9927841c80db-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.692410 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f4796f0-ec1b-4f62-bdad-9927841c80db-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "5f4796f0-ec1b-4f62-bdad-9927841c80db" (UID: "5f4796f0-ec1b-4f62-bdad-9927841c80db"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.699508 5043 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.771788 5043 generic.go:334] "Generic (PLEG): container finished" podID="5f4796f0-ec1b-4f62-bdad-9927841c80db" containerID="f5f36721d34be995bcec3b093c487cabebec92fb14219129f9b74345b3956dcf" exitCode=0 Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.771833 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.771837 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5f4796f0-ec1b-4f62-bdad-9927841c80db","Type":"ContainerDied","Data":"f5f36721d34be995bcec3b093c487cabebec92fb14219129f9b74345b3956dcf"} Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.771894 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"5f4796f0-ec1b-4f62-bdad-9927841c80db","Type":"ContainerDied","Data":"3be916abebf188127e2f0f68992f57c060d6aa909f912af80b15209bffc7384c"} Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.771914 5043 scope.go:117] "RemoveContainer" containerID="f5f36721d34be995bcec3b093c487cabebec92fb14219129f9b74345b3956dcf" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.782586 5043 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.782833 5043 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5f4796f0-ec1b-4f62-bdad-9927841c80db-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.808781 5043 scope.go:117] "RemoveContainer" containerID="70089bdd1b0f795e87c83919ae24ce5252b461dfa8f392e8b428fab83c5a3a9b" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.823521 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.833685 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.844975 5043 scope.go:117] "RemoveContainer" 
containerID="f5f36721d34be995bcec3b093c487cabebec92fb14219129f9b74345b3956dcf" Nov 25 07:37:39 crc kubenswrapper[5043]: E1125 07:37:39.845528 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5f36721d34be995bcec3b093c487cabebec92fb14219129f9b74345b3956dcf\": container with ID starting with f5f36721d34be995bcec3b093c487cabebec92fb14219129f9b74345b3956dcf not found: ID does not exist" containerID="f5f36721d34be995bcec3b093c487cabebec92fb14219129f9b74345b3956dcf" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.845586 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5f36721d34be995bcec3b093c487cabebec92fb14219129f9b74345b3956dcf"} err="failed to get container status \"f5f36721d34be995bcec3b093c487cabebec92fb14219129f9b74345b3956dcf\": rpc error: code = NotFound desc = could not find container \"f5f36721d34be995bcec3b093c487cabebec92fb14219129f9b74345b3956dcf\": container with ID starting with f5f36721d34be995bcec3b093c487cabebec92fb14219129f9b74345b3956dcf not found: ID does not exist" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.845635 5043 scope.go:117] "RemoveContainer" containerID="70089bdd1b0f795e87c83919ae24ce5252b461dfa8f392e8b428fab83c5a3a9b" Nov 25 07:37:39 crc kubenswrapper[5043]: E1125 07:37:39.846174 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70089bdd1b0f795e87c83919ae24ce5252b461dfa8f392e8b428fab83c5a3a9b\": container with ID starting with 70089bdd1b0f795e87c83919ae24ce5252b461dfa8f392e8b428fab83c5a3a9b not found: ID does not exist" containerID="70089bdd1b0f795e87c83919ae24ce5252b461dfa8f392e8b428fab83c5a3a9b" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.846205 5043 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"70089bdd1b0f795e87c83919ae24ce5252b461dfa8f392e8b428fab83c5a3a9b"} err="failed to get container status \"70089bdd1b0f795e87c83919ae24ce5252b461dfa8f392e8b428fab83c5a3a9b\": rpc error: code = NotFound desc = could not find container \"70089bdd1b0f795e87c83919ae24ce5252b461dfa8f392e8b428fab83c5a3a9b\": container with ID starting with 70089bdd1b0f795e87c83919ae24ce5252b461dfa8f392e8b428fab83c5a3a9b not found: ID does not exist" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.854723 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 07:37:39 crc kubenswrapper[5043]: E1125 07:37:39.855253 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f4796f0-ec1b-4f62-bdad-9927841c80db" containerName="setup-container" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.855274 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f4796f0-ec1b-4f62-bdad-9927841c80db" containerName="setup-container" Nov 25 07:37:39 crc kubenswrapper[5043]: E1125 07:37:39.855291 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f4796f0-ec1b-4f62-bdad-9927841c80db" containerName="rabbitmq" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.855299 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f4796f0-ec1b-4f62-bdad-9927841c80db" containerName="rabbitmq" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.855649 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f4796f0-ec1b-4f62-bdad-9927841c80db" containerName="rabbitmq" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.859309 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.864943 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.865211 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.865352 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.865494 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.865739 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.865950 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-67cq4" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.866857 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.870802 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.986222 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.986648 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.986713 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.986815 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-config-data\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.986885 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rchx4\" (UniqueName: \"kubernetes.io/projected/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-kube-api-access-rchx4\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.986921 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.986990 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.987027 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.987124 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-pod-info\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.987218 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:39 crc kubenswrapper[5043]: I1125 07:37:39.987323 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.089099 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.089193 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.089282 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.089327 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.089372 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.089499 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-config-data\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.089530 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rchx4\" (UniqueName: \"kubernetes.io/projected/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-kube-api-access-rchx4\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.089552 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.089584 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-server-conf\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.089625 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.089655 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-pod-info\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.090590 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.091425 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.092283 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-config-data\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.093145 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.094697 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-server-conf\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.094870 5043 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") device mount path \"/mnt/openstack/pv01\"" 
pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.100089 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.100400 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.100553 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-pod-info\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.103885 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.109878 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rchx4\" (UniqueName: \"kubernetes.io/projected/edfb7fa8-5582-4faa-9cb2-fbdfffa12d18-kube-api-access-rchx4\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.130889 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18\") " pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.181339 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.735399 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 07:37:40 crc kubenswrapper[5043]: I1125 07:37:40.977632 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f4796f0-ec1b-4f62-bdad-9927841c80db" path="/var/lib/kubelet/pods/5f4796f0-ec1b-4f62-bdad-9927841c80db/volumes" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.589753 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.725645 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d61213dd-2002-44b6-8904-21c0a754ae66-rabbitmq-tls\") pod \"d61213dd-2002-44b6-8904-21c0a754ae66\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.725709 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d61213dd-2002-44b6-8904-21c0a754ae66-rabbitmq-erlang-cookie\") pod \"d61213dd-2002-44b6-8904-21c0a754ae66\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.725780 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d61213dd-2002-44b6-8904-21c0a754ae66-pod-info\") pod \"d61213dd-2002-44b6-8904-21c0a754ae66\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") 
" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.725805 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d61213dd-2002-44b6-8904-21c0a754ae66-erlang-cookie-secret\") pod \"d61213dd-2002-44b6-8904-21c0a754ae66\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.725842 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d61213dd-2002-44b6-8904-21c0a754ae66-rabbitmq-confd\") pod \"d61213dd-2002-44b6-8904-21c0a754ae66\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.725893 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8qjv\" (UniqueName: \"kubernetes.io/projected/d61213dd-2002-44b6-8904-21c0a754ae66-kube-api-access-s8qjv\") pod \"d61213dd-2002-44b6-8904-21c0a754ae66\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.725947 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d61213dd-2002-44b6-8904-21c0a754ae66-rabbitmq-plugins\") pod \"d61213dd-2002-44b6-8904-21c0a754ae66\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.725972 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d61213dd-2002-44b6-8904-21c0a754ae66-config-data\") pod \"d61213dd-2002-44b6-8904-21c0a754ae66\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.725994 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/d61213dd-2002-44b6-8904-21c0a754ae66-plugins-conf\") pod \"d61213dd-2002-44b6-8904-21c0a754ae66\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.726014 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d61213dd-2002-44b6-8904-21c0a754ae66-server-conf\") pod \"d61213dd-2002-44b6-8904-21c0a754ae66\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.726035 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"d61213dd-2002-44b6-8904-21c0a754ae66\" (UID: \"d61213dd-2002-44b6-8904-21c0a754ae66\") " Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.726758 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d61213dd-2002-44b6-8904-21c0a754ae66-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d61213dd-2002-44b6-8904-21c0a754ae66" (UID: "d61213dd-2002-44b6-8904-21c0a754ae66"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.726852 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d61213dd-2002-44b6-8904-21c0a754ae66-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d61213dd-2002-44b6-8904-21c0a754ae66" (UID: "d61213dd-2002-44b6-8904-21c0a754ae66"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.727165 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d61213dd-2002-44b6-8904-21c0a754ae66-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d61213dd-2002-44b6-8904-21c0a754ae66" (UID: "d61213dd-2002-44b6-8904-21c0a754ae66"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.731400 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d61213dd-2002-44b6-8904-21c0a754ae66-kube-api-access-s8qjv" (OuterVolumeSpecName: "kube-api-access-s8qjv") pod "d61213dd-2002-44b6-8904-21c0a754ae66" (UID: "d61213dd-2002-44b6-8904-21c0a754ae66"). InnerVolumeSpecName "kube-api-access-s8qjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.731672 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d61213dd-2002-44b6-8904-21c0a754ae66-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d61213dd-2002-44b6-8904-21c0a754ae66" (UID: "d61213dd-2002-44b6-8904-21c0a754ae66"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.731967 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "d61213dd-2002-44b6-8904-21c0a754ae66" (UID: "d61213dd-2002-44b6-8904-21c0a754ae66"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.742020 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d61213dd-2002-44b6-8904-21c0a754ae66-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d61213dd-2002-44b6-8904-21c0a754ae66" (UID: "d61213dd-2002-44b6-8904-21c0a754ae66"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.742854 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d61213dd-2002-44b6-8904-21c0a754ae66-pod-info" (OuterVolumeSpecName: "pod-info") pod "d61213dd-2002-44b6-8904-21c0a754ae66" (UID: "d61213dd-2002-44b6-8904-21c0a754ae66"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.760399 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d61213dd-2002-44b6-8904-21c0a754ae66-config-data" (OuterVolumeSpecName: "config-data") pod "d61213dd-2002-44b6-8904-21c0a754ae66" (UID: "d61213dd-2002-44b6-8904-21c0a754ae66"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.815237 5043 generic.go:334] "Generic (PLEG): container finished" podID="d61213dd-2002-44b6-8904-21c0a754ae66" containerID="c86d1981443974107f4b4f467115364e3967b5166cbd2571544390fa87322973" exitCode=0 Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.815311 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d61213dd-2002-44b6-8904-21c0a754ae66","Type":"ContainerDied","Data":"c86d1981443974107f4b4f467115364e3967b5166cbd2571544390fa87322973"} Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.815339 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d61213dd-2002-44b6-8904-21c0a754ae66","Type":"ContainerDied","Data":"c4aecb1b8b6e8eb92ce822b4563ba79c5fcfde9840c43c87e738769a9e5b8f5b"} Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.815383 5043 scope.go:117] "RemoveContainer" containerID="c86d1981443974107f4b4f467115364e3967b5166cbd2571544390fa87322973" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.815581 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.816500 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18","Type":"ContainerStarted","Data":"01c33d839201abb7c805bf4d6c9014f8f53e0ece840d149862e547aff06d1f76"} Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.828325 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8qjv\" (UniqueName: \"kubernetes.io/projected/d61213dd-2002-44b6-8904-21c0a754ae66-kube-api-access-s8qjv\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.828361 5043 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d61213dd-2002-44b6-8904-21c0a754ae66-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.828371 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d61213dd-2002-44b6-8904-21c0a754ae66-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.828380 5043 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d61213dd-2002-44b6-8904-21c0a754ae66-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.828410 5043 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.828419 5043 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d61213dd-2002-44b6-8904-21c0a754ae66-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:41 crc 
kubenswrapper[5043]: I1125 07:37:41.828430 5043 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d61213dd-2002-44b6-8904-21c0a754ae66-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.828438 5043 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d61213dd-2002-44b6-8904-21c0a754ae66-pod-info\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.828446 5043 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d61213dd-2002-44b6-8904-21c0a754ae66-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.843787 5043 scope.go:117] "RemoveContainer" containerID="c28319fb13ea4ff76aa875432a39af443bf02c9985bfe22b166b3cfde0e83ea8" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.852317 5043 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.868880 5043 scope.go:117] "RemoveContainer" containerID="c86d1981443974107f4b4f467115364e3967b5166cbd2571544390fa87322973" Nov 25 07:37:41 crc kubenswrapper[5043]: E1125 07:37:41.869374 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c86d1981443974107f4b4f467115364e3967b5166cbd2571544390fa87322973\": container with ID starting with c86d1981443974107f4b4f467115364e3967b5166cbd2571544390fa87322973 not found: ID does not exist" containerID="c86d1981443974107f4b4f467115364e3967b5166cbd2571544390fa87322973" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.869462 5043 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c86d1981443974107f4b4f467115364e3967b5166cbd2571544390fa87322973"} err="failed to get container status \"c86d1981443974107f4b4f467115364e3967b5166cbd2571544390fa87322973\": rpc error: code = NotFound desc = could not find container \"c86d1981443974107f4b4f467115364e3967b5166cbd2571544390fa87322973\": container with ID starting with c86d1981443974107f4b4f467115364e3967b5166cbd2571544390fa87322973 not found: ID does not exist" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.869539 5043 scope.go:117] "RemoveContainer" containerID="c28319fb13ea4ff76aa875432a39af443bf02c9985bfe22b166b3cfde0e83ea8" Nov 25 07:37:41 crc kubenswrapper[5043]: E1125 07:37:41.869990 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c28319fb13ea4ff76aa875432a39af443bf02c9985bfe22b166b3cfde0e83ea8\": container with ID starting with c28319fb13ea4ff76aa875432a39af443bf02c9985bfe22b166b3cfde0e83ea8 not found: ID does not exist" containerID="c28319fb13ea4ff76aa875432a39af443bf02c9985bfe22b166b3cfde0e83ea8" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.870038 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28319fb13ea4ff76aa875432a39af443bf02c9985bfe22b166b3cfde0e83ea8"} err="failed to get container status \"c28319fb13ea4ff76aa875432a39af443bf02c9985bfe22b166b3cfde0e83ea8\": rpc error: code = NotFound desc = could not find container \"c28319fb13ea4ff76aa875432a39af443bf02c9985bfe22b166b3cfde0e83ea8\": container with ID starting with c28319fb13ea4ff76aa875432a39af443bf02c9985bfe22b166b3cfde0e83ea8 not found: ID does not exist" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.876408 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d61213dd-2002-44b6-8904-21c0a754ae66-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod 
"d61213dd-2002-44b6-8904-21c0a754ae66" (UID: "d61213dd-2002-44b6-8904-21c0a754ae66"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.930291 5043 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.930635 5043 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d61213dd-2002-44b6-8904-21c0a754ae66-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:41 crc kubenswrapper[5043]: I1125 07:37:41.980773 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d61213dd-2002-44b6-8904-21c0a754ae66-server-conf" (OuterVolumeSpecName: "server-conf") pod "d61213dd-2002-44b6-8904-21c0a754ae66" (UID: "d61213dd-2002-44b6-8904-21c0a754ae66"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.032389 5043 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d61213dd-2002-44b6-8904-21c0a754ae66-server-conf\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.376913 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.386576 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.413963 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 07:37:42 crc kubenswrapper[5043]: E1125 07:37:42.414405 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d61213dd-2002-44b6-8904-21c0a754ae66" containerName="rabbitmq" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.414427 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="d61213dd-2002-44b6-8904-21c0a754ae66" containerName="rabbitmq" Nov 25 07:37:42 crc kubenswrapper[5043]: E1125 07:37:42.414445 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d61213dd-2002-44b6-8904-21c0a754ae66" containerName="setup-container" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.414454 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="d61213dd-2002-44b6-8904-21c0a754ae66" containerName="setup-container" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.414754 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="d61213dd-2002-44b6-8904-21c0a754ae66" containerName="rabbitmq" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.416320 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.419209 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.419430 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.419686 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.420786 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.421017 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.423122 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.423279 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rkxqg" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.431040 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.542587 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.542881 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.542936 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb926\" (UniqueName: \"kubernetes.io/projected/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-kube-api-access-xb926\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.543010 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.543038 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.543161 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.543354 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.543449 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.543580 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.543723 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.543781 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.645465 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.645501 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.645521 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.645558 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.645576 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.645624 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.645647 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.645668 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.645696 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.645729 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.645762 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb926\" (UniqueName: \"kubernetes.io/projected/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-kube-api-access-xb926\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.646398 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.646483 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.647009 5043 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.647496 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.647649 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.648836 5043 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.650679 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.650691 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.653354 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.658514 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.664565 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb926\" (UniqueName: 
\"kubernetes.io/projected/96b0381f-3d56-49b8-8a21-0b8c1bd593c2-kube-api-access-xb926\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.678162 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"96b0381f-3d56-49b8-8a21-0b8c1bd593c2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.743404 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.837938 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18","Type":"ContainerStarted","Data":"2f48881c3a1a0c2fbdde344c4d0b18fb200c376ef2bca6d7315d7b1d276e6d82"} Nov 25 07:37:42 crc kubenswrapper[5043]: I1125 07:37:42.978951 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d61213dd-2002-44b6-8904-21c0a754ae66" path="/var/lib/kubelet/pods/d61213dd-2002-44b6-8904-21c0a754ae66/volumes" Nov 25 07:37:43 crc kubenswrapper[5043]: I1125 07:37:43.241823 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 07:37:43 crc kubenswrapper[5043]: W1125 07:37:43.248912 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96b0381f_3d56_49b8_8a21_0b8c1bd593c2.slice/crio-237533709da3c7bb9c0176a4fab64b96a3e1a60e4aa9ade578490ffbe09a2cf6 WatchSource:0}: Error finding container 237533709da3c7bb9c0176a4fab64b96a3e1a60e4aa9ade578490ffbe09a2cf6: Status 404 returned error can't find the container with id 237533709da3c7bb9c0176a4fab64b96a3e1a60e4aa9ade578490ffbe09a2cf6 Nov 25 
07:37:43 crc kubenswrapper[5043]: I1125 07:37:43.849255 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"96b0381f-3d56-49b8-8a21-0b8c1bd593c2","Type":"ContainerStarted","Data":"237533709da3c7bb9c0176a4fab64b96a3e1a60e4aa9ade578490ffbe09a2cf6"} Nov 25 07:37:45 crc kubenswrapper[5043]: I1125 07:37:45.866702 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"96b0381f-3d56-49b8-8a21-0b8c1bd593c2","Type":"ContainerStarted","Data":"8fa0024e6e5a5b9831b8c586deb788a2a1fc579581bb789d71e6056eabf171a7"} Nov 25 07:37:46 crc kubenswrapper[5043]: I1125 07:37:46.139194 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64b6dd64c5-hrsvt"] Nov 25 07:37:46 crc kubenswrapper[5043]: I1125 07:37:46.140587 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" Nov 25 07:37:46 crc kubenswrapper[5043]: I1125 07:37:46.144095 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 25 07:37:46 crc kubenswrapper[5043]: I1125 07:37:46.162818 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64b6dd64c5-hrsvt"] Nov 25 07:37:46 crc kubenswrapper[5043]: I1125 07:37:46.218831 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz9s6\" (UniqueName: \"kubernetes.io/projected/66c1eed8-8f68-451c-b6a1-c22640a73999-kube-api-access-gz9s6\") pod \"dnsmasq-dns-64b6dd64c5-hrsvt\" (UID: \"66c1eed8-8f68-451c-b6a1-c22640a73999\") " pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" Nov 25 07:37:46 crc kubenswrapper[5043]: I1125 07:37:46.218914 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-ovsdbserver-sb\") pod 
\"dnsmasq-dns-64b6dd64c5-hrsvt\" (UID: \"66c1eed8-8f68-451c-b6a1-c22640a73999\") " pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" Nov 25 07:37:46 crc kubenswrapper[5043]: I1125 07:37:46.218945 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-dns-svc\") pod \"dnsmasq-dns-64b6dd64c5-hrsvt\" (UID: \"66c1eed8-8f68-451c-b6a1-c22640a73999\") " pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" Nov 25 07:37:46 crc kubenswrapper[5043]: I1125 07:37:46.218983 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-ovsdbserver-nb\") pod \"dnsmasq-dns-64b6dd64c5-hrsvt\" (UID: \"66c1eed8-8f68-451c-b6a1-c22640a73999\") " pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" Nov 25 07:37:46 crc kubenswrapper[5043]: I1125 07:37:46.219070 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-openstack-edpm-ipam\") pod \"dnsmasq-dns-64b6dd64c5-hrsvt\" (UID: \"66c1eed8-8f68-451c-b6a1-c22640a73999\") " pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" Nov 25 07:37:46 crc kubenswrapper[5043]: I1125 07:37:46.219145 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-config\") pod \"dnsmasq-dns-64b6dd64c5-hrsvt\" (UID: \"66c1eed8-8f68-451c-b6a1-c22640a73999\") " pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" Nov 25 07:37:46 crc kubenswrapper[5043]: I1125 07:37:46.320396 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-openstack-edpm-ipam\") pod \"dnsmasq-dns-64b6dd64c5-hrsvt\" (UID: \"66c1eed8-8f68-451c-b6a1-c22640a73999\") " pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" Nov 25 07:37:46 crc kubenswrapper[5043]: I1125 07:37:46.320478 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-config\") pod \"dnsmasq-dns-64b6dd64c5-hrsvt\" (UID: \"66c1eed8-8f68-451c-b6a1-c22640a73999\") " pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" Nov 25 07:37:46 crc kubenswrapper[5043]: I1125 07:37:46.320546 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz9s6\" (UniqueName: \"kubernetes.io/projected/66c1eed8-8f68-451c-b6a1-c22640a73999-kube-api-access-gz9s6\") pod \"dnsmasq-dns-64b6dd64c5-hrsvt\" (UID: \"66c1eed8-8f68-451c-b6a1-c22640a73999\") " pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" Nov 25 07:37:46 crc kubenswrapper[5043]: I1125 07:37:46.320575 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-ovsdbserver-sb\") pod \"dnsmasq-dns-64b6dd64c5-hrsvt\" (UID: \"66c1eed8-8f68-451c-b6a1-c22640a73999\") " pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" Nov 25 07:37:46 crc kubenswrapper[5043]: I1125 07:37:46.320594 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-dns-svc\") pod \"dnsmasq-dns-64b6dd64c5-hrsvt\" (UID: \"66c1eed8-8f68-451c-b6a1-c22640a73999\") " pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" Nov 25 07:37:46 crc kubenswrapper[5043]: I1125 07:37:46.320699 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-ovsdbserver-nb\") pod \"dnsmasq-dns-64b6dd64c5-hrsvt\" (UID: \"66c1eed8-8f68-451c-b6a1-c22640a73999\") " pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" Nov 25 07:37:46 crc kubenswrapper[5043]: I1125 07:37:46.321425 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-config\") pod \"dnsmasq-dns-64b6dd64c5-hrsvt\" (UID: \"66c1eed8-8f68-451c-b6a1-c22640a73999\") " pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" Nov 25 07:37:46 crc kubenswrapper[5043]: I1125 07:37:46.321430 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-openstack-edpm-ipam\") pod \"dnsmasq-dns-64b6dd64c5-hrsvt\" (UID: \"66c1eed8-8f68-451c-b6a1-c22640a73999\") " pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" Nov 25 07:37:46 crc kubenswrapper[5043]: I1125 07:37:46.321486 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-ovsdbserver-nb\") pod \"dnsmasq-dns-64b6dd64c5-hrsvt\" (UID: \"66c1eed8-8f68-451c-b6a1-c22640a73999\") " pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" Nov 25 07:37:46 crc kubenswrapper[5043]: I1125 07:37:46.321913 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-ovsdbserver-sb\") pod \"dnsmasq-dns-64b6dd64c5-hrsvt\" (UID: \"66c1eed8-8f68-451c-b6a1-c22640a73999\") " pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" Nov 25 07:37:46 crc kubenswrapper[5043]: I1125 07:37:46.322173 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-dns-svc\") pod 
\"dnsmasq-dns-64b6dd64c5-hrsvt\" (UID: \"66c1eed8-8f68-451c-b6a1-c22640a73999\") " pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" Nov 25 07:37:46 crc kubenswrapper[5043]: I1125 07:37:46.350375 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz9s6\" (UniqueName: \"kubernetes.io/projected/66c1eed8-8f68-451c-b6a1-c22640a73999-kube-api-access-gz9s6\") pod \"dnsmasq-dns-64b6dd64c5-hrsvt\" (UID: \"66c1eed8-8f68-451c-b6a1-c22640a73999\") " pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" Nov 25 07:37:46 crc kubenswrapper[5043]: I1125 07:37:46.461056 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" Nov 25 07:37:46 crc kubenswrapper[5043]: W1125 07:37:46.932288 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66c1eed8_8f68_451c_b6a1_c22640a73999.slice/crio-753143bebdc2b3e1eeff6e03653f2e00b71883df0be373be8ab19bb1ebc2aa98 WatchSource:0}: Error finding container 753143bebdc2b3e1eeff6e03653f2e00b71883df0be373be8ab19bb1ebc2aa98: Status 404 returned error can't find the container with id 753143bebdc2b3e1eeff6e03653f2e00b71883df0be373be8ab19bb1ebc2aa98 Nov 25 07:37:46 crc kubenswrapper[5043]: I1125 07:37:46.938985 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64b6dd64c5-hrsvt"] Nov 25 07:37:47 crc kubenswrapper[5043]: I1125 07:37:47.276282 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:37:47 crc kubenswrapper[5043]: I1125 07:37:47.276678 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:37:47 crc kubenswrapper[5043]: I1125 07:37:47.895573 5043 generic.go:334] "Generic (PLEG): container finished" podID="66c1eed8-8f68-451c-b6a1-c22640a73999" containerID="1927a6a36ef4f229a761b33013222a85a7fab229588257b445f1a6927de8bafd" exitCode=0 Nov 25 07:37:47 crc kubenswrapper[5043]: I1125 07:37:47.895673 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" event={"ID":"66c1eed8-8f68-451c-b6a1-c22640a73999","Type":"ContainerDied","Data":"1927a6a36ef4f229a761b33013222a85a7fab229588257b445f1a6927de8bafd"} Nov 25 07:37:47 crc kubenswrapper[5043]: I1125 07:37:47.895714 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" event={"ID":"66c1eed8-8f68-451c-b6a1-c22640a73999","Type":"ContainerStarted","Data":"753143bebdc2b3e1eeff6e03653f2e00b71883df0be373be8ab19bb1ebc2aa98"} Nov 25 07:37:48 crc kubenswrapper[5043]: I1125 07:37:48.916301 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" event={"ID":"66c1eed8-8f68-451c-b6a1-c22640a73999","Type":"ContainerStarted","Data":"9ece41ab38054bd5d025af89e3e1305e1910560e38ae0a9761401f350c18a771"} Nov 25 07:37:48 crc kubenswrapper[5043]: I1125 07:37:48.916906 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" Nov 25 07:37:48 crc kubenswrapper[5043]: I1125 07:37:48.956209 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" podStartSLOduration=2.956181655 podStartE2EDuration="2.956181655s" podCreationTimestamp="2025-11-25 07:37:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:37:48.946079972 +0000 
UTC m=+1333.114275723" watchObservedRunningTime="2025-11-25 07:37:48.956181655 +0000 UTC m=+1333.124377416" Nov 25 07:37:56 crc kubenswrapper[5043]: I1125 07:37:56.463370 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" Nov 25 07:37:56 crc kubenswrapper[5043]: I1125 07:37:56.520838 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f95c456cf-t8252"] Nov 25 07:37:56 crc kubenswrapper[5043]: I1125 07:37:56.521122 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f95c456cf-t8252" podUID="ac0e4200-6d85-4113-b59d-24a25fb39340" containerName="dnsmasq-dns" containerID="cri-o://eca358f1661ec4e0fd8e099ecf980430ebfe1ce60a9a7f056ba054739d6aa7dc" gracePeriod=10 Nov 25 07:37:56 crc kubenswrapper[5043]: I1125 07:37:56.651914 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c58867b6c-r79jq"] Nov 25 07:37:56 crc kubenswrapper[5043]: I1125 07:37:56.653687 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c58867b6c-r79jq" Nov 25 07:37:56 crc kubenswrapper[5043]: I1125 07:37:56.666390 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c58867b6c-r79jq"] Nov 25 07:37:56 crc kubenswrapper[5043]: I1125 07:37:56.755319 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkfj7\" (UniqueName: \"kubernetes.io/projected/db606dbd-d686-4cd3-bf58-84f1199a3c36-kube-api-access-xkfj7\") pod \"dnsmasq-dns-c58867b6c-r79jq\" (UID: \"db606dbd-d686-4cd3-bf58-84f1199a3c36\") " pod="openstack/dnsmasq-dns-c58867b6c-r79jq" Nov 25 07:37:56 crc kubenswrapper[5043]: I1125 07:37:56.755515 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-ovsdbserver-sb\") pod \"dnsmasq-dns-c58867b6c-r79jq\" (UID: \"db606dbd-d686-4cd3-bf58-84f1199a3c36\") " pod="openstack/dnsmasq-dns-c58867b6c-r79jq" Nov 25 07:37:56 crc kubenswrapper[5043]: I1125 07:37:56.755697 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-ovsdbserver-nb\") pod \"dnsmasq-dns-c58867b6c-r79jq\" (UID: \"db606dbd-d686-4cd3-bf58-84f1199a3c36\") " pod="openstack/dnsmasq-dns-c58867b6c-r79jq" Nov 25 07:37:56 crc kubenswrapper[5043]: I1125 07:37:56.755743 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-openstack-edpm-ipam\") pod \"dnsmasq-dns-c58867b6c-r79jq\" (UID: \"db606dbd-d686-4cd3-bf58-84f1199a3c36\") " pod="openstack/dnsmasq-dns-c58867b6c-r79jq" Nov 25 07:37:56 crc kubenswrapper[5043]: I1125 07:37:56.755855 5043 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-config\") pod \"dnsmasq-dns-c58867b6c-r79jq\" (UID: \"db606dbd-d686-4cd3-bf58-84f1199a3c36\") " pod="openstack/dnsmasq-dns-c58867b6c-r79jq" Nov 25 07:37:56 crc kubenswrapper[5043]: I1125 07:37:56.756016 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-dns-svc\") pod \"dnsmasq-dns-c58867b6c-r79jq\" (UID: \"db606dbd-d686-4cd3-bf58-84f1199a3c36\") " pod="openstack/dnsmasq-dns-c58867b6c-r79jq" Nov 25 07:37:56 crc kubenswrapper[5043]: I1125 07:37:56.857510 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-dns-svc\") pod \"dnsmasq-dns-c58867b6c-r79jq\" (UID: \"db606dbd-d686-4cd3-bf58-84f1199a3c36\") " pod="openstack/dnsmasq-dns-c58867b6c-r79jq" Nov 25 07:37:56 crc kubenswrapper[5043]: I1125 07:37:56.857876 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkfj7\" (UniqueName: \"kubernetes.io/projected/db606dbd-d686-4cd3-bf58-84f1199a3c36-kube-api-access-xkfj7\") pod \"dnsmasq-dns-c58867b6c-r79jq\" (UID: \"db606dbd-d686-4cd3-bf58-84f1199a3c36\") " pod="openstack/dnsmasq-dns-c58867b6c-r79jq" Nov 25 07:37:56 crc kubenswrapper[5043]: I1125 07:37:56.857925 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-ovsdbserver-sb\") pod \"dnsmasq-dns-c58867b6c-r79jq\" (UID: \"db606dbd-d686-4cd3-bf58-84f1199a3c36\") " pod="openstack/dnsmasq-dns-c58867b6c-r79jq" Nov 25 07:37:56 crc kubenswrapper[5043]: I1125 07:37:56.857999 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-ovsdbserver-nb\") pod \"dnsmasq-dns-c58867b6c-r79jq\" (UID: \"db606dbd-d686-4cd3-bf58-84f1199a3c36\") " pod="openstack/dnsmasq-dns-c58867b6c-r79jq" Nov 25 07:37:56 crc kubenswrapper[5043]: I1125 07:37:56.858032 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-openstack-edpm-ipam\") pod \"dnsmasq-dns-c58867b6c-r79jq\" (UID: \"db606dbd-d686-4cd3-bf58-84f1199a3c36\") " pod="openstack/dnsmasq-dns-c58867b6c-r79jq" Nov 25 07:37:56 crc kubenswrapper[5043]: I1125 07:37:56.858151 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-config\") pod \"dnsmasq-dns-c58867b6c-r79jq\" (UID: \"db606dbd-d686-4cd3-bf58-84f1199a3c36\") " pod="openstack/dnsmasq-dns-c58867b6c-r79jq" Nov 25 07:37:56 crc kubenswrapper[5043]: I1125 07:37:56.858753 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-dns-svc\") pod \"dnsmasq-dns-c58867b6c-r79jq\" (UID: \"db606dbd-d686-4cd3-bf58-84f1199a3c36\") " pod="openstack/dnsmasq-dns-c58867b6c-r79jq" Nov 25 07:37:56 crc kubenswrapper[5043]: I1125 07:37:56.858945 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-config\") pod \"dnsmasq-dns-c58867b6c-r79jq\" (UID: \"db606dbd-d686-4cd3-bf58-84f1199a3c36\") " pod="openstack/dnsmasq-dns-c58867b6c-r79jq" Nov 25 07:37:56 crc kubenswrapper[5043]: I1125 07:37:56.859504 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-ovsdbserver-sb\") pod 
\"dnsmasq-dns-c58867b6c-r79jq\" (UID: \"db606dbd-d686-4cd3-bf58-84f1199a3c36\") " pod="openstack/dnsmasq-dns-c58867b6c-r79jq" Nov 25 07:37:56 crc kubenswrapper[5043]: I1125 07:37:56.859639 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-ovsdbserver-nb\") pod \"dnsmasq-dns-c58867b6c-r79jq\" (UID: \"db606dbd-d686-4cd3-bf58-84f1199a3c36\") " pod="openstack/dnsmasq-dns-c58867b6c-r79jq" Nov 25 07:37:56 crc kubenswrapper[5043]: I1125 07:37:56.860484 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-openstack-edpm-ipam\") pod \"dnsmasq-dns-c58867b6c-r79jq\" (UID: \"db606dbd-d686-4cd3-bf58-84f1199a3c36\") " pod="openstack/dnsmasq-dns-c58867b6c-r79jq" Nov 25 07:37:56 crc kubenswrapper[5043]: I1125 07:37:56.880508 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkfj7\" (UniqueName: \"kubernetes.io/projected/db606dbd-d686-4cd3-bf58-84f1199a3c36-kube-api-access-xkfj7\") pod \"dnsmasq-dns-c58867b6c-r79jq\" (UID: \"db606dbd-d686-4cd3-bf58-84f1199a3c36\") " pod="openstack/dnsmasq-dns-c58867b6c-r79jq" Nov 25 07:37:57 crc kubenswrapper[5043]: I1125 07:37:57.013064 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c58867b6c-r79jq" Nov 25 07:37:57 crc kubenswrapper[5043]: I1125 07:37:57.019488 5043 generic.go:334] "Generic (PLEG): container finished" podID="ac0e4200-6d85-4113-b59d-24a25fb39340" containerID="eca358f1661ec4e0fd8e099ecf980430ebfe1ce60a9a7f056ba054739d6aa7dc" exitCode=0 Nov 25 07:37:57 crc kubenswrapper[5043]: I1125 07:37:57.019524 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f95c456cf-t8252" event={"ID":"ac0e4200-6d85-4113-b59d-24a25fb39340","Type":"ContainerDied","Data":"eca358f1661ec4e0fd8e099ecf980430ebfe1ce60a9a7f056ba054739d6aa7dc"} Nov 25 07:37:57 crc kubenswrapper[5043]: I1125 07:37:57.019547 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f95c456cf-t8252" event={"ID":"ac0e4200-6d85-4113-b59d-24a25fb39340","Type":"ContainerDied","Data":"1974239cf3b7358c0630fc2bbd350c896cfe4b07a9ae6fdfeb48bba3d4679f23"} Nov 25 07:37:57 crc kubenswrapper[5043]: I1125 07:37:57.019557 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1974239cf3b7358c0630fc2bbd350c896cfe4b07a9ae6fdfeb48bba3d4679f23" Nov 25 07:37:57 crc kubenswrapper[5043]: I1125 07:37:57.089574 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f95c456cf-t8252" Nov 25 07:37:57 crc kubenswrapper[5043]: I1125 07:37:57.197795 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac0e4200-6d85-4113-b59d-24a25fb39340-ovsdbserver-sb\") pod \"ac0e4200-6d85-4113-b59d-24a25fb39340\" (UID: \"ac0e4200-6d85-4113-b59d-24a25fb39340\") " Nov 25 07:37:57 crc kubenswrapper[5043]: I1125 07:37:57.198260 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0e4200-6d85-4113-b59d-24a25fb39340-config\") pod \"ac0e4200-6d85-4113-b59d-24a25fb39340\" (UID: \"ac0e4200-6d85-4113-b59d-24a25fb39340\") " Nov 25 07:37:57 crc kubenswrapper[5043]: I1125 07:37:57.198294 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac0e4200-6d85-4113-b59d-24a25fb39340-ovsdbserver-nb\") pod \"ac0e4200-6d85-4113-b59d-24a25fb39340\" (UID: \"ac0e4200-6d85-4113-b59d-24a25fb39340\") " Nov 25 07:37:57 crc kubenswrapper[5043]: I1125 07:37:57.198382 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac0e4200-6d85-4113-b59d-24a25fb39340-dns-svc\") pod \"ac0e4200-6d85-4113-b59d-24a25fb39340\" (UID: \"ac0e4200-6d85-4113-b59d-24a25fb39340\") " Nov 25 07:37:57 crc kubenswrapper[5043]: I1125 07:37:57.198404 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62mbg\" (UniqueName: \"kubernetes.io/projected/ac0e4200-6d85-4113-b59d-24a25fb39340-kube-api-access-62mbg\") pod \"ac0e4200-6d85-4113-b59d-24a25fb39340\" (UID: \"ac0e4200-6d85-4113-b59d-24a25fb39340\") " Nov 25 07:37:57 crc kubenswrapper[5043]: I1125 07:37:57.203183 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ac0e4200-6d85-4113-b59d-24a25fb39340-kube-api-access-62mbg" (OuterVolumeSpecName: "kube-api-access-62mbg") pod "ac0e4200-6d85-4113-b59d-24a25fb39340" (UID: "ac0e4200-6d85-4113-b59d-24a25fb39340"). InnerVolumeSpecName "kube-api-access-62mbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:37:57 crc kubenswrapper[5043]: I1125 07:37:57.242622 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac0e4200-6d85-4113-b59d-24a25fb39340-config" (OuterVolumeSpecName: "config") pod "ac0e4200-6d85-4113-b59d-24a25fb39340" (UID: "ac0e4200-6d85-4113-b59d-24a25fb39340"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:37:57 crc kubenswrapper[5043]: I1125 07:37:57.243720 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac0e4200-6d85-4113-b59d-24a25fb39340-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ac0e4200-6d85-4113-b59d-24a25fb39340" (UID: "ac0e4200-6d85-4113-b59d-24a25fb39340"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:37:57 crc kubenswrapper[5043]: I1125 07:37:57.247421 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac0e4200-6d85-4113-b59d-24a25fb39340-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac0e4200-6d85-4113-b59d-24a25fb39340" (UID: "ac0e4200-6d85-4113-b59d-24a25fb39340"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:37:57 crc kubenswrapper[5043]: I1125 07:37:57.254242 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac0e4200-6d85-4113-b59d-24a25fb39340-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ac0e4200-6d85-4113-b59d-24a25fb39340" (UID: "ac0e4200-6d85-4113-b59d-24a25fb39340"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:37:57 crc kubenswrapper[5043]: I1125 07:37:57.300736 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0e4200-6d85-4113-b59d-24a25fb39340-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:57 crc kubenswrapper[5043]: I1125 07:37:57.300847 5043 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac0e4200-6d85-4113-b59d-24a25fb39340-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:57 crc kubenswrapper[5043]: I1125 07:37:57.300865 5043 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac0e4200-6d85-4113-b59d-24a25fb39340-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:57 crc kubenswrapper[5043]: I1125 07:37:57.300878 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62mbg\" (UniqueName: \"kubernetes.io/projected/ac0e4200-6d85-4113-b59d-24a25fb39340-kube-api-access-62mbg\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:57 crc kubenswrapper[5043]: I1125 07:37:57.300892 5043 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac0e4200-6d85-4113-b59d-24a25fb39340-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 07:37:57 crc kubenswrapper[5043]: W1125 07:37:57.454392 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb606dbd_d686_4cd3_bf58_84f1199a3c36.slice/crio-82f547208b36028238c07c41527f3af6f2ad9f05db3a180bad4c11a22da6c9e7 WatchSource:0}: Error finding container 82f547208b36028238c07c41527f3af6f2ad9f05db3a180bad4c11a22da6c9e7: Status 404 returned error can't find the container with id 82f547208b36028238c07c41527f3af6f2ad9f05db3a180bad4c11a22da6c9e7 Nov 25 07:37:57 crc kubenswrapper[5043]: I1125 07:37:57.454682 5043 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c58867b6c-r79jq"] Nov 25 07:37:58 crc kubenswrapper[5043]: I1125 07:37:58.031985 5043 generic.go:334] "Generic (PLEG): container finished" podID="db606dbd-d686-4cd3-bf58-84f1199a3c36" containerID="e46c5e0dde85823dec7b6e2a091512a70e9e758a94ad6b0c10ba2f3eca2a13a2" exitCode=0 Nov 25 07:37:58 crc kubenswrapper[5043]: I1125 07:37:58.032476 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f95c456cf-t8252" Nov 25 07:37:58 crc kubenswrapper[5043]: I1125 07:37:58.032125 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c58867b6c-r79jq" event={"ID":"db606dbd-d686-4cd3-bf58-84f1199a3c36","Type":"ContainerDied","Data":"e46c5e0dde85823dec7b6e2a091512a70e9e758a94ad6b0c10ba2f3eca2a13a2"} Nov 25 07:37:58 crc kubenswrapper[5043]: I1125 07:37:58.032593 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c58867b6c-r79jq" event={"ID":"db606dbd-d686-4cd3-bf58-84f1199a3c36","Type":"ContainerStarted","Data":"82f547208b36028238c07c41527f3af6f2ad9f05db3a180bad4c11a22da6c9e7"} Nov 25 07:37:58 crc kubenswrapper[5043]: I1125 07:37:58.200467 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f95c456cf-t8252"] Nov 25 07:37:58 crc kubenswrapper[5043]: I1125 07:37:58.209469 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f95c456cf-t8252"] Nov 25 07:37:58 crc kubenswrapper[5043]: I1125 07:37:58.973996 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac0e4200-6d85-4113-b59d-24a25fb39340" path="/var/lib/kubelet/pods/ac0e4200-6d85-4113-b59d-24a25fb39340/volumes" Nov 25 07:37:59 crc kubenswrapper[5043]: I1125 07:37:59.053792 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c58867b6c-r79jq" 
event={"ID":"db606dbd-d686-4cd3-bf58-84f1199a3c36","Type":"ContainerStarted","Data":"e235a5221ce9a5313e55416824306364aacd6c9bd40454557ff02f1a4eb9eec3"} Nov 25 07:37:59 crc kubenswrapper[5043]: I1125 07:37:59.053913 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c58867b6c-r79jq" Nov 25 07:37:59 crc kubenswrapper[5043]: I1125 07:37:59.085106 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c58867b6c-r79jq" podStartSLOduration=3.085082564 podStartE2EDuration="3.085082564s" podCreationTimestamp="2025-11-25 07:37:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:37:59.080879751 +0000 UTC m=+1343.249075612" watchObservedRunningTime="2025-11-25 07:37:59.085082564 +0000 UTC m=+1343.253278325" Nov 25 07:38:07 crc kubenswrapper[5043]: I1125 07:38:07.014871 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c58867b6c-r79jq" Nov 25 07:38:07 crc kubenswrapper[5043]: I1125 07:38:07.133566 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64b6dd64c5-hrsvt"] Nov 25 07:38:07 crc kubenswrapper[5043]: I1125 07:38:07.133828 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" podUID="66c1eed8-8f68-451c-b6a1-c22640a73999" containerName="dnsmasq-dns" containerID="cri-o://9ece41ab38054bd5d025af89e3e1305e1910560e38ae0a9761401f350c18a771" gracePeriod=10 Nov 25 07:38:07 crc kubenswrapper[5043]: I1125 07:38:07.588115 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" Nov 25 07:38:07 crc kubenswrapper[5043]: I1125 07:38:07.715810 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-ovsdbserver-sb\") pod \"66c1eed8-8f68-451c-b6a1-c22640a73999\" (UID: \"66c1eed8-8f68-451c-b6a1-c22640a73999\") " Nov 25 07:38:07 crc kubenswrapper[5043]: I1125 07:38:07.715874 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz9s6\" (UniqueName: \"kubernetes.io/projected/66c1eed8-8f68-451c-b6a1-c22640a73999-kube-api-access-gz9s6\") pod \"66c1eed8-8f68-451c-b6a1-c22640a73999\" (UID: \"66c1eed8-8f68-451c-b6a1-c22640a73999\") " Nov 25 07:38:07 crc kubenswrapper[5043]: I1125 07:38:07.715898 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-ovsdbserver-nb\") pod \"66c1eed8-8f68-451c-b6a1-c22640a73999\" (UID: \"66c1eed8-8f68-451c-b6a1-c22640a73999\") " Nov 25 07:38:07 crc kubenswrapper[5043]: I1125 07:38:07.716048 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-config\") pod \"66c1eed8-8f68-451c-b6a1-c22640a73999\" (UID: \"66c1eed8-8f68-451c-b6a1-c22640a73999\") " Nov 25 07:38:07 crc kubenswrapper[5043]: I1125 07:38:07.716126 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-dns-svc\") pod \"66c1eed8-8f68-451c-b6a1-c22640a73999\" (UID: \"66c1eed8-8f68-451c-b6a1-c22640a73999\") " Nov 25 07:38:07 crc kubenswrapper[5043]: I1125 07:38:07.716154 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-openstack-edpm-ipam\") pod \"66c1eed8-8f68-451c-b6a1-c22640a73999\" (UID: \"66c1eed8-8f68-451c-b6a1-c22640a73999\") " Nov 25 07:38:07 crc kubenswrapper[5043]: I1125 07:38:07.727765 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66c1eed8-8f68-451c-b6a1-c22640a73999-kube-api-access-gz9s6" (OuterVolumeSpecName: "kube-api-access-gz9s6") pod "66c1eed8-8f68-451c-b6a1-c22640a73999" (UID: "66c1eed8-8f68-451c-b6a1-c22640a73999"). InnerVolumeSpecName "kube-api-access-gz9s6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:38:07 crc kubenswrapper[5043]: I1125 07:38:07.757993 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "66c1eed8-8f68-451c-b6a1-c22640a73999" (UID: "66c1eed8-8f68-451c-b6a1-c22640a73999"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:38:07 crc kubenswrapper[5043]: I1125 07:38:07.758570 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "66c1eed8-8f68-451c-b6a1-c22640a73999" (UID: "66c1eed8-8f68-451c-b6a1-c22640a73999"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:38:07 crc kubenswrapper[5043]: I1125 07:38:07.763459 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "66c1eed8-8f68-451c-b6a1-c22640a73999" (UID: "66c1eed8-8f68-451c-b6a1-c22640a73999"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:38:07 crc kubenswrapper[5043]: I1125 07:38:07.766265 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-config" (OuterVolumeSpecName: "config") pod "66c1eed8-8f68-451c-b6a1-c22640a73999" (UID: "66c1eed8-8f68-451c-b6a1-c22640a73999"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:38:07 crc kubenswrapper[5043]: I1125 07:38:07.771415 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "66c1eed8-8f68-451c-b6a1-c22640a73999" (UID: "66c1eed8-8f68-451c-b6a1-c22640a73999"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:38:07 crc kubenswrapper[5043]: I1125 07:38:07.819636 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-config\") on node \"crc\" DevicePath \"\"" Nov 25 07:38:07 crc kubenswrapper[5043]: I1125 07:38:07.819673 5043 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 07:38:07 crc kubenswrapper[5043]: I1125 07:38:07.819683 5043 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 25 07:38:07 crc kubenswrapper[5043]: I1125 07:38:07.819694 5043 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 07:38:07 crc 
kubenswrapper[5043]: I1125 07:38:07.819702 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gz9s6\" (UniqueName: \"kubernetes.io/projected/66c1eed8-8f68-451c-b6a1-c22640a73999-kube-api-access-gz9s6\") on node \"crc\" DevicePath \"\"" Nov 25 07:38:07 crc kubenswrapper[5043]: I1125 07:38:07.819710 5043 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/66c1eed8-8f68-451c-b6a1-c22640a73999-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 07:38:08 crc kubenswrapper[5043]: I1125 07:38:08.173492 5043 generic.go:334] "Generic (PLEG): container finished" podID="66c1eed8-8f68-451c-b6a1-c22640a73999" containerID="9ece41ab38054bd5d025af89e3e1305e1910560e38ae0a9761401f350c18a771" exitCode=0 Nov 25 07:38:08 crc kubenswrapper[5043]: I1125 07:38:08.173539 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" event={"ID":"66c1eed8-8f68-451c-b6a1-c22640a73999","Type":"ContainerDied","Data":"9ece41ab38054bd5d025af89e3e1305e1910560e38ae0a9761401f350c18a771"} Nov 25 07:38:08 crc kubenswrapper[5043]: I1125 07:38:08.173561 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" Nov 25 07:38:08 crc kubenswrapper[5043]: I1125 07:38:08.173582 5043 scope.go:117] "RemoveContainer" containerID="9ece41ab38054bd5d025af89e3e1305e1910560e38ae0a9761401f350c18a771" Nov 25 07:38:08 crc kubenswrapper[5043]: I1125 07:38:08.173569 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64b6dd64c5-hrsvt" event={"ID":"66c1eed8-8f68-451c-b6a1-c22640a73999","Type":"ContainerDied","Data":"753143bebdc2b3e1eeff6e03653f2e00b71883df0be373be8ab19bb1ebc2aa98"} Nov 25 07:38:08 crc kubenswrapper[5043]: I1125 07:38:08.196451 5043 scope.go:117] "RemoveContainer" containerID="1927a6a36ef4f229a761b33013222a85a7fab229588257b445f1a6927de8bafd" Nov 25 07:38:08 crc kubenswrapper[5043]: I1125 07:38:08.218875 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64b6dd64c5-hrsvt"] Nov 25 07:38:08 crc kubenswrapper[5043]: I1125 07:38:08.225707 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64b6dd64c5-hrsvt"] Nov 25 07:38:08 crc kubenswrapper[5043]: I1125 07:38:08.253729 5043 scope.go:117] "RemoveContainer" containerID="9ece41ab38054bd5d025af89e3e1305e1910560e38ae0a9761401f350c18a771" Nov 25 07:38:08 crc kubenswrapper[5043]: E1125 07:38:08.254339 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ece41ab38054bd5d025af89e3e1305e1910560e38ae0a9761401f350c18a771\": container with ID starting with 9ece41ab38054bd5d025af89e3e1305e1910560e38ae0a9761401f350c18a771 not found: ID does not exist" containerID="9ece41ab38054bd5d025af89e3e1305e1910560e38ae0a9761401f350c18a771" Nov 25 07:38:08 crc kubenswrapper[5043]: I1125 07:38:08.254378 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ece41ab38054bd5d025af89e3e1305e1910560e38ae0a9761401f350c18a771"} err="failed to get container status 
\"9ece41ab38054bd5d025af89e3e1305e1910560e38ae0a9761401f350c18a771\": rpc error: code = NotFound desc = could not find container \"9ece41ab38054bd5d025af89e3e1305e1910560e38ae0a9761401f350c18a771\": container with ID starting with 9ece41ab38054bd5d025af89e3e1305e1910560e38ae0a9761401f350c18a771 not found: ID does not exist" Nov 25 07:38:08 crc kubenswrapper[5043]: I1125 07:38:08.254406 5043 scope.go:117] "RemoveContainer" containerID="1927a6a36ef4f229a761b33013222a85a7fab229588257b445f1a6927de8bafd" Nov 25 07:38:08 crc kubenswrapper[5043]: E1125 07:38:08.255047 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1927a6a36ef4f229a761b33013222a85a7fab229588257b445f1a6927de8bafd\": container with ID starting with 1927a6a36ef4f229a761b33013222a85a7fab229588257b445f1a6927de8bafd not found: ID does not exist" containerID="1927a6a36ef4f229a761b33013222a85a7fab229588257b445f1a6927de8bafd" Nov 25 07:38:08 crc kubenswrapper[5043]: I1125 07:38:08.255243 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1927a6a36ef4f229a761b33013222a85a7fab229588257b445f1a6927de8bafd"} err="failed to get container status \"1927a6a36ef4f229a761b33013222a85a7fab229588257b445f1a6927de8bafd\": rpc error: code = NotFound desc = could not find container \"1927a6a36ef4f229a761b33013222a85a7fab229588257b445f1a6927de8bafd\": container with ID starting with 1927a6a36ef4f229a761b33013222a85a7fab229588257b445f1a6927de8bafd not found: ID does not exist" Nov 25 07:38:08 crc kubenswrapper[5043]: I1125 07:38:08.981327 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66c1eed8-8f68-451c-b6a1-c22640a73999" path="/var/lib/kubelet/pods/66c1eed8-8f68-451c-b6a1-c22640a73999/volumes" Nov 25 07:38:16 crc kubenswrapper[5043]: I1125 07:38:16.307951 5043 generic.go:334] "Generic (PLEG): container finished" podID="edfb7fa8-5582-4faa-9cb2-fbdfffa12d18" 
containerID="2f48881c3a1a0c2fbdde344c4d0b18fb200c376ef2bca6d7315d7b1d276e6d82" exitCode=0 Nov 25 07:38:16 crc kubenswrapper[5043]: I1125 07:38:16.308140 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18","Type":"ContainerDied","Data":"2f48881c3a1a0c2fbdde344c4d0b18fb200c376ef2bca6d7315d7b1d276e6d82"} Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.249470 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf"] Nov 25 07:38:17 crc kubenswrapper[5043]: E1125 07:38:17.250479 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66c1eed8-8f68-451c-b6a1-c22640a73999" containerName="init" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.250509 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="66c1eed8-8f68-451c-b6a1-c22640a73999" containerName="init" Nov 25 07:38:17 crc kubenswrapper[5043]: E1125 07:38:17.250546 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac0e4200-6d85-4113-b59d-24a25fb39340" containerName="init" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.250558 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac0e4200-6d85-4113-b59d-24a25fb39340" containerName="init" Nov 25 07:38:17 crc kubenswrapper[5043]: E1125 07:38:17.250602 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac0e4200-6d85-4113-b59d-24a25fb39340" containerName="dnsmasq-dns" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.250637 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac0e4200-6d85-4113-b59d-24a25fb39340" containerName="dnsmasq-dns" Nov 25 07:38:17 crc kubenswrapper[5043]: E1125 07:38:17.250659 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66c1eed8-8f68-451c-b6a1-c22640a73999" containerName="dnsmasq-dns" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.250672 5043 
state_mem.go:107] "Deleted CPUSet assignment" podUID="66c1eed8-8f68-451c-b6a1-c22640a73999" containerName="dnsmasq-dns" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.251016 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="66c1eed8-8f68-451c-b6a1-c22640a73999" containerName="dnsmasq-dns" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.251064 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac0e4200-6d85-4113-b59d-24a25fb39340" containerName="dnsmasq-dns" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.252243 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.255807 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.257856 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.258410 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.259171 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.276103 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.276174 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" 
podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.276239 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.277416 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a39678aadaf4f8799d011d172223bff66847f6049bb09f87a23b01f3ae1af7cd"} pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.277533 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://a39678aadaf4f8799d011d172223bff66847f6049bb09f87a23b01f3ae1af7cd" gracePeriod=600 Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.281177 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf"] Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.326154 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"edfb7fa8-5582-4faa-9cb2-fbdfffa12d18","Type":"ContainerStarted","Data":"87f6bfbed687312d5ece40e8dfb880fe00a479d8d053a9f11a86e50a771f6785"} Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.329701 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.354060 5043 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.354042233 podStartE2EDuration="38.354042233s" podCreationTimestamp="2025-11-25 07:37:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:38:17.35168138 +0000 UTC m=+1361.519877111" watchObservedRunningTime="2025-11-25 07:38:17.354042233 +0000 UTC m=+1361.522237954" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.398239 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qzqd\" (UniqueName: \"kubernetes.io/projected/f1768074-dbe7-4cd0-b646-d4cb304ee5b4-kube-api-access-9qzqd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf\" (UID: \"f1768074-dbe7-4cd0-b646-d4cb304ee5b4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.398350 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1768074-dbe7-4cd0-b646-d4cb304ee5b4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf\" (UID: \"f1768074-dbe7-4cd0-b646-d4cb304ee5b4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.398418 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1768074-dbe7-4cd0-b646-d4cb304ee5b4-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf\" (UID: \"f1768074-dbe7-4cd0-b646-d4cb304ee5b4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.398441 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1768074-dbe7-4cd0-b646-d4cb304ee5b4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf\" (UID: \"f1768074-dbe7-4cd0-b646-d4cb304ee5b4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.500331 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1768074-dbe7-4cd0-b646-d4cb304ee5b4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf\" (UID: \"f1768074-dbe7-4cd0-b646-d4cb304ee5b4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.501213 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1768074-dbe7-4cd0-b646-d4cb304ee5b4-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf\" (UID: \"f1768074-dbe7-4cd0-b646-d4cb304ee5b4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.501276 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1768074-dbe7-4cd0-b646-d4cb304ee5b4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf\" (UID: \"f1768074-dbe7-4cd0-b646-d4cb304ee5b4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.501784 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qzqd\" (UniqueName: \"kubernetes.io/projected/f1768074-dbe7-4cd0-b646-d4cb304ee5b4-kube-api-access-9qzqd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf\" (UID: 
\"f1768074-dbe7-4cd0-b646-d4cb304ee5b4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.507285 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1768074-dbe7-4cd0-b646-d4cb304ee5b4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf\" (UID: \"f1768074-dbe7-4cd0-b646-d4cb304ee5b4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.507466 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1768074-dbe7-4cd0-b646-d4cb304ee5b4-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf\" (UID: \"f1768074-dbe7-4cd0-b646-d4cb304ee5b4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.510550 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1768074-dbe7-4cd0-b646-d4cb304ee5b4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf\" (UID: \"f1768074-dbe7-4cd0-b646-d4cb304ee5b4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.523816 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qzqd\" (UniqueName: \"kubernetes.io/projected/f1768074-dbe7-4cd0-b646-d4cb304ee5b4-kube-api-access-9qzqd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf\" (UID: \"f1768074-dbe7-4cd0-b646-d4cb304ee5b4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf" Nov 25 07:38:17 crc kubenswrapper[5043]: I1125 07:38:17.572849 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf" Nov 25 07:38:18 crc kubenswrapper[5043]: I1125 07:38:18.160168 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf"] Nov 25 07:38:18 crc kubenswrapper[5043]: W1125 07:38:18.160651 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1768074_dbe7_4cd0_b646_d4cb304ee5b4.slice/crio-1f402e9880b16772b9f8fa1cc0ffb4d12a74f3fa571c80e292b8568a77685fad WatchSource:0}: Error finding container 1f402e9880b16772b9f8fa1cc0ffb4d12a74f3fa571c80e292b8568a77685fad: Status 404 returned error can't find the container with id 1f402e9880b16772b9f8fa1cc0ffb4d12a74f3fa571c80e292b8568a77685fad Nov 25 07:38:18 crc kubenswrapper[5043]: I1125 07:38:18.341700 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="a39678aadaf4f8799d011d172223bff66847f6049bb09f87a23b01f3ae1af7cd" exitCode=0 Nov 25 07:38:18 crc kubenswrapper[5043]: I1125 07:38:18.341773 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"a39678aadaf4f8799d011d172223bff66847f6049bb09f87a23b01f3ae1af7cd"} Nov 25 07:38:18 crc kubenswrapper[5043]: I1125 07:38:18.341822 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474"} Nov 25 07:38:18 crc kubenswrapper[5043]: I1125 07:38:18.341843 5043 scope.go:117] "RemoveContainer" containerID="296a5a98c9bf0bfd6085b02df2f0073364b7097e82673a35c0ff9f12b1b73d01" Nov 25 07:38:18 crc kubenswrapper[5043]: I1125 07:38:18.343425 5043 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf" event={"ID":"f1768074-dbe7-4cd0-b646-d4cb304ee5b4","Type":"ContainerStarted","Data":"1f402e9880b16772b9f8fa1cc0ffb4d12a74f3fa571c80e292b8568a77685fad"} Nov 25 07:38:18 crc kubenswrapper[5043]: I1125 07:38:18.346246 5043 generic.go:334] "Generic (PLEG): container finished" podID="96b0381f-3d56-49b8-8a21-0b8c1bd593c2" containerID="8fa0024e6e5a5b9831b8c586deb788a2a1fc579581bb789d71e6056eabf171a7" exitCode=0 Nov 25 07:38:18 crc kubenswrapper[5043]: I1125 07:38:18.346313 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"96b0381f-3d56-49b8-8a21-0b8c1bd593c2","Type":"ContainerDied","Data":"8fa0024e6e5a5b9831b8c586deb788a2a1fc579581bb789d71e6056eabf171a7"} Nov 25 07:38:19 crc kubenswrapper[5043]: I1125 07:38:19.364762 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"96b0381f-3d56-49b8-8a21-0b8c1bd593c2","Type":"ContainerStarted","Data":"c8409642b185f7bd32f9b7e9c2f920275cb59a06e2d5adfb12ac47d4ca34d197"} Nov 25 07:38:19 crc kubenswrapper[5043]: I1125 07:38:19.366392 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:38:19 crc kubenswrapper[5043]: I1125 07:38:19.401004 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.400972339 podStartE2EDuration="37.400972339s" podCreationTimestamp="2025-11-25 07:37:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 07:38:19.397173337 +0000 UTC m=+1363.565369058" watchObservedRunningTime="2025-11-25 07:38:19.400972339 +0000 UTC m=+1363.569168060" Nov 25 07:38:28 crc kubenswrapper[5043]: I1125 07:38:28.459639 5043 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf" event={"ID":"f1768074-dbe7-4cd0-b646-d4cb304ee5b4","Type":"ContainerStarted","Data":"bc6a1e06b7e52f7c8a8aa67f1debf32b0f1cc3498ea73ecb450eb9e4e84d7bae"} Nov 25 07:38:28 crc kubenswrapper[5043]: I1125 07:38:28.483546 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf" podStartSLOduration=1.6808621590000001 podStartE2EDuration="11.483524104s" podCreationTimestamp="2025-11-25 07:38:17 +0000 UTC" firstStartedPulling="2025-11-25 07:38:18.162654619 +0000 UTC m=+1362.330850330" lastFinishedPulling="2025-11-25 07:38:27.965316534 +0000 UTC m=+1372.133512275" observedRunningTime="2025-11-25 07:38:28.481967661 +0000 UTC m=+1372.650163392" watchObservedRunningTime="2025-11-25 07:38:28.483524104 +0000 UTC m=+1372.651719835" Nov 25 07:38:30 crc kubenswrapper[5043]: I1125 07:38:30.184867 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 25 07:38:32 crc kubenswrapper[5043]: I1125 07:38:32.749856 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 25 07:38:40 crc kubenswrapper[5043]: E1125 07:38:40.926795 5043 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1768074_dbe7_4cd0_b646_d4cb304ee5b4.slice/crio-conmon-bc6a1e06b7e52f7c8a8aa67f1debf32b0f1cc3498ea73ecb450eb9e4e84d7bae.scope\": RecentStats: unable to find data in memory cache]" Nov 25 07:38:41 crc kubenswrapper[5043]: I1125 07:38:41.609465 5043 generic.go:334] "Generic (PLEG): container finished" podID="f1768074-dbe7-4cd0-b646-d4cb304ee5b4" containerID="bc6a1e06b7e52f7c8a8aa67f1debf32b0f1cc3498ea73ecb450eb9e4e84d7bae" exitCode=0 Nov 25 07:38:41 crc kubenswrapper[5043]: I1125 07:38:41.609529 5043 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf" event={"ID":"f1768074-dbe7-4cd0-b646-d4cb304ee5b4","Type":"ContainerDied","Data":"bc6a1e06b7e52f7c8a8aa67f1debf32b0f1cc3498ea73ecb450eb9e4e84d7bae"} Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.137781 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-stwbd"] Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.154202 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-stwbd" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.184445 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-stwbd"] Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.224357 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.233213 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhct2\" (UniqueName: \"kubernetes.io/projected/3dadd68e-7e67-4a5e-8b1d-03506b8a982a-kube-api-access-jhct2\") pod \"redhat-operators-stwbd\" (UID: \"3dadd68e-7e67-4a5e-8b1d-03506b8a982a\") " pod="openshift-marketplace/redhat-operators-stwbd" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.233631 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dadd68e-7e67-4a5e-8b1d-03506b8a982a-utilities\") pod \"redhat-operators-stwbd\" (UID: \"3dadd68e-7e67-4a5e-8b1d-03506b8a982a\") " pod="openshift-marketplace/redhat-operators-stwbd" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.234281 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/3dadd68e-7e67-4a5e-8b1d-03506b8a982a-catalog-content\") pod \"redhat-operators-stwbd\" (UID: \"3dadd68e-7e67-4a5e-8b1d-03506b8a982a\") " pod="openshift-marketplace/redhat-operators-stwbd" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.335787 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qzqd\" (UniqueName: \"kubernetes.io/projected/f1768074-dbe7-4cd0-b646-d4cb304ee5b4-kube-api-access-9qzqd\") pod \"f1768074-dbe7-4cd0-b646-d4cb304ee5b4\" (UID: \"f1768074-dbe7-4cd0-b646-d4cb304ee5b4\") " Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.335973 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1768074-dbe7-4cd0-b646-d4cb304ee5b4-repo-setup-combined-ca-bundle\") pod \"f1768074-dbe7-4cd0-b646-d4cb304ee5b4\" (UID: \"f1768074-dbe7-4cd0-b646-d4cb304ee5b4\") " Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.336140 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1768074-dbe7-4cd0-b646-d4cb304ee5b4-ssh-key\") pod \"f1768074-dbe7-4cd0-b646-d4cb304ee5b4\" (UID: \"f1768074-dbe7-4cd0-b646-d4cb304ee5b4\") " Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.336270 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1768074-dbe7-4cd0-b646-d4cb304ee5b4-inventory\") pod \"f1768074-dbe7-4cd0-b646-d4cb304ee5b4\" (UID: \"f1768074-dbe7-4cd0-b646-d4cb304ee5b4\") " Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.336717 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dadd68e-7e67-4a5e-8b1d-03506b8a982a-utilities\") pod \"redhat-operators-stwbd\" (UID: \"3dadd68e-7e67-4a5e-8b1d-03506b8a982a\") " 
pod="openshift-marketplace/redhat-operators-stwbd" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.336854 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dadd68e-7e67-4a5e-8b1d-03506b8a982a-catalog-content\") pod \"redhat-operators-stwbd\" (UID: \"3dadd68e-7e67-4a5e-8b1d-03506b8a982a\") " pod="openshift-marketplace/redhat-operators-stwbd" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.336919 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhct2\" (UniqueName: \"kubernetes.io/projected/3dadd68e-7e67-4a5e-8b1d-03506b8a982a-kube-api-access-jhct2\") pod \"redhat-operators-stwbd\" (UID: \"3dadd68e-7e67-4a5e-8b1d-03506b8a982a\") " pod="openshift-marketplace/redhat-operators-stwbd" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.337181 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dadd68e-7e67-4a5e-8b1d-03506b8a982a-utilities\") pod \"redhat-operators-stwbd\" (UID: \"3dadd68e-7e67-4a5e-8b1d-03506b8a982a\") " pod="openshift-marketplace/redhat-operators-stwbd" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.337303 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dadd68e-7e67-4a5e-8b1d-03506b8a982a-catalog-content\") pod \"redhat-operators-stwbd\" (UID: \"3dadd68e-7e67-4a5e-8b1d-03506b8a982a\") " pod="openshift-marketplace/redhat-operators-stwbd" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.356991 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1768074-dbe7-4cd0-b646-d4cb304ee5b4-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "f1768074-dbe7-4cd0-b646-d4cb304ee5b4" (UID: "f1768074-dbe7-4cd0-b646-d4cb304ee5b4"). 
InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.357118 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1768074-dbe7-4cd0-b646-d4cb304ee5b4-kube-api-access-9qzqd" (OuterVolumeSpecName: "kube-api-access-9qzqd") pod "f1768074-dbe7-4cd0-b646-d4cb304ee5b4" (UID: "f1768074-dbe7-4cd0-b646-d4cb304ee5b4"). InnerVolumeSpecName "kube-api-access-9qzqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.366412 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhct2\" (UniqueName: \"kubernetes.io/projected/3dadd68e-7e67-4a5e-8b1d-03506b8a982a-kube-api-access-jhct2\") pod \"redhat-operators-stwbd\" (UID: \"3dadd68e-7e67-4a5e-8b1d-03506b8a982a\") " pod="openshift-marketplace/redhat-operators-stwbd" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.372233 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1768074-dbe7-4cd0-b646-d4cb304ee5b4-inventory" (OuterVolumeSpecName: "inventory") pod "f1768074-dbe7-4cd0-b646-d4cb304ee5b4" (UID: "f1768074-dbe7-4cd0-b646-d4cb304ee5b4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.376819 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1768074-dbe7-4cd0-b646-d4cb304ee5b4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f1768074-dbe7-4cd0-b646-d4cb304ee5b4" (UID: "f1768074-dbe7-4cd0-b646-d4cb304ee5b4"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.439489 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qzqd\" (UniqueName: \"kubernetes.io/projected/f1768074-dbe7-4cd0-b646-d4cb304ee5b4-kube-api-access-9qzqd\") on node \"crc\" DevicePath \"\"" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.439838 5043 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1768074-dbe7-4cd0-b646-d4cb304ee5b4-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.439990 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1768074-dbe7-4cd0-b646-d4cb304ee5b4-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.440126 5043 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1768074-dbe7-4cd0-b646-d4cb304ee5b4-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.535047 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-stwbd" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.650515 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf" event={"ID":"f1768074-dbe7-4cd0-b646-d4cb304ee5b4","Type":"ContainerDied","Data":"1f402e9880b16772b9f8fa1cc0ffb4d12a74f3fa571c80e292b8568a77685fad"} Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.650560 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f402e9880b16772b9f8fa1cc0ffb4d12a74f3fa571c80e292b8568a77685fad" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.650676 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.726166 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs"] Nov 25 07:38:43 crc kubenswrapper[5043]: E1125 07:38:43.726774 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1768074-dbe7-4cd0-b646-d4cb304ee5b4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.726787 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1768074-dbe7-4cd0-b646-d4cb304ee5b4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.726955 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1768074-dbe7-4cd0-b646-d4cb304ee5b4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.727527 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.734431 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.734528 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.734569 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.734663 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.737388 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs"] Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.847199 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hphv7\" (UniqueName: \"kubernetes.io/projected/ff9438d8-bf96-477b-8e33-f7031940fff7-kube-api-access-hphv7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs\" (UID: \"ff9438d8-bf96-477b-8e33-f7031940fff7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.847296 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff9438d8-bf96-477b-8e33-f7031940fff7-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs\" (UID: \"ff9438d8-bf96-477b-8e33-f7031940fff7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 
07:38:43.847354 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff9438d8-bf96-477b-8e33-f7031940fff7-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs\" (UID: \"ff9438d8-bf96-477b-8e33-f7031940fff7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.847519 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff9438d8-bf96-477b-8e33-f7031940fff7-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs\" (UID: \"ff9438d8-bf96-477b-8e33-f7031940fff7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.949598 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff9438d8-bf96-477b-8e33-f7031940fff7-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs\" (UID: \"ff9438d8-bf96-477b-8e33-f7031940fff7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.949751 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff9438d8-bf96-477b-8e33-f7031940fff7-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs\" (UID: \"ff9438d8-bf96-477b-8e33-f7031940fff7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.949802 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff9438d8-bf96-477b-8e33-f7031940fff7-inventory\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs\" (UID: \"ff9438d8-bf96-477b-8e33-f7031940fff7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.950025 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hphv7\" (UniqueName: \"kubernetes.io/projected/ff9438d8-bf96-477b-8e33-f7031940fff7-kube-api-access-hphv7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs\" (UID: \"ff9438d8-bf96-477b-8e33-f7031940fff7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.954041 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff9438d8-bf96-477b-8e33-f7031940fff7-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs\" (UID: \"ff9438d8-bf96-477b-8e33-f7031940fff7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.954318 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff9438d8-bf96-477b-8e33-f7031940fff7-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs\" (UID: \"ff9438d8-bf96-477b-8e33-f7031940fff7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.956442 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff9438d8-bf96-477b-8e33-f7031940fff7-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs\" (UID: \"ff9438d8-bf96-477b-8e33-f7031940fff7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs" Nov 25 07:38:43 crc kubenswrapper[5043]: I1125 07:38:43.968869 5043 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hphv7\" (UniqueName: \"kubernetes.io/projected/ff9438d8-bf96-477b-8e33-f7031940fff7-kube-api-access-hphv7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs\" (UID: \"ff9438d8-bf96-477b-8e33-f7031940fff7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs" Nov 25 07:38:44 crc kubenswrapper[5043]: I1125 07:38:44.050277 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs" Nov 25 07:38:44 crc kubenswrapper[5043]: I1125 07:38:44.085487 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-stwbd"] Nov 25 07:38:44 crc kubenswrapper[5043]: W1125 07:38:44.094554 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dadd68e_7e67_4a5e_8b1d_03506b8a982a.slice/crio-15cab75b06e57219ce0e8ec82332ed93f6bf05cab47ddf8b667f2fb28d7d4ea2 WatchSource:0}: Error finding container 15cab75b06e57219ce0e8ec82332ed93f6bf05cab47ddf8b667f2fb28d7d4ea2: Status 404 returned error can't find the container with id 15cab75b06e57219ce0e8ec82332ed93f6bf05cab47ddf8b667f2fb28d7d4ea2 Nov 25 07:38:44 crc kubenswrapper[5043]: I1125 07:38:44.573924 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs"] Nov 25 07:38:44 crc kubenswrapper[5043]: I1125 07:38:44.658079 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs" event={"ID":"ff9438d8-bf96-477b-8e33-f7031940fff7","Type":"ContainerStarted","Data":"09dd84885fd695cdb7f012c5c671011689ffbd215d9231f0c5b8535ac04b400a"} Nov 25 07:38:44 crc kubenswrapper[5043]: I1125 07:38:44.660596 5043 generic.go:334] "Generic (PLEG): container finished" podID="3dadd68e-7e67-4a5e-8b1d-03506b8a982a" 
containerID="50cfbd923532cbeb90cf0d16d09bba5950a14e9e2c0e50c434d5872cbb8dd382" exitCode=0 Nov 25 07:38:44 crc kubenswrapper[5043]: I1125 07:38:44.660662 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stwbd" event={"ID":"3dadd68e-7e67-4a5e-8b1d-03506b8a982a","Type":"ContainerDied","Data":"50cfbd923532cbeb90cf0d16d09bba5950a14e9e2c0e50c434d5872cbb8dd382"} Nov 25 07:38:44 crc kubenswrapper[5043]: I1125 07:38:44.660707 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stwbd" event={"ID":"3dadd68e-7e67-4a5e-8b1d-03506b8a982a","Type":"ContainerStarted","Data":"15cab75b06e57219ce0e8ec82332ed93f6bf05cab47ddf8b667f2fb28d7d4ea2"} Nov 25 07:38:45 crc kubenswrapper[5043]: I1125 07:38:45.676723 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs" event={"ID":"ff9438d8-bf96-477b-8e33-f7031940fff7","Type":"ContainerStarted","Data":"e667625672f069025ce62e7d00149b2a84d3f21c2af88a2c53df2a34408c0dd0"} Nov 25 07:38:45 crc kubenswrapper[5043]: I1125 07:38:45.713694 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs" podStartSLOduration=2.282007787 podStartE2EDuration="2.713662182s" podCreationTimestamp="2025-11-25 07:38:43 +0000 UTC" firstStartedPulling="2025-11-25 07:38:44.578305918 +0000 UTC m=+1388.746501639" lastFinishedPulling="2025-11-25 07:38:45.009960303 +0000 UTC m=+1389.178156034" observedRunningTime="2025-11-25 07:38:45.705018668 +0000 UTC m=+1389.873214429" watchObservedRunningTime="2025-11-25 07:38:45.713662182 +0000 UTC m=+1389.881857953" Nov 25 07:38:46 crc kubenswrapper[5043]: I1125 07:38:46.690366 5043 generic.go:334] "Generic (PLEG): container finished" podID="3dadd68e-7e67-4a5e-8b1d-03506b8a982a" containerID="af4002c25bff20f0241f9d6d775f2f68ec9d8eacc194176679df71d38ab80377" exitCode=0 Nov 25 07:38:46 crc 
kubenswrapper[5043]: I1125 07:38:46.690474 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stwbd" event={"ID":"3dadd68e-7e67-4a5e-8b1d-03506b8a982a","Type":"ContainerDied","Data":"af4002c25bff20f0241f9d6d775f2f68ec9d8eacc194176679df71d38ab80377"} Nov 25 07:38:49 crc kubenswrapper[5043]: I1125 07:38:49.728032 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stwbd" event={"ID":"3dadd68e-7e67-4a5e-8b1d-03506b8a982a","Type":"ContainerStarted","Data":"390fb06db0bc48a934c22ae6c5aa5fedba85c4682f98d9cf9b06a0220eda6d9d"} Nov 25 07:38:49 crc kubenswrapper[5043]: I1125 07:38:49.766400 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-stwbd" podStartSLOduration=1.964309354 podStartE2EDuration="6.766376355s" podCreationTimestamp="2025-11-25 07:38:43 +0000 UTC" firstStartedPulling="2025-11-25 07:38:44.662285402 +0000 UTC m=+1388.830481133" lastFinishedPulling="2025-11-25 07:38:49.464352403 +0000 UTC m=+1393.632548134" observedRunningTime="2025-11-25 07:38:49.753721724 +0000 UTC m=+1393.921917465" watchObservedRunningTime="2025-11-25 07:38:49.766376355 +0000 UTC m=+1393.934572086" Nov 25 07:38:53 crc kubenswrapper[5043]: I1125 07:38:53.535223 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-stwbd" Nov 25 07:38:53 crc kubenswrapper[5043]: I1125 07:38:53.537708 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-stwbd" Nov 25 07:38:54 crc kubenswrapper[5043]: I1125 07:38:54.610368 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-stwbd" podUID="3dadd68e-7e67-4a5e-8b1d-03506b8a982a" containerName="registry-server" probeResult="failure" output=< Nov 25 07:38:54 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s 
Nov 25 07:38:54 crc kubenswrapper[5043]: > Nov 25 07:39:03 crc kubenswrapper[5043]: I1125 07:39:03.585355 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-stwbd" Nov 25 07:39:03 crc kubenswrapper[5043]: I1125 07:39:03.651277 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-stwbd" Nov 25 07:39:03 crc kubenswrapper[5043]: I1125 07:39:03.835888 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-stwbd"] Nov 25 07:39:04 crc kubenswrapper[5043]: I1125 07:39:04.892804 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-stwbd" podUID="3dadd68e-7e67-4a5e-8b1d-03506b8a982a" containerName="registry-server" containerID="cri-o://390fb06db0bc48a934c22ae6c5aa5fedba85c4682f98d9cf9b06a0220eda6d9d" gracePeriod=2 Nov 25 07:39:05 crc kubenswrapper[5043]: I1125 07:39:05.315518 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-stwbd" Nov 25 07:39:05 crc kubenswrapper[5043]: I1125 07:39:05.493962 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dadd68e-7e67-4a5e-8b1d-03506b8a982a-catalog-content\") pod \"3dadd68e-7e67-4a5e-8b1d-03506b8a982a\" (UID: \"3dadd68e-7e67-4a5e-8b1d-03506b8a982a\") " Nov 25 07:39:05 crc kubenswrapper[5043]: I1125 07:39:05.494083 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhct2\" (UniqueName: \"kubernetes.io/projected/3dadd68e-7e67-4a5e-8b1d-03506b8a982a-kube-api-access-jhct2\") pod \"3dadd68e-7e67-4a5e-8b1d-03506b8a982a\" (UID: \"3dadd68e-7e67-4a5e-8b1d-03506b8a982a\") " Nov 25 07:39:05 crc kubenswrapper[5043]: I1125 07:39:05.494128 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dadd68e-7e67-4a5e-8b1d-03506b8a982a-utilities\") pod \"3dadd68e-7e67-4a5e-8b1d-03506b8a982a\" (UID: \"3dadd68e-7e67-4a5e-8b1d-03506b8a982a\") " Nov 25 07:39:05 crc kubenswrapper[5043]: I1125 07:39:05.495109 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dadd68e-7e67-4a5e-8b1d-03506b8a982a-utilities" (OuterVolumeSpecName: "utilities") pod "3dadd68e-7e67-4a5e-8b1d-03506b8a982a" (UID: "3dadd68e-7e67-4a5e-8b1d-03506b8a982a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:39:05 crc kubenswrapper[5043]: I1125 07:39:05.499688 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dadd68e-7e67-4a5e-8b1d-03506b8a982a-kube-api-access-jhct2" (OuterVolumeSpecName: "kube-api-access-jhct2") pod "3dadd68e-7e67-4a5e-8b1d-03506b8a982a" (UID: "3dadd68e-7e67-4a5e-8b1d-03506b8a982a"). InnerVolumeSpecName "kube-api-access-jhct2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:39:05 crc kubenswrapper[5043]: I1125 07:39:05.589287 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dadd68e-7e67-4a5e-8b1d-03506b8a982a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3dadd68e-7e67-4a5e-8b1d-03506b8a982a" (UID: "3dadd68e-7e67-4a5e-8b1d-03506b8a982a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:39:05 crc kubenswrapper[5043]: I1125 07:39:05.596667 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dadd68e-7e67-4a5e-8b1d-03506b8a982a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:39:05 crc kubenswrapper[5043]: I1125 07:39:05.596709 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhct2\" (UniqueName: \"kubernetes.io/projected/3dadd68e-7e67-4a5e-8b1d-03506b8a982a-kube-api-access-jhct2\") on node \"crc\" DevicePath \"\"" Nov 25 07:39:05 crc kubenswrapper[5043]: I1125 07:39:05.596723 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dadd68e-7e67-4a5e-8b1d-03506b8a982a-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:39:05 crc kubenswrapper[5043]: I1125 07:39:05.916900 5043 generic.go:334] "Generic (PLEG): container finished" podID="3dadd68e-7e67-4a5e-8b1d-03506b8a982a" containerID="390fb06db0bc48a934c22ae6c5aa5fedba85c4682f98d9cf9b06a0220eda6d9d" exitCode=0 Nov 25 07:39:05 crc kubenswrapper[5043]: I1125 07:39:05.916978 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stwbd" event={"ID":"3dadd68e-7e67-4a5e-8b1d-03506b8a982a","Type":"ContainerDied","Data":"390fb06db0bc48a934c22ae6c5aa5fedba85c4682f98d9cf9b06a0220eda6d9d"} Nov 25 07:39:05 crc kubenswrapper[5043]: I1125 07:39:05.917013 5043 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-stwbd" event={"ID":"3dadd68e-7e67-4a5e-8b1d-03506b8a982a","Type":"ContainerDied","Data":"15cab75b06e57219ce0e8ec82332ed93f6bf05cab47ddf8b667f2fb28d7d4ea2"} Nov 25 07:39:05 crc kubenswrapper[5043]: I1125 07:39:05.917035 5043 scope.go:117] "RemoveContainer" containerID="390fb06db0bc48a934c22ae6c5aa5fedba85c4682f98d9cf9b06a0220eda6d9d" Nov 25 07:39:05 crc kubenswrapper[5043]: I1125 07:39:05.917206 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-stwbd" Nov 25 07:39:05 crc kubenswrapper[5043]: I1125 07:39:05.973487 5043 scope.go:117] "RemoveContainer" containerID="af4002c25bff20f0241f9d6d775f2f68ec9d8eacc194176679df71d38ab80377" Nov 25 07:39:05 crc kubenswrapper[5043]: I1125 07:39:05.974469 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-stwbd"] Nov 25 07:39:05 crc kubenswrapper[5043]: I1125 07:39:05.985954 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-stwbd"] Nov 25 07:39:05 crc kubenswrapper[5043]: I1125 07:39:05.998056 5043 scope.go:117] "RemoveContainer" containerID="50cfbd923532cbeb90cf0d16d09bba5950a14e9e2c0e50c434d5872cbb8dd382" Nov 25 07:39:06 crc kubenswrapper[5043]: I1125 07:39:06.058195 5043 scope.go:117] "RemoveContainer" containerID="390fb06db0bc48a934c22ae6c5aa5fedba85c4682f98d9cf9b06a0220eda6d9d" Nov 25 07:39:06 crc kubenswrapper[5043]: E1125 07:39:06.058829 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"390fb06db0bc48a934c22ae6c5aa5fedba85c4682f98d9cf9b06a0220eda6d9d\": container with ID starting with 390fb06db0bc48a934c22ae6c5aa5fedba85c4682f98d9cf9b06a0220eda6d9d not found: ID does not exist" containerID="390fb06db0bc48a934c22ae6c5aa5fedba85c4682f98d9cf9b06a0220eda6d9d" Nov 25 07:39:06 crc kubenswrapper[5043]: I1125 07:39:06.058888 5043 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"390fb06db0bc48a934c22ae6c5aa5fedba85c4682f98d9cf9b06a0220eda6d9d"} err="failed to get container status \"390fb06db0bc48a934c22ae6c5aa5fedba85c4682f98d9cf9b06a0220eda6d9d\": rpc error: code = NotFound desc = could not find container \"390fb06db0bc48a934c22ae6c5aa5fedba85c4682f98d9cf9b06a0220eda6d9d\": container with ID starting with 390fb06db0bc48a934c22ae6c5aa5fedba85c4682f98d9cf9b06a0220eda6d9d not found: ID does not exist" Nov 25 07:39:06 crc kubenswrapper[5043]: I1125 07:39:06.058926 5043 scope.go:117] "RemoveContainer" containerID="af4002c25bff20f0241f9d6d775f2f68ec9d8eacc194176679df71d38ab80377" Nov 25 07:39:06 crc kubenswrapper[5043]: E1125 07:39:06.059307 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af4002c25bff20f0241f9d6d775f2f68ec9d8eacc194176679df71d38ab80377\": container with ID starting with af4002c25bff20f0241f9d6d775f2f68ec9d8eacc194176679df71d38ab80377 not found: ID does not exist" containerID="af4002c25bff20f0241f9d6d775f2f68ec9d8eacc194176679df71d38ab80377" Nov 25 07:39:06 crc kubenswrapper[5043]: I1125 07:39:06.059343 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af4002c25bff20f0241f9d6d775f2f68ec9d8eacc194176679df71d38ab80377"} err="failed to get container status \"af4002c25bff20f0241f9d6d775f2f68ec9d8eacc194176679df71d38ab80377\": rpc error: code = NotFound desc = could not find container \"af4002c25bff20f0241f9d6d775f2f68ec9d8eacc194176679df71d38ab80377\": container with ID starting with af4002c25bff20f0241f9d6d775f2f68ec9d8eacc194176679df71d38ab80377 not found: ID does not exist" Nov 25 07:39:06 crc kubenswrapper[5043]: I1125 07:39:06.059364 5043 scope.go:117] "RemoveContainer" containerID="50cfbd923532cbeb90cf0d16d09bba5950a14e9e2c0e50c434d5872cbb8dd382" Nov 25 07:39:06 crc kubenswrapper[5043]: E1125 
07:39:06.060679 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50cfbd923532cbeb90cf0d16d09bba5950a14e9e2c0e50c434d5872cbb8dd382\": container with ID starting with 50cfbd923532cbeb90cf0d16d09bba5950a14e9e2c0e50c434d5872cbb8dd382 not found: ID does not exist" containerID="50cfbd923532cbeb90cf0d16d09bba5950a14e9e2c0e50c434d5872cbb8dd382" Nov 25 07:39:06 crc kubenswrapper[5043]: I1125 07:39:06.060731 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50cfbd923532cbeb90cf0d16d09bba5950a14e9e2c0e50c434d5872cbb8dd382"} err="failed to get container status \"50cfbd923532cbeb90cf0d16d09bba5950a14e9e2c0e50c434d5872cbb8dd382\": rpc error: code = NotFound desc = could not find container \"50cfbd923532cbeb90cf0d16d09bba5950a14e9e2c0e50c434d5872cbb8dd382\": container with ID starting with 50cfbd923532cbeb90cf0d16d09bba5950a14e9e2c0e50c434d5872cbb8dd382 not found: ID does not exist" Nov 25 07:39:06 crc kubenswrapper[5043]: I1125 07:39:06.976882 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dadd68e-7e67-4a5e-8b1d-03506b8a982a" path="/var/lib/kubelet/pods/3dadd68e-7e67-4a5e-8b1d-03506b8a982a/volumes" Nov 25 07:39:48 crc kubenswrapper[5043]: I1125 07:39:48.674274 5043 scope.go:117] "RemoveContainer" containerID="c4c40f4c8dde9cb0d8fd804208661b84ab8884edfc7b454a6e7e3aa8f428a91f" Nov 25 07:39:48 crc kubenswrapper[5043]: I1125 07:39:48.709869 5043 scope.go:117] "RemoveContainer" containerID="0dd421468fea7a8b8bf89e958972a7c3c728b97bb1544cd27ead479202cfc859" Nov 25 07:39:48 crc kubenswrapper[5043]: I1125 07:39:48.743882 5043 scope.go:117] "RemoveContainer" containerID="c275bdb92ddc262f7c7fa026fd9654fe49a792a9bdc1e8fd1d4cf678dbd59511" Nov 25 07:39:48 crc kubenswrapper[5043]: I1125 07:39:48.794304 5043 scope.go:117] "RemoveContainer" containerID="de734fa7785c5dd7ca0067be0ffadfa0d854720554e95047470229d42e275c7f" Nov 25 07:40:09 crc 
kubenswrapper[5043]: I1125 07:40:09.015655 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4stdc"] Nov 25 07:40:09 crc kubenswrapper[5043]: E1125 07:40:09.017068 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dadd68e-7e67-4a5e-8b1d-03506b8a982a" containerName="extract-content" Nov 25 07:40:09 crc kubenswrapper[5043]: I1125 07:40:09.017099 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dadd68e-7e67-4a5e-8b1d-03506b8a982a" containerName="extract-content" Nov 25 07:40:09 crc kubenswrapper[5043]: E1125 07:40:09.017152 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dadd68e-7e67-4a5e-8b1d-03506b8a982a" containerName="registry-server" Nov 25 07:40:09 crc kubenswrapper[5043]: I1125 07:40:09.017165 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dadd68e-7e67-4a5e-8b1d-03506b8a982a" containerName="registry-server" Nov 25 07:40:09 crc kubenswrapper[5043]: E1125 07:40:09.017202 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dadd68e-7e67-4a5e-8b1d-03506b8a982a" containerName="extract-utilities" Nov 25 07:40:09 crc kubenswrapper[5043]: I1125 07:40:09.017217 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dadd68e-7e67-4a5e-8b1d-03506b8a982a" containerName="extract-utilities" Nov 25 07:40:09 crc kubenswrapper[5043]: I1125 07:40:09.017542 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dadd68e-7e67-4a5e-8b1d-03506b8a982a" containerName="registry-server" Nov 25 07:40:09 crc kubenswrapper[5043]: I1125 07:40:09.022813 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4stdc" Nov 25 07:40:09 crc kubenswrapper[5043]: I1125 07:40:09.054463 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4stdc"] Nov 25 07:40:09 crc kubenswrapper[5043]: I1125 07:40:09.179668 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af76f1a8-f05d-4a2f-b56b-ed08760bb9bc-utilities\") pod \"community-operators-4stdc\" (UID: \"af76f1a8-f05d-4a2f-b56b-ed08760bb9bc\") " pod="openshift-marketplace/community-operators-4stdc" Nov 25 07:40:09 crc kubenswrapper[5043]: I1125 07:40:09.179772 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nsfs\" (UniqueName: \"kubernetes.io/projected/af76f1a8-f05d-4a2f-b56b-ed08760bb9bc-kube-api-access-8nsfs\") pod \"community-operators-4stdc\" (UID: \"af76f1a8-f05d-4a2f-b56b-ed08760bb9bc\") " pod="openshift-marketplace/community-operators-4stdc" Nov 25 07:40:09 crc kubenswrapper[5043]: I1125 07:40:09.179864 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af76f1a8-f05d-4a2f-b56b-ed08760bb9bc-catalog-content\") pod \"community-operators-4stdc\" (UID: \"af76f1a8-f05d-4a2f-b56b-ed08760bb9bc\") " pod="openshift-marketplace/community-operators-4stdc" Nov 25 07:40:09 crc kubenswrapper[5043]: I1125 07:40:09.281821 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af76f1a8-f05d-4a2f-b56b-ed08760bb9bc-utilities\") pod \"community-operators-4stdc\" (UID: \"af76f1a8-f05d-4a2f-b56b-ed08760bb9bc\") " pod="openshift-marketplace/community-operators-4stdc" Nov 25 07:40:09 crc kubenswrapper[5043]: I1125 07:40:09.281912 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8nsfs\" (UniqueName: \"kubernetes.io/projected/af76f1a8-f05d-4a2f-b56b-ed08760bb9bc-kube-api-access-8nsfs\") pod \"community-operators-4stdc\" (UID: \"af76f1a8-f05d-4a2f-b56b-ed08760bb9bc\") " pod="openshift-marketplace/community-operators-4stdc" Nov 25 07:40:09 crc kubenswrapper[5043]: I1125 07:40:09.281960 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af76f1a8-f05d-4a2f-b56b-ed08760bb9bc-catalog-content\") pod \"community-operators-4stdc\" (UID: \"af76f1a8-f05d-4a2f-b56b-ed08760bb9bc\") " pod="openshift-marketplace/community-operators-4stdc" Nov 25 07:40:09 crc kubenswrapper[5043]: I1125 07:40:09.282469 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af76f1a8-f05d-4a2f-b56b-ed08760bb9bc-catalog-content\") pod \"community-operators-4stdc\" (UID: \"af76f1a8-f05d-4a2f-b56b-ed08760bb9bc\") " pod="openshift-marketplace/community-operators-4stdc" Nov 25 07:40:09 crc kubenswrapper[5043]: I1125 07:40:09.282768 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af76f1a8-f05d-4a2f-b56b-ed08760bb9bc-utilities\") pod \"community-operators-4stdc\" (UID: \"af76f1a8-f05d-4a2f-b56b-ed08760bb9bc\") " pod="openshift-marketplace/community-operators-4stdc" Nov 25 07:40:09 crc kubenswrapper[5043]: I1125 07:40:09.303345 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nsfs\" (UniqueName: \"kubernetes.io/projected/af76f1a8-f05d-4a2f-b56b-ed08760bb9bc-kube-api-access-8nsfs\") pod \"community-operators-4stdc\" (UID: \"af76f1a8-f05d-4a2f-b56b-ed08760bb9bc\") " pod="openshift-marketplace/community-operators-4stdc" Nov 25 07:40:09 crc kubenswrapper[5043]: I1125 07:40:09.373498 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4stdc" Nov 25 07:40:09 crc kubenswrapper[5043]: I1125 07:40:09.895790 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4stdc"] Nov 25 07:40:10 crc kubenswrapper[5043]: I1125 07:40:10.578771 5043 generic.go:334] "Generic (PLEG): container finished" podID="af76f1a8-f05d-4a2f-b56b-ed08760bb9bc" containerID="f667978f981f5f54cd3cee40de4cce2eef099ce4784d5f97ae5ff3e899611f58" exitCode=0 Nov 25 07:40:10 crc kubenswrapper[5043]: I1125 07:40:10.578831 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4stdc" event={"ID":"af76f1a8-f05d-4a2f-b56b-ed08760bb9bc","Type":"ContainerDied","Data":"f667978f981f5f54cd3cee40de4cce2eef099ce4784d5f97ae5ff3e899611f58"} Nov 25 07:40:10 crc kubenswrapper[5043]: I1125 07:40:10.579118 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4stdc" event={"ID":"af76f1a8-f05d-4a2f-b56b-ed08760bb9bc","Type":"ContainerStarted","Data":"97527f116a5e54eb331575466aa9898f0781a4cd44e71376b2e06052c51f88ff"} Nov 25 07:40:12 crc kubenswrapper[5043]: I1125 07:40:12.604113 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4stdc" event={"ID":"af76f1a8-f05d-4a2f-b56b-ed08760bb9bc","Type":"ContainerStarted","Data":"3bb41309ebe707d779cd24016d272a160cdb123b23ed95a1f4a7d37a1a752d4e"} Nov 25 07:40:13 crc kubenswrapper[5043]: I1125 07:40:13.622415 5043 generic.go:334] "Generic (PLEG): container finished" podID="af76f1a8-f05d-4a2f-b56b-ed08760bb9bc" containerID="3bb41309ebe707d779cd24016d272a160cdb123b23ed95a1f4a7d37a1a752d4e" exitCode=0 Nov 25 07:40:13 crc kubenswrapper[5043]: I1125 07:40:13.622488 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4stdc" 
event={"ID":"af76f1a8-f05d-4a2f-b56b-ed08760bb9bc","Type":"ContainerDied","Data":"3bb41309ebe707d779cd24016d272a160cdb123b23ed95a1f4a7d37a1a752d4e"} Nov 25 07:40:15 crc kubenswrapper[5043]: I1125 07:40:15.941459 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4stdc" event={"ID":"af76f1a8-f05d-4a2f-b56b-ed08760bb9bc","Type":"ContainerStarted","Data":"c0a3fd6aaf9cd2dd00e3a1621194478e97b02243c773ab5d931a313337cb243f"} Nov 25 07:40:15 crc kubenswrapper[5043]: I1125 07:40:15.972384 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4stdc" podStartSLOduration=3.6036857810000003 podStartE2EDuration="7.972362181s" podCreationTimestamp="2025-11-25 07:40:08 +0000 UTC" firstStartedPulling="2025-11-25 07:40:10.581735454 +0000 UTC m=+1474.749931215" lastFinishedPulling="2025-11-25 07:40:14.950411894 +0000 UTC m=+1479.118607615" observedRunningTime="2025-11-25 07:40:15.965093795 +0000 UTC m=+1480.133289516" watchObservedRunningTime="2025-11-25 07:40:15.972362181 +0000 UTC m=+1480.140557912" Nov 25 07:40:17 crc kubenswrapper[5043]: I1125 07:40:17.276960 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:40:17 crc kubenswrapper[5043]: I1125 07:40:17.277298 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:40:19 crc kubenswrapper[5043]: I1125 07:40:19.375044 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-4stdc" Nov 25 07:40:19 crc kubenswrapper[5043]: I1125 07:40:19.375432 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4stdc" Nov 25 07:40:19 crc kubenswrapper[5043]: I1125 07:40:19.474253 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4stdc" Nov 25 07:40:29 crc kubenswrapper[5043]: I1125 07:40:29.445705 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4stdc" Nov 25 07:40:29 crc kubenswrapper[5043]: I1125 07:40:29.506978 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4stdc"] Nov 25 07:40:30 crc kubenswrapper[5043]: I1125 07:40:30.109016 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4stdc" podUID="af76f1a8-f05d-4a2f-b56b-ed08760bb9bc" containerName="registry-server" containerID="cri-o://c0a3fd6aaf9cd2dd00e3a1621194478e97b02243c773ab5d931a313337cb243f" gracePeriod=2 Nov 25 07:40:30 crc kubenswrapper[5043]: I1125 07:40:30.565075 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4stdc" Nov 25 07:40:30 crc kubenswrapper[5043]: I1125 07:40:30.697833 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af76f1a8-f05d-4a2f-b56b-ed08760bb9bc-catalog-content\") pod \"af76f1a8-f05d-4a2f-b56b-ed08760bb9bc\" (UID: \"af76f1a8-f05d-4a2f-b56b-ed08760bb9bc\") " Nov 25 07:40:30 crc kubenswrapper[5043]: I1125 07:40:30.697986 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nsfs\" (UniqueName: \"kubernetes.io/projected/af76f1a8-f05d-4a2f-b56b-ed08760bb9bc-kube-api-access-8nsfs\") pod \"af76f1a8-f05d-4a2f-b56b-ed08760bb9bc\" (UID: \"af76f1a8-f05d-4a2f-b56b-ed08760bb9bc\") " Nov 25 07:40:30 crc kubenswrapper[5043]: I1125 07:40:30.698762 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af76f1a8-f05d-4a2f-b56b-ed08760bb9bc-utilities\") pod \"af76f1a8-f05d-4a2f-b56b-ed08760bb9bc\" (UID: \"af76f1a8-f05d-4a2f-b56b-ed08760bb9bc\") " Nov 25 07:40:30 crc kubenswrapper[5043]: I1125 07:40:30.700440 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af76f1a8-f05d-4a2f-b56b-ed08760bb9bc-utilities" (OuterVolumeSpecName: "utilities") pod "af76f1a8-f05d-4a2f-b56b-ed08760bb9bc" (UID: "af76f1a8-f05d-4a2f-b56b-ed08760bb9bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:40:30 crc kubenswrapper[5043]: I1125 07:40:30.704299 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af76f1a8-f05d-4a2f-b56b-ed08760bb9bc-kube-api-access-8nsfs" (OuterVolumeSpecName: "kube-api-access-8nsfs") pod "af76f1a8-f05d-4a2f-b56b-ed08760bb9bc" (UID: "af76f1a8-f05d-4a2f-b56b-ed08760bb9bc"). InnerVolumeSpecName "kube-api-access-8nsfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:40:30 crc kubenswrapper[5043]: I1125 07:40:30.773270 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af76f1a8-f05d-4a2f-b56b-ed08760bb9bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af76f1a8-f05d-4a2f-b56b-ed08760bb9bc" (UID: "af76f1a8-f05d-4a2f-b56b-ed08760bb9bc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:40:30 crc kubenswrapper[5043]: I1125 07:40:30.801150 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nsfs\" (UniqueName: \"kubernetes.io/projected/af76f1a8-f05d-4a2f-b56b-ed08760bb9bc-kube-api-access-8nsfs\") on node \"crc\" DevicePath \"\"" Nov 25 07:40:30 crc kubenswrapper[5043]: I1125 07:40:30.801190 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af76f1a8-f05d-4a2f-b56b-ed08760bb9bc-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:40:30 crc kubenswrapper[5043]: I1125 07:40:30.801203 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af76f1a8-f05d-4a2f-b56b-ed08760bb9bc-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:40:31 crc kubenswrapper[5043]: I1125 07:40:31.123480 5043 generic.go:334] "Generic (PLEG): container finished" podID="af76f1a8-f05d-4a2f-b56b-ed08760bb9bc" containerID="c0a3fd6aaf9cd2dd00e3a1621194478e97b02243c773ab5d931a313337cb243f" exitCode=0 Nov 25 07:40:31 crc kubenswrapper[5043]: I1125 07:40:31.123546 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4stdc" event={"ID":"af76f1a8-f05d-4a2f-b56b-ed08760bb9bc","Type":"ContainerDied","Data":"c0a3fd6aaf9cd2dd00e3a1621194478e97b02243c773ab5d931a313337cb243f"} Nov 25 07:40:31 crc kubenswrapper[5043]: I1125 07:40:31.123592 5043 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-4stdc" event={"ID":"af76f1a8-f05d-4a2f-b56b-ed08760bb9bc","Type":"ContainerDied","Data":"97527f116a5e54eb331575466aa9898f0781a4cd44e71376b2e06052c51f88ff"} Nov 25 07:40:31 crc kubenswrapper[5043]: I1125 07:40:31.123645 5043 scope.go:117] "RemoveContainer" containerID="c0a3fd6aaf9cd2dd00e3a1621194478e97b02243c773ab5d931a313337cb243f" Nov 25 07:40:31 crc kubenswrapper[5043]: I1125 07:40:31.123595 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4stdc" Nov 25 07:40:31 crc kubenswrapper[5043]: I1125 07:40:31.150758 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4stdc"] Nov 25 07:40:31 crc kubenswrapper[5043]: I1125 07:40:31.158129 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4stdc"] Nov 25 07:40:31 crc kubenswrapper[5043]: I1125 07:40:31.180150 5043 scope.go:117] "RemoveContainer" containerID="3bb41309ebe707d779cd24016d272a160cdb123b23ed95a1f4a7d37a1a752d4e" Nov 25 07:40:31 crc kubenswrapper[5043]: I1125 07:40:31.221508 5043 scope.go:117] "RemoveContainer" containerID="f667978f981f5f54cd3cee40de4cce2eef099ce4784d5f97ae5ff3e899611f58" Nov 25 07:40:31 crc kubenswrapper[5043]: I1125 07:40:31.259590 5043 scope.go:117] "RemoveContainer" containerID="c0a3fd6aaf9cd2dd00e3a1621194478e97b02243c773ab5d931a313337cb243f" Nov 25 07:40:31 crc kubenswrapper[5043]: E1125 07:40:31.260117 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0a3fd6aaf9cd2dd00e3a1621194478e97b02243c773ab5d931a313337cb243f\": container with ID starting with c0a3fd6aaf9cd2dd00e3a1621194478e97b02243c773ab5d931a313337cb243f not found: ID does not exist" containerID="c0a3fd6aaf9cd2dd00e3a1621194478e97b02243c773ab5d931a313337cb243f" Nov 25 07:40:31 crc kubenswrapper[5043]: I1125 
07:40:31.260155 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0a3fd6aaf9cd2dd00e3a1621194478e97b02243c773ab5d931a313337cb243f"} err="failed to get container status \"c0a3fd6aaf9cd2dd00e3a1621194478e97b02243c773ab5d931a313337cb243f\": rpc error: code = NotFound desc = could not find container \"c0a3fd6aaf9cd2dd00e3a1621194478e97b02243c773ab5d931a313337cb243f\": container with ID starting with c0a3fd6aaf9cd2dd00e3a1621194478e97b02243c773ab5d931a313337cb243f not found: ID does not exist" Nov 25 07:40:31 crc kubenswrapper[5043]: I1125 07:40:31.260182 5043 scope.go:117] "RemoveContainer" containerID="3bb41309ebe707d779cd24016d272a160cdb123b23ed95a1f4a7d37a1a752d4e" Nov 25 07:40:31 crc kubenswrapper[5043]: E1125 07:40:31.260643 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bb41309ebe707d779cd24016d272a160cdb123b23ed95a1f4a7d37a1a752d4e\": container with ID starting with 3bb41309ebe707d779cd24016d272a160cdb123b23ed95a1f4a7d37a1a752d4e not found: ID does not exist" containerID="3bb41309ebe707d779cd24016d272a160cdb123b23ed95a1f4a7d37a1a752d4e" Nov 25 07:40:31 crc kubenswrapper[5043]: I1125 07:40:31.260675 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bb41309ebe707d779cd24016d272a160cdb123b23ed95a1f4a7d37a1a752d4e"} err="failed to get container status \"3bb41309ebe707d779cd24016d272a160cdb123b23ed95a1f4a7d37a1a752d4e\": rpc error: code = NotFound desc = could not find container \"3bb41309ebe707d779cd24016d272a160cdb123b23ed95a1f4a7d37a1a752d4e\": container with ID starting with 3bb41309ebe707d779cd24016d272a160cdb123b23ed95a1f4a7d37a1a752d4e not found: ID does not exist" Nov 25 07:40:31 crc kubenswrapper[5043]: I1125 07:40:31.260693 5043 scope.go:117] "RemoveContainer" containerID="f667978f981f5f54cd3cee40de4cce2eef099ce4784d5f97ae5ff3e899611f58" Nov 25 07:40:31 crc 
kubenswrapper[5043]: E1125 07:40:31.261143 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f667978f981f5f54cd3cee40de4cce2eef099ce4784d5f97ae5ff3e899611f58\": container with ID starting with f667978f981f5f54cd3cee40de4cce2eef099ce4784d5f97ae5ff3e899611f58 not found: ID does not exist" containerID="f667978f981f5f54cd3cee40de4cce2eef099ce4784d5f97ae5ff3e899611f58" Nov 25 07:40:31 crc kubenswrapper[5043]: I1125 07:40:31.261172 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f667978f981f5f54cd3cee40de4cce2eef099ce4784d5f97ae5ff3e899611f58"} err="failed to get container status \"f667978f981f5f54cd3cee40de4cce2eef099ce4784d5f97ae5ff3e899611f58\": rpc error: code = NotFound desc = could not find container \"f667978f981f5f54cd3cee40de4cce2eef099ce4784d5f97ae5ff3e899611f58\": container with ID starting with f667978f981f5f54cd3cee40de4cce2eef099ce4784d5f97ae5ff3e899611f58 not found: ID does not exist" Nov 25 07:40:32 crc kubenswrapper[5043]: I1125 07:40:32.977657 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af76f1a8-f05d-4a2f-b56b-ed08760bb9bc" path="/var/lib/kubelet/pods/af76f1a8-f05d-4a2f-b56b-ed08760bb9bc/volumes" Nov 25 07:40:47 crc kubenswrapper[5043]: I1125 07:40:47.276186 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:40:47 crc kubenswrapper[5043]: I1125 07:40:47.276928 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 25 07:40:48 crc kubenswrapper[5043]: I1125 07:40:48.911475 5043 scope.go:117] "RemoveContainer" containerID="d4dacfd9a229030ea71710a5e7f3d92776ec10a07d9c5e4f30afaf694d407647" Nov 25 07:40:48 crc kubenswrapper[5043]: I1125 07:40:48.939475 5043 scope.go:117] "RemoveContainer" containerID="0fc86210600a5ee4c5aa24a4c1a56c2c65771be667f47e6abcf32f6dc5b17553" Nov 25 07:41:08 crc kubenswrapper[5043]: I1125 07:41:08.272002 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bhxsz"] Nov 25 07:41:08 crc kubenswrapper[5043]: E1125 07:41:08.273312 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af76f1a8-f05d-4a2f-b56b-ed08760bb9bc" containerName="extract-utilities" Nov 25 07:41:08 crc kubenswrapper[5043]: I1125 07:41:08.273333 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="af76f1a8-f05d-4a2f-b56b-ed08760bb9bc" containerName="extract-utilities" Nov 25 07:41:08 crc kubenswrapper[5043]: E1125 07:41:08.273365 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af76f1a8-f05d-4a2f-b56b-ed08760bb9bc" containerName="registry-server" Nov 25 07:41:08 crc kubenswrapper[5043]: I1125 07:41:08.273374 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="af76f1a8-f05d-4a2f-b56b-ed08760bb9bc" containerName="registry-server" Nov 25 07:41:08 crc kubenswrapper[5043]: E1125 07:41:08.273391 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af76f1a8-f05d-4a2f-b56b-ed08760bb9bc" containerName="extract-content" Nov 25 07:41:08 crc kubenswrapper[5043]: I1125 07:41:08.273402 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="af76f1a8-f05d-4a2f-b56b-ed08760bb9bc" containerName="extract-content" Nov 25 07:41:08 crc kubenswrapper[5043]: I1125 07:41:08.273723 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="af76f1a8-f05d-4a2f-b56b-ed08760bb9bc" containerName="registry-server" Nov 25 07:41:08 crc 
kubenswrapper[5043]: I1125 07:41:08.275887 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bhxsz" Nov 25 07:41:08 crc kubenswrapper[5043]: I1125 07:41:08.285500 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhxsz"] Nov 25 07:41:08 crc kubenswrapper[5043]: I1125 07:41:08.402788 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvxrz\" (UniqueName: \"kubernetes.io/projected/629b6f60-a9cd-4f31-b826-5136defb59a0-kube-api-access-tvxrz\") pod \"redhat-marketplace-bhxsz\" (UID: \"629b6f60-a9cd-4f31-b826-5136defb59a0\") " pod="openshift-marketplace/redhat-marketplace-bhxsz" Nov 25 07:41:08 crc kubenswrapper[5043]: I1125 07:41:08.402922 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/629b6f60-a9cd-4f31-b826-5136defb59a0-utilities\") pod \"redhat-marketplace-bhxsz\" (UID: \"629b6f60-a9cd-4f31-b826-5136defb59a0\") " pod="openshift-marketplace/redhat-marketplace-bhxsz" Nov 25 07:41:08 crc kubenswrapper[5043]: I1125 07:41:08.403034 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/629b6f60-a9cd-4f31-b826-5136defb59a0-catalog-content\") pod \"redhat-marketplace-bhxsz\" (UID: \"629b6f60-a9cd-4f31-b826-5136defb59a0\") " pod="openshift-marketplace/redhat-marketplace-bhxsz" Nov 25 07:41:08 crc kubenswrapper[5043]: I1125 07:41:08.504521 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/629b6f60-a9cd-4f31-b826-5136defb59a0-catalog-content\") pod \"redhat-marketplace-bhxsz\" (UID: \"629b6f60-a9cd-4f31-b826-5136defb59a0\") " pod="openshift-marketplace/redhat-marketplace-bhxsz" Nov 25 07:41:08 
crc kubenswrapper[5043]: I1125 07:41:08.504649 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvxrz\" (UniqueName: \"kubernetes.io/projected/629b6f60-a9cd-4f31-b826-5136defb59a0-kube-api-access-tvxrz\") pod \"redhat-marketplace-bhxsz\" (UID: \"629b6f60-a9cd-4f31-b826-5136defb59a0\") " pod="openshift-marketplace/redhat-marketplace-bhxsz" Nov 25 07:41:08 crc kubenswrapper[5043]: I1125 07:41:08.504755 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/629b6f60-a9cd-4f31-b826-5136defb59a0-utilities\") pod \"redhat-marketplace-bhxsz\" (UID: \"629b6f60-a9cd-4f31-b826-5136defb59a0\") " pod="openshift-marketplace/redhat-marketplace-bhxsz" Nov 25 07:41:08 crc kubenswrapper[5043]: I1125 07:41:08.505122 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/629b6f60-a9cd-4f31-b826-5136defb59a0-catalog-content\") pod \"redhat-marketplace-bhxsz\" (UID: \"629b6f60-a9cd-4f31-b826-5136defb59a0\") " pod="openshift-marketplace/redhat-marketplace-bhxsz" Nov 25 07:41:08 crc kubenswrapper[5043]: I1125 07:41:08.505472 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/629b6f60-a9cd-4f31-b826-5136defb59a0-utilities\") pod \"redhat-marketplace-bhxsz\" (UID: \"629b6f60-a9cd-4f31-b826-5136defb59a0\") " pod="openshift-marketplace/redhat-marketplace-bhxsz" Nov 25 07:41:08 crc kubenswrapper[5043]: I1125 07:41:08.534027 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvxrz\" (UniqueName: \"kubernetes.io/projected/629b6f60-a9cd-4f31-b826-5136defb59a0-kube-api-access-tvxrz\") pod \"redhat-marketplace-bhxsz\" (UID: \"629b6f60-a9cd-4f31-b826-5136defb59a0\") " pod="openshift-marketplace/redhat-marketplace-bhxsz" Nov 25 07:41:08 crc kubenswrapper[5043]: I1125 07:41:08.609980 
5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bhxsz" Nov 25 07:41:09 crc kubenswrapper[5043]: I1125 07:41:09.161969 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhxsz"] Nov 25 07:41:09 crc kubenswrapper[5043]: I1125 07:41:09.523137 5043 generic.go:334] "Generic (PLEG): container finished" podID="629b6f60-a9cd-4f31-b826-5136defb59a0" containerID="8a8183b0dcc2d023ad2321ba2bb5d3a66cf94ab036a4ae983bf8df29b15b72b8" exitCode=0 Nov 25 07:41:09 crc kubenswrapper[5043]: I1125 07:41:09.523193 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhxsz" event={"ID":"629b6f60-a9cd-4f31-b826-5136defb59a0","Type":"ContainerDied","Data":"8a8183b0dcc2d023ad2321ba2bb5d3a66cf94ab036a4ae983bf8df29b15b72b8"} Nov 25 07:41:09 crc kubenswrapper[5043]: I1125 07:41:09.523227 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhxsz" event={"ID":"629b6f60-a9cd-4f31-b826-5136defb59a0","Type":"ContainerStarted","Data":"20d109280067b675e9c37caf9d7f179bb941bcdfabe9b22a9edede8ef7897bf5"} Nov 25 07:41:10 crc kubenswrapper[5043]: I1125 07:41:10.534807 5043 generic.go:334] "Generic (PLEG): container finished" podID="629b6f60-a9cd-4f31-b826-5136defb59a0" containerID="ac14b59aa899060b0ba652aafe1b4f453ca08fe7d57495db44397ffcad5fd99d" exitCode=0 Nov 25 07:41:10 crc kubenswrapper[5043]: I1125 07:41:10.534930 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhxsz" event={"ID":"629b6f60-a9cd-4f31-b826-5136defb59a0","Type":"ContainerDied","Data":"ac14b59aa899060b0ba652aafe1b4f453ca08fe7d57495db44397ffcad5fd99d"} Nov 25 07:41:12 crc kubenswrapper[5043]: I1125 07:41:12.556385 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhxsz" 
event={"ID":"629b6f60-a9cd-4f31-b826-5136defb59a0","Type":"ContainerStarted","Data":"738ddc8e7abf7eba5a814b4ca1dd050c76bc2137e74eedcfbb607f1b274a8a04"}
Nov 25 07:41:12 crc kubenswrapper[5043]: I1125 07:41:12.583025 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bhxsz" podStartSLOduration=1.98552428 podStartE2EDuration="4.583003406s" podCreationTimestamp="2025-11-25 07:41:08 +0000 UTC" firstStartedPulling="2025-11-25 07:41:09.524544634 +0000 UTC m=+1533.692740355" lastFinishedPulling="2025-11-25 07:41:12.12202376 +0000 UTC m=+1536.290219481" observedRunningTime="2025-11-25 07:41:12.574296072 +0000 UTC m=+1536.742491803" watchObservedRunningTime="2025-11-25 07:41:12.583003406 +0000 UTC m=+1536.751199137"
Nov 25 07:41:17 crc kubenswrapper[5043]: I1125 07:41:17.276035 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 07:41:17 crc kubenswrapper[5043]: I1125 07:41:17.276919 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 07:41:17 crc kubenswrapper[5043]: I1125 07:41:17.277018 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx"
Nov 25 07:41:17 crc kubenswrapper[5043]: I1125 07:41:17.278496 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474"} pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 07:41:17 crc kubenswrapper[5043]: I1125 07:41:17.278650 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474" gracePeriod=600
Nov 25 07:41:17 crc kubenswrapper[5043]: E1125 07:41:17.403532 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 07:41:17 crc kubenswrapper[5043]: I1125 07:41:17.622869 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474" exitCode=0
Nov 25 07:41:17 crc kubenswrapper[5043]: I1125 07:41:17.622969 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474"}
Nov 25 07:41:17 crc kubenswrapper[5043]: I1125 07:41:17.623447 5043 scope.go:117] "RemoveContainer" containerID="a39678aadaf4f8799d011d172223bff66847f6049bb09f87a23b01f3ae1af7cd"
Nov 25 07:41:17 crc kubenswrapper[5043]: I1125 07:41:17.624186 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474"
Nov 25 07:41:17 crc kubenswrapper[5043]: E1125 07:41:17.624658 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 07:41:18 crc kubenswrapper[5043]: I1125 07:41:18.610309 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bhxsz"
Nov 25 07:41:18 crc kubenswrapper[5043]: I1125 07:41:18.610886 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bhxsz"
Nov 25 07:41:18 crc kubenswrapper[5043]: I1125 07:41:18.700361 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bhxsz"
Nov 25 07:41:18 crc kubenswrapper[5043]: I1125 07:41:18.778509 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bhxsz"
Nov 25 07:41:18 crc kubenswrapper[5043]: I1125 07:41:18.953375 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhxsz"]
Nov 25 07:41:20 crc kubenswrapper[5043]: I1125 07:41:20.654774 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bhxsz" podUID="629b6f60-a9cd-4f31-b826-5136defb59a0" containerName="registry-server" containerID="cri-o://738ddc8e7abf7eba5a814b4ca1dd050c76bc2137e74eedcfbb607f1b274a8a04" gracePeriod=2
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.200572 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bhxsz"
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.356504 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/629b6f60-a9cd-4f31-b826-5136defb59a0-catalog-content\") pod \"629b6f60-a9cd-4f31-b826-5136defb59a0\" (UID: \"629b6f60-a9cd-4f31-b826-5136defb59a0\") "
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.356653 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/629b6f60-a9cd-4f31-b826-5136defb59a0-utilities\") pod \"629b6f60-a9cd-4f31-b826-5136defb59a0\" (UID: \"629b6f60-a9cd-4f31-b826-5136defb59a0\") "
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.357011 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvxrz\" (UniqueName: \"kubernetes.io/projected/629b6f60-a9cd-4f31-b826-5136defb59a0-kube-api-access-tvxrz\") pod \"629b6f60-a9cd-4f31-b826-5136defb59a0\" (UID: \"629b6f60-a9cd-4f31-b826-5136defb59a0\") "
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.358511 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/629b6f60-a9cd-4f31-b826-5136defb59a0-utilities" (OuterVolumeSpecName: "utilities") pod "629b6f60-a9cd-4f31-b826-5136defb59a0" (UID: "629b6f60-a9cd-4f31-b826-5136defb59a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.369965 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/629b6f60-a9cd-4f31-b826-5136defb59a0-kube-api-access-tvxrz" (OuterVolumeSpecName: "kube-api-access-tvxrz") pod "629b6f60-a9cd-4f31-b826-5136defb59a0" (UID: "629b6f60-a9cd-4f31-b826-5136defb59a0"). InnerVolumeSpecName "kube-api-access-tvxrz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.381950 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/629b6f60-a9cd-4f31-b826-5136defb59a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "629b6f60-a9cd-4f31-b826-5136defb59a0" (UID: "629b6f60-a9cd-4f31-b826-5136defb59a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.459939 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvxrz\" (UniqueName: \"kubernetes.io/projected/629b6f60-a9cd-4f31-b826-5136defb59a0-kube-api-access-tvxrz\") on node \"crc\" DevicePath \"\""
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.460000 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/629b6f60-a9cd-4f31-b826-5136defb59a0-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.460020 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/629b6f60-a9cd-4f31-b826-5136defb59a0-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.669510 5043 generic.go:334] "Generic (PLEG): container finished" podID="629b6f60-a9cd-4f31-b826-5136defb59a0" containerID="738ddc8e7abf7eba5a814b4ca1dd050c76bc2137e74eedcfbb607f1b274a8a04" exitCode=0
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.669558 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhxsz" event={"ID":"629b6f60-a9cd-4f31-b826-5136defb59a0","Type":"ContainerDied","Data":"738ddc8e7abf7eba5a814b4ca1dd050c76bc2137e74eedcfbb607f1b274a8a04"}
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.669593 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhxsz" event={"ID":"629b6f60-a9cd-4f31-b826-5136defb59a0","Type":"ContainerDied","Data":"20d109280067b675e9c37caf9d7f179bb941bcdfabe9b22a9edede8ef7897bf5"}
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.669631 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bhxsz"
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.669683 5043 scope.go:117] "RemoveContainer" containerID="738ddc8e7abf7eba5a814b4ca1dd050c76bc2137e74eedcfbb607f1b274a8a04"
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.708073 5043 scope.go:117] "RemoveContainer" containerID="ac14b59aa899060b0ba652aafe1b4f453ca08fe7d57495db44397ffcad5fd99d"
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.722899 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhxsz"]
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.734132 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhxsz"]
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.762254 5043 scope.go:117] "RemoveContainer" containerID="8a8183b0dcc2d023ad2321ba2bb5d3a66cf94ab036a4ae983bf8df29b15b72b8"
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.807200 5043 scope.go:117] "RemoveContainer" containerID="738ddc8e7abf7eba5a814b4ca1dd050c76bc2137e74eedcfbb607f1b274a8a04"
Nov 25 07:41:21 crc kubenswrapper[5043]: E1125 07:41:21.808000 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"738ddc8e7abf7eba5a814b4ca1dd050c76bc2137e74eedcfbb607f1b274a8a04\": container with ID starting with 738ddc8e7abf7eba5a814b4ca1dd050c76bc2137e74eedcfbb607f1b274a8a04 not found: ID does not exist" containerID="738ddc8e7abf7eba5a814b4ca1dd050c76bc2137e74eedcfbb607f1b274a8a04"
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.808073 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"738ddc8e7abf7eba5a814b4ca1dd050c76bc2137e74eedcfbb607f1b274a8a04"} err="failed to get container status \"738ddc8e7abf7eba5a814b4ca1dd050c76bc2137e74eedcfbb607f1b274a8a04\": rpc error: code = NotFound desc = could not find container \"738ddc8e7abf7eba5a814b4ca1dd050c76bc2137e74eedcfbb607f1b274a8a04\": container with ID starting with 738ddc8e7abf7eba5a814b4ca1dd050c76bc2137e74eedcfbb607f1b274a8a04 not found: ID does not exist"
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.808115 5043 scope.go:117] "RemoveContainer" containerID="ac14b59aa899060b0ba652aafe1b4f453ca08fe7d57495db44397ffcad5fd99d"
Nov 25 07:41:21 crc kubenswrapper[5043]: E1125 07:41:21.808703 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac14b59aa899060b0ba652aafe1b4f453ca08fe7d57495db44397ffcad5fd99d\": container with ID starting with ac14b59aa899060b0ba652aafe1b4f453ca08fe7d57495db44397ffcad5fd99d not found: ID does not exist" containerID="ac14b59aa899060b0ba652aafe1b4f453ca08fe7d57495db44397ffcad5fd99d"
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.808773 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac14b59aa899060b0ba652aafe1b4f453ca08fe7d57495db44397ffcad5fd99d"} err="failed to get container status \"ac14b59aa899060b0ba652aafe1b4f453ca08fe7d57495db44397ffcad5fd99d\": rpc error: code = NotFound desc = could not find container \"ac14b59aa899060b0ba652aafe1b4f453ca08fe7d57495db44397ffcad5fd99d\": container with ID starting with ac14b59aa899060b0ba652aafe1b4f453ca08fe7d57495db44397ffcad5fd99d not found: ID does not exist"
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.808824 5043 scope.go:117] "RemoveContainer" containerID="8a8183b0dcc2d023ad2321ba2bb5d3a66cf94ab036a4ae983bf8df29b15b72b8"
Nov 25 07:41:21 crc kubenswrapper[5043]: E1125 07:41:21.809219 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a8183b0dcc2d023ad2321ba2bb5d3a66cf94ab036a4ae983bf8df29b15b72b8\": container with ID starting with 8a8183b0dcc2d023ad2321ba2bb5d3a66cf94ab036a4ae983bf8df29b15b72b8 not found: ID does not exist" containerID="8a8183b0dcc2d023ad2321ba2bb5d3a66cf94ab036a4ae983bf8df29b15b72b8"
Nov 25 07:41:21 crc kubenswrapper[5043]: I1125 07:41:21.809270 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a8183b0dcc2d023ad2321ba2bb5d3a66cf94ab036a4ae983bf8df29b15b72b8"} err="failed to get container status \"8a8183b0dcc2d023ad2321ba2bb5d3a66cf94ab036a4ae983bf8df29b15b72b8\": rpc error: code = NotFound desc = could not find container \"8a8183b0dcc2d023ad2321ba2bb5d3a66cf94ab036a4ae983bf8df29b15b72b8\": container with ID starting with 8a8183b0dcc2d023ad2321ba2bb5d3a66cf94ab036a4ae983bf8df29b15b72b8 not found: ID does not exist"
Nov 25 07:41:22 crc kubenswrapper[5043]: I1125 07:41:22.976354 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="629b6f60-a9cd-4f31-b826-5136defb59a0" path="/var/lib/kubelet/pods/629b6f60-a9cd-4f31-b826-5136defb59a0/volumes"
Nov 25 07:41:31 crc kubenswrapper[5043]: I1125 07:41:31.963578 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474"
Nov 25 07:41:31 crc kubenswrapper[5043]: E1125 07:41:31.964729 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 07:41:42 crc kubenswrapper[5043]: I1125 07:41:42.963184 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474"
Nov 25 07:41:42 crc kubenswrapper[5043]: E1125 07:41:42.965148 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 07:41:57 crc kubenswrapper[5043]: I1125 07:41:57.962413 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474"
Nov 25 07:41:57 crc kubenswrapper[5043]: E1125 07:41:57.963088 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 07:42:12 crc kubenswrapper[5043]: I1125 07:42:12.963654 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474"
Nov 25 07:42:12 crc kubenswrapper[5043]: E1125 07:42:12.964402 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 07:42:24 crc kubenswrapper[5043]: I1125 07:42:24.962862 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474"
Nov 25 07:42:24 crc kubenswrapper[5043]: E1125 07:42:24.963698 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 07:42:27 crc kubenswrapper[5043]: I1125 07:42:27.371884 5043 generic.go:334] "Generic (PLEG): container finished" podID="ff9438d8-bf96-477b-8e33-f7031940fff7" containerID="e667625672f069025ce62e7d00149b2a84d3f21c2af88a2c53df2a34408c0dd0" exitCode=0
Nov 25 07:42:27 crc kubenswrapper[5043]: I1125 07:42:27.372036 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs" event={"ID":"ff9438d8-bf96-477b-8e33-f7031940fff7","Type":"ContainerDied","Data":"e667625672f069025ce62e7d00149b2a84d3f21c2af88a2c53df2a34408c0dd0"}
Nov 25 07:42:28 crc kubenswrapper[5043]: I1125 07:42:28.771273 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs"
Nov 25 07:42:28 crc kubenswrapper[5043]: I1125 07:42:28.896764 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff9438d8-bf96-477b-8e33-f7031940fff7-ssh-key\") pod \"ff9438d8-bf96-477b-8e33-f7031940fff7\" (UID: \"ff9438d8-bf96-477b-8e33-f7031940fff7\") "
Nov 25 07:42:28 crc kubenswrapper[5043]: I1125 07:42:28.896879 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff9438d8-bf96-477b-8e33-f7031940fff7-inventory\") pod \"ff9438d8-bf96-477b-8e33-f7031940fff7\" (UID: \"ff9438d8-bf96-477b-8e33-f7031940fff7\") "
Nov 25 07:42:28 crc kubenswrapper[5043]: I1125 07:42:28.896971 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff9438d8-bf96-477b-8e33-f7031940fff7-bootstrap-combined-ca-bundle\") pod \"ff9438d8-bf96-477b-8e33-f7031940fff7\" (UID: \"ff9438d8-bf96-477b-8e33-f7031940fff7\") "
Nov 25 07:42:28 crc kubenswrapper[5043]: I1125 07:42:28.897001 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hphv7\" (UniqueName: \"kubernetes.io/projected/ff9438d8-bf96-477b-8e33-f7031940fff7-kube-api-access-hphv7\") pod \"ff9438d8-bf96-477b-8e33-f7031940fff7\" (UID: \"ff9438d8-bf96-477b-8e33-f7031940fff7\") "
Nov 25 07:42:28 crc kubenswrapper[5043]: I1125 07:42:28.902002 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff9438d8-bf96-477b-8e33-f7031940fff7-kube-api-access-hphv7" (OuterVolumeSpecName: "kube-api-access-hphv7") pod "ff9438d8-bf96-477b-8e33-f7031940fff7" (UID: "ff9438d8-bf96-477b-8e33-f7031940fff7"). InnerVolumeSpecName "kube-api-access-hphv7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:42:28 crc kubenswrapper[5043]: I1125 07:42:28.902868 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff9438d8-bf96-477b-8e33-f7031940fff7-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ff9438d8-bf96-477b-8e33-f7031940fff7" (UID: "ff9438d8-bf96-477b-8e33-f7031940fff7"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:42:28 crc kubenswrapper[5043]: I1125 07:42:28.920940 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff9438d8-bf96-477b-8e33-f7031940fff7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ff9438d8-bf96-477b-8e33-f7031940fff7" (UID: "ff9438d8-bf96-477b-8e33-f7031940fff7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:42:28 crc kubenswrapper[5043]: I1125 07:42:28.927086 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff9438d8-bf96-477b-8e33-f7031940fff7-inventory" (OuterVolumeSpecName: "inventory") pod "ff9438d8-bf96-477b-8e33-f7031940fff7" (UID: "ff9438d8-bf96-477b-8e33-f7031940fff7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:42:28 crc kubenswrapper[5043]: I1125 07:42:28.999128 5043 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff9438d8-bf96-477b-8e33-f7031940fff7-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 07:42:28 crc kubenswrapper[5043]: I1125 07:42:28.999157 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hphv7\" (UniqueName: \"kubernetes.io/projected/ff9438d8-bf96-477b-8e33-f7031940fff7-kube-api-access-hphv7\") on node \"crc\" DevicePath \"\""
Nov 25 07:42:28 crc kubenswrapper[5043]: I1125 07:42:28.999166 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff9438d8-bf96-477b-8e33-f7031940fff7-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 25 07:42:28 crc kubenswrapper[5043]: I1125 07:42:28.999175 5043 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff9438d8-bf96-477b-8e33-f7031940fff7-inventory\") on node \"crc\" DevicePath \"\""
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.401655 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs" event={"ID":"ff9438d8-bf96-477b-8e33-f7031940fff7","Type":"ContainerDied","Data":"09dd84885fd695cdb7f012c5c671011689ffbd215d9231f0c5b8535ac04b400a"}
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.401715 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09dd84885fd695cdb7f012c5c671011689ffbd215d9231f0c5b8535ac04b400a"
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.401714 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs"
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.471575 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2"]
Nov 25 07:42:29 crc kubenswrapper[5043]: E1125 07:42:29.472091 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629b6f60-a9cd-4f31-b826-5136defb59a0" containerName="extract-utilities"
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.472110 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="629b6f60-a9cd-4f31-b826-5136defb59a0" containerName="extract-utilities"
Nov 25 07:42:29 crc kubenswrapper[5043]: E1125 07:42:29.472126 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9438d8-bf96-477b-8e33-f7031940fff7" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.472134 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff9438d8-bf96-477b-8e33-f7031940fff7" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Nov 25 07:42:29 crc kubenswrapper[5043]: E1125 07:42:29.472148 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629b6f60-a9cd-4f31-b826-5136defb59a0" containerName="registry-server"
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.472154 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="629b6f60-a9cd-4f31-b826-5136defb59a0" containerName="registry-server"
Nov 25 07:42:29 crc kubenswrapper[5043]: E1125 07:42:29.472174 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629b6f60-a9cd-4f31-b826-5136defb59a0" containerName="extract-content"
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.472179 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="629b6f60-a9cd-4f31-b826-5136defb59a0" containerName="extract-content"
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.472360 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="629b6f60-a9cd-4f31-b826-5136defb59a0" containerName="registry-server"
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.472371 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff9438d8-bf96-477b-8e33-f7031940fff7" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.472947 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2"
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.474893 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.475162 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2"
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.475869 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.480435 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.488909 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2"]
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.509478 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzvrf\" (UniqueName: \"kubernetes.io/projected/fe182205-f0dc-4ca1-9110-ca21d5e49620-kube-api-access-hzvrf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2\" (UID: \"fe182205-f0dc-4ca1-9110-ca21d5e49620\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2"
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.509643 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe182205-f0dc-4ca1-9110-ca21d5e49620-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2\" (UID: \"fe182205-f0dc-4ca1-9110-ca21d5e49620\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2"
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.509921 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fe182205-f0dc-4ca1-9110-ca21d5e49620-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2\" (UID: \"fe182205-f0dc-4ca1-9110-ca21d5e49620\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2"
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.612015 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe182205-f0dc-4ca1-9110-ca21d5e49620-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2\" (UID: \"fe182205-f0dc-4ca1-9110-ca21d5e49620\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2"
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.612119 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fe182205-f0dc-4ca1-9110-ca21d5e49620-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2\" (UID: \"fe182205-f0dc-4ca1-9110-ca21d5e49620\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2"
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.612181 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzvrf\" (UniqueName: \"kubernetes.io/projected/fe182205-f0dc-4ca1-9110-ca21d5e49620-kube-api-access-hzvrf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2\" (UID: \"fe182205-f0dc-4ca1-9110-ca21d5e49620\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2"
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.617026 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe182205-f0dc-4ca1-9110-ca21d5e49620-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2\" (UID: \"fe182205-f0dc-4ca1-9110-ca21d5e49620\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2"
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.617543 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fe182205-f0dc-4ca1-9110-ca21d5e49620-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2\" (UID: \"fe182205-f0dc-4ca1-9110-ca21d5e49620\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2"
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.648424 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzvrf\" (UniqueName: \"kubernetes.io/projected/fe182205-f0dc-4ca1-9110-ca21d5e49620-kube-api-access-hzvrf\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2\" (UID: \"fe182205-f0dc-4ca1-9110-ca21d5e49620\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2"
Nov 25 07:42:29 crc kubenswrapper[5043]: I1125 07:42:29.788173 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2"
Nov 25 07:42:30 crc kubenswrapper[5043]: I1125 07:42:30.181063 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2"]
Nov 25 07:42:30 crc kubenswrapper[5043]: W1125 07:42:30.181885 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe182205_f0dc_4ca1_9110_ca21d5e49620.slice/crio-3099067aa4708788734055635ea4d4df3052f7cdf4e4a2a11f5083634b9f6f79 WatchSource:0}: Error finding container 3099067aa4708788734055635ea4d4df3052f7cdf4e4a2a11f5083634b9f6f79: Status 404 returned error can't find the container with id 3099067aa4708788734055635ea4d4df3052f7cdf4e4a2a11f5083634b9f6f79
Nov 25 07:42:30 crc kubenswrapper[5043]: I1125 07:42:30.186523 5043 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 25 07:42:30 crc kubenswrapper[5043]: I1125 07:42:30.415553 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2" event={"ID":"fe182205-f0dc-4ca1-9110-ca21d5e49620","Type":"ContainerStarted","Data":"3099067aa4708788734055635ea4d4df3052f7cdf4e4a2a11f5083634b9f6f79"}
Nov 25 07:42:31 crc kubenswrapper[5043]: I1125 07:42:31.432102 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2" event={"ID":"fe182205-f0dc-4ca1-9110-ca21d5e49620","Type":"ContainerStarted","Data":"0b22b8f8531151638a715eb9062387b749352386849c8f1d64c63289145bb41d"}
Nov 25 07:42:31 crc kubenswrapper[5043]: I1125 07:42:31.460389 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2" podStartSLOduration=2.075694994 podStartE2EDuration="2.460360374s" podCreationTimestamp="2025-11-25 07:42:29 +0000 UTC" firstStartedPulling="2025-11-25 07:42:30.18619882 +0000 UTC m=+1614.354394551" lastFinishedPulling="2025-11-25 07:42:30.57086422 +0000 UTC m=+1614.739059931" observedRunningTime="2025-11-25 07:42:31.457652712 +0000 UTC m=+1615.625848463" watchObservedRunningTime="2025-11-25 07:42:31.460360374 +0000 UTC m=+1615.628556125"
Nov 25 07:42:32 crc kubenswrapper[5043]: I1125 07:42:32.049810 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-sstg4"]
Nov 25 07:42:32 crc kubenswrapper[5043]: I1125 07:42:32.063036 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-1e41-account-create-kbqzd"]
Nov 25 07:42:32 crc kubenswrapper[5043]: I1125 07:42:32.084316 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-7hjzt"]
Nov 25 07:42:32 crc kubenswrapper[5043]: I1125 07:42:32.093570 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-7hjzt"]
Nov 25 07:42:32 crc kubenswrapper[5043]: I1125 07:42:32.114067 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-sstg4"]
Nov 25 07:42:32 crc kubenswrapper[5043]: I1125 07:42:32.120592 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-1e41-account-create-kbqzd"]
Nov 25 07:42:32 crc kubenswrapper[5043]: I1125 07:42:32.977310 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="370681cd-6bb3-47a4-939e-c705ee3814bd" path="/var/lib/kubelet/pods/370681cd-6bb3-47a4-939e-c705ee3814bd/volumes"
Nov 25 07:42:32 crc kubenswrapper[5043]: I1125 07:42:32.978083 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf28ec77-4f7c-43a6-9bab-6ff49979b68d" path="/var/lib/kubelet/pods/bf28ec77-4f7c-43a6-9bab-6ff49979b68d/volumes"
Nov 25 07:42:32 crc kubenswrapper[5043]: I1125 07:42:32.978687 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d287967c-61b6-4dc4-bb3f-91e576e6d0c7" path="/var/lib/kubelet/pods/d287967c-61b6-4dc4-bb3f-91e576e6d0c7/volumes"
Nov 25 07:42:33 crc kubenswrapper[5043]: I1125 07:42:33.051933 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2c5e-account-create-qsdcj"]
Nov 25 07:42:33 crc kubenswrapper[5043]: I1125 07:42:33.068275 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d9cc-account-create-2wd58"]
Nov 25 07:42:33 crc kubenswrapper[5043]: I1125 07:42:33.079780 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d9cc-account-create-2wd58"]
Nov 25 07:42:33 crc kubenswrapper[5043]: I1125 07:42:33.092585 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-2c5e-account-create-qsdcj"]
Nov 25 07:42:34 crc kubenswrapper[5043]: I1125 07:42:34.038416 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-qghlr"]
Nov 25 07:42:34 crc kubenswrapper[5043]: I1125 07:42:34.052031 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-qghlr"]
Nov 25 07:42:34 crc kubenswrapper[5043]: I1125 07:42:34.975746 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9" path="/var/lib/kubelet/pods/b2ffa878-ab91-4ab1-bcfd-834dc8ebe5c9/volumes"
Nov 25 07:42:34 crc kubenswrapper[5043]: I1125 07:42:34.976533 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7b5fe88-9221-486b-8686-bee5da9fcbf9" path="/var/lib/kubelet/pods/e7b5fe88-9221-486b-8686-bee5da9fcbf9/volumes"
Nov 25 07:42:34 crc kubenswrapper[5043]: I1125 07:42:34.977363 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff376c2e-dfad-443a-8ad3-b5a1cd40cd12" path="/var/lib/kubelet/pods/ff376c2e-dfad-443a-8ad3-b5a1cd40cd12/volumes"
Nov 25 07:42:36 crc kubenswrapper[5043]: I1125 07:42:36.971593 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474"
Nov 25 07:42:36 crc kubenswrapper[5043]: E1125 07:42:36.972346 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 07:42:49 crc kubenswrapper[5043]: I1125 07:42:49.069769 5043 scope.go:117] "RemoveContainer" containerID="a24e9d5e9d1de1a57b02c78028508d23930ba166376297d367ad797a8a8069f5"
Nov 25 07:42:49 crc kubenswrapper[5043]: I1125 07:42:49.123276 5043 scope.go:117] "RemoveContainer" containerID="b850dac5c228560d4c5ba5d1ef5dd96840fff2caebd9886e50d1408a7c0a9fc6"
Nov 25 07:42:49 crc kubenswrapper[5043]: I1125 07:42:49.145659 5043 scope.go:117] "RemoveContainer" containerID="5ea72c00f03900258dab3e55d88807719ee15883e1f86a0c006c1d843fe502f4"
Nov 25 07:42:49 crc kubenswrapper[5043]: I1125 07:42:49.198260 5043 scope.go:117] "RemoveContainer" containerID="22c7c3385d9878c7706d6d212d89fc74be251621e3195aea0dd427eafc1cbb2f"
Nov 25 07:42:49 crc kubenswrapper[5043]: I1125 07:42:49.231645 5043 scope.go:117] "RemoveContainer" containerID="eca358f1661ec4e0fd8e099ecf980430ebfe1ce60a9a7f056ba054739d6aa7dc"
Nov 25 07:42:49 crc kubenswrapper[5043]: I1125 07:42:49.269828 5043 scope.go:117] "RemoveContainer" containerID="67579140ac2601af62780d644497d4586aff0d8b69751f84a7a0ba627326f598"
Nov 25 07:42:49 crc kubenswrapper[5043]: I1125 07:42:49.290106 5043 scope.go:117] "RemoveContainer" containerID="fb9d13a55233be813f959467f1e38ec06f8ce9a61950e853205103db8c47c718"
Nov 25 07:42:49 crc kubenswrapper[5043]: I1125 07:42:49.326313 5043 scope.go:117] "RemoveContainer"
containerID="89969ff8c95dbefacf3c62f031a014f7bd98d73f114c23521e79d41c4338d474" Nov 25 07:42:51 crc kubenswrapper[5043]: I1125 07:42:51.962875 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474" Nov 25 07:42:51 crc kubenswrapper[5043]: E1125 07:42:51.963801 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:42:57 crc kubenswrapper[5043]: I1125 07:42:57.031145 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-cn67d"] Nov 25 07:42:57 crc kubenswrapper[5043]: I1125 07:42:57.038048 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-cn67d"] Nov 25 07:42:58 crc kubenswrapper[5043]: I1125 07:42:58.974371 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a15aa1c-a8ff-46d3-9893-a3ea429171b8" path="/var/lib/kubelet/pods/9a15aa1c-a8ff-46d3-9893-a3ea429171b8/volumes" Nov 25 07:42:59 crc kubenswrapper[5043]: I1125 07:42:59.029600 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-szw8b"] Nov 25 07:42:59 crc kubenswrapper[5043]: I1125 07:42:59.037413 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-szw8b"] Nov 25 07:43:00 crc kubenswrapper[5043]: I1125 07:43:00.032532 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-mwtqz"] Nov 25 07:43:00 crc kubenswrapper[5043]: I1125 07:43:00.042528 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-4xbtr"] Nov 25 07:43:00 crc kubenswrapper[5043]: I1125 
07:43:00.054772 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-f62d-account-create-24psn"] Nov 25 07:43:00 crc kubenswrapper[5043]: I1125 07:43:00.064974 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-58b8-account-create-95g6x"] Nov 25 07:43:00 crc kubenswrapper[5043]: I1125 07:43:00.074673 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9a38-account-create-4ldc5"] Nov 25 07:43:00 crc kubenswrapper[5043]: I1125 07:43:00.082864 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-f62d-account-create-24psn"] Nov 25 07:43:00 crc kubenswrapper[5043]: I1125 07:43:00.090263 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-mwtqz"] Nov 25 07:43:00 crc kubenswrapper[5043]: I1125 07:43:00.096959 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-4xbtr"] Nov 25 07:43:00 crc kubenswrapper[5043]: I1125 07:43:00.103136 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-58b8-account-create-95g6x"] Nov 25 07:43:00 crc kubenswrapper[5043]: I1125 07:43:00.110061 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-9a38-account-create-4ldc5"] Nov 25 07:43:00 crc kubenswrapper[5043]: I1125 07:43:00.975775 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27c9d3ac-a1e3-4354-b3b2-31bc32818a60" path="/var/lib/kubelet/pods/27c9d3ac-a1e3-4354-b3b2-31bc32818a60/volumes" Nov 25 07:43:00 crc kubenswrapper[5043]: I1125 07:43:00.976529 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d943615-5ac6-450f-aeec-baa4d0833e9b" path="/var/lib/kubelet/pods/4d943615-5ac6-450f-aeec-baa4d0833e9b/volumes" Nov 25 07:43:00 crc kubenswrapper[5043]: I1125 07:43:00.977379 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b63c108-4aa7-49c8-a12c-51554851c41e" 
path="/var/lib/kubelet/pods/8b63c108-4aa7-49c8-a12c-51554851c41e/volumes" Nov 25 07:43:00 crc kubenswrapper[5043]: I1125 07:43:00.978148 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2859a00-00bb-4358-a60e-083415c768e1" path="/var/lib/kubelet/pods/a2859a00-00bb-4358-a60e-083415c768e1/volumes" Nov 25 07:43:00 crc kubenswrapper[5043]: I1125 07:43:00.980407 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa69cf56-800b-45ac-8f74-18f393900d61" path="/var/lib/kubelet/pods/aa69cf56-800b-45ac-8f74-18f393900d61/volumes" Nov 25 07:43:00 crc kubenswrapper[5043]: I1125 07:43:00.981746 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eda179e9-563d-426d-b4a9-1aca3f47acfe" path="/var/lib/kubelet/pods/eda179e9-563d-426d-b4a9-1aca3f47acfe/volumes" Nov 25 07:43:04 crc kubenswrapper[5043]: I1125 07:43:04.963360 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474" Nov 25 07:43:04 crc kubenswrapper[5043]: E1125 07:43:04.964269 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:43:15 crc kubenswrapper[5043]: I1125 07:43:15.047327 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-cdl2v"] Nov 25 07:43:15 crc kubenswrapper[5043]: I1125 07:43:15.054889 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-cdl2v"] Nov 25 07:43:16 crc kubenswrapper[5043]: I1125 07:43:16.973943 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eb045c8-8071-479e-ae53-287767fb69b9" 
path="/var/lib/kubelet/pods/1eb045c8-8071-479e-ae53-287767fb69b9/volumes" Nov 25 07:43:19 crc kubenswrapper[5043]: I1125 07:43:19.963817 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474" Nov 25 07:43:19 crc kubenswrapper[5043]: E1125 07:43:19.964544 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:43:33 crc kubenswrapper[5043]: I1125 07:43:33.963745 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474" Nov 25 07:43:33 crc kubenswrapper[5043]: E1125 07:43:33.964872 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:43:45 crc kubenswrapper[5043]: I1125 07:43:45.962658 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474" Nov 25 07:43:45 crc kubenswrapper[5043]: E1125 07:43:45.963564 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:43:49 crc kubenswrapper[5043]: I1125 07:43:49.484837 5043 scope.go:117] "RemoveContainer" containerID="4cb15547b8709d3294837362c16e6b4594c441118d6ecaf76b064edda9e78b45" Nov 25 07:43:49 crc kubenswrapper[5043]: I1125 07:43:49.523732 5043 scope.go:117] "RemoveContainer" containerID="aa79346d8afebd8bca6a1dcf82cccc5641b4ce2f03f8c33ffa3a7c1d45f1e55b" Nov 25 07:43:49 crc kubenswrapper[5043]: I1125 07:43:49.591474 5043 scope.go:117] "RemoveContainer" containerID="c31d71c70f6037f883d0758f93b18ef8949b3c68648ef1a295aad9aea02f2467" Nov 25 07:43:49 crc kubenswrapper[5043]: I1125 07:43:49.634967 5043 scope.go:117] "RemoveContainer" containerID="0c4d45c85703f31b14930296b4c5a163f5c9e3feffd780f0535dcbc54a113c2d" Nov 25 07:43:49 crc kubenswrapper[5043]: I1125 07:43:49.685881 5043 scope.go:117] "RemoveContainer" containerID="5a8600776776123457e72ccfed961a2b07316700ad3ff95c800322249dc7ef78" Nov 25 07:43:49 crc kubenswrapper[5043]: I1125 07:43:49.722540 5043 scope.go:117] "RemoveContainer" containerID="161d74bdd037f2ba4bba94bafd8f7d94e2ce7189ed535577f904e7f347e9a7e9" Nov 25 07:43:49 crc kubenswrapper[5043]: I1125 07:43:49.777887 5043 scope.go:117] "RemoveContainer" containerID="675e623c76b689689c591b0c7fba3f6b32642b63e1bb62af9ec60c7ad4289a9c" Nov 25 07:43:49 crc kubenswrapper[5043]: I1125 07:43:49.826872 5043 scope.go:117] "RemoveContainer" containerID="22ebb668532db10b70a217bedb9a1ee4f24cd9cc62b4cc04a2fa73e511198be5" Nov 25 07:43:51 crc kubenswrapper[5043]: I1125 07:43:51.319167 5043 generic.go:334] "Generic (PLEG): container finished" podID="fe182205-f0dc-4ca1-9110-ca21d5e49620" containerID="0b22b8f8531151638a715eb9062387b749352386849c8f1d64c63289145bb41d" exitCode=0 Nov 25 07:43:51 crc kubenswrapper[5043]: I1125 07:43:51.319230 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2" 
event={"ID":"fe182205-f0dc-4ca1-9110-ca21d5e49620","Type":"ContainerDied","Data":"0b22b8f8531151638a715eb9062387b749352386849c8f1d64c63289145bb41d"} Nov 25 07:43:52 crc kubenswrapper[5043]: I1125 07:43:52.711299 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2" Nov 25 07:43:52 crc kubenswrapper[5043]: I1125 07:43:52.845454 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe182205-f0dc-4ca1-9110-ca21d5e49620-inventory\") pod \"fe182205-f0dc-4ca1-9110-ca21d5e49620\" (UID: \"fe182205-f0dc-4ca1-9110-ca21d5e49620\") " Nov 25 07:43:52 crc kubenswrapper[5043]: I1125 07:43:52.845534 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fe182205-f0dc-4ca1-9110-ca21d5e49620-ssh-key\") pod \"fe182205-f0dc-4ca1-9110-ca21d5e49620\" (UID: \"fe182205-f0dc-4ca1-9110-ca21d5e49620\") " Nov 25 07:43:52 crc kubenswrapper[5043]: I1125 07:43:52.845702 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzvrf\" (UniqueName: \"kubernetes.io/projected/fe182205-f0dc-4ca1-9110-ca21d5e49620-kube-api-access-hzvrf\") pod \"fe182205-f0dc-4ca1-9110-ca21d5e49620\" (UID: \"fe182205-f0dc-4ca1-9110-ca21d5e49620\") " Nov 25 07:43:52 crc kubenswrapper[5043]: I1125 07:43:52.851317 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe182205-f0dc-4ca1-9110-ca21d5e49620-kube-api-access-hzvrf" (OuterVolumeSpecName: "kube-api-access-hzvrf") pod "fe182205-f0dc-4ca1-9110-ca21d5e49620" (UID: "fe182205-f0dc-4ca1-9110-ca21d5e49620"). InnerVolumeSpecName "kube-api-access-hzvrf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:43:52 crc kubenswrapper[5043]: I1125 07:43:52.874463 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe182205-f0dc-4ca1-9110-ca21d5e49620-inventory" (OuterVolumeSpecName: "inventory") pod "fe182205-f0dc-4ca1-9110-ca21d5e49620" (UID: "fe182205-f0dc-4ca1-9110-ca21d5e49620"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:43:52 crc kubenswrapper[5043]: I1125 07:43:52.879654 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe182205-f0dc-4ca1-9110-ca21d5e49620-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fe182205-f0dc-4ca1-9110-ca21d5e49620" (UID: "fe182205-f0dc-4ca1-9110-ca21d5e49620"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:43:52 crc kubenswrapper[5043]: I1125 07:43:52.947559 5043 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe182205-f0dc-4ca1-9110-ca21d5e49620-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:43:52 crc kubenswrapper[5043]: I1125 07:43:52.947586 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fe182205-f0dc-4ca1-9110-ca21d5e49620-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:43:52 crc kubenswrapper[5043]: I1125 07:43:52.947601 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzvrf\" (UniqueName: \"kubernetes.io/projected/fe182205-f0dc-4ca1-9110-ca21d5e49620-kube-api-access-hzvrf\") on node \"crc\" DevicePath \"\"" Nov 25 07:43:53 crc kubenswrapper[5043]: I1125 07:43:53.344364 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2" 
event={"ID":"fe182205-f0dc-4ca1-9110-ca21d5e49620","Type":"ContainerDied","Data":"3099067aa4708788734055635ea4d4df3052f7cdf4e4a2a11f5083634b9f6f79"} Nov 25 07:43:53 crc kubenswrapper[5043]: I1125 07:43:53.344404 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2" Nov 25 07:43:53 crc kubenswrapper[5043]: I1125 07:43:53.344422 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3099067aa4708788734055635ea4d4df3052f7cdf4e4a2a11f5083634b9f6f79" Nov 25 07:43:53 crc kubenswrapper[5043]: I1125 07:43:53.474194 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm"] Nov 25 07:43:53 crc kubenswrapper[5043]: E1125 07:43:53.475195 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe182205-f0dc-4ca1-9110-ca21d5e49620" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 07:43:53 crc kubenswrapper[5043]: I1125 07:43:53.475520 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe182205-f0dc-4ca1-9110-ca21d5e49620" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 07:43:53 crc kubenswrapper[5043]: I1125 07:43:53.476191 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe182205-f0dc-4ca1-9110-ca21d5e49620" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 07:43:53 crc kubenswrapper[5043]: I1125 07:43:53.477237 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm" Nov 25 07:43:53 crc kubenswrapper[5043]: I1125 07:43:53.480061 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2" Nov 25 07:43:53 crc kubenswrapper[5043]: I1125 07:43:53.480273 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:43:53 crc kubenswrapper[5043]: I1125 07:43:53.480447 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:43:53 crc kubenswrapper[5043]: I1125 07:43:53.480569 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:43:53 crc kubenswrapper[5043]: I1125 07:43:53.495412 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm"] Nov 25 07:43:53 crc kubenswrapper[5043]: I1125 07:43:53.669324 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phxsb\" (UniqueName: \"kubernetes.io/projected/4edb7473-0040-4944-aff9-fb0a7588d84f-kube-api-access-phxsb\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm\" (UID: \"4edb7473-0040-4944-aff9-fb0a7588d84f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm" Nov 25 07:43:53 crc kubenswrapper[5043]: I1125 07:43:53.669523 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4edb7473-0040-4944-aff9-fb0a7588d84f-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm\" (UID: \"4edb7473-0040-4944-aff9-fb0a7588d84f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm" Nov 25 07:43:53 crc kubenswrapper[5043]: I1125 
07:43:53.669673 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4edb7473-0040-4944-aff9-fb0a7588d84f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm\" (UID: \"4edb7473-0040-4944-aff9-fb0a7588d84f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm" Nov 25 07:43:53 crc kubenswrapper[5043]: I1125 07:43:53.771885 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phxsb\" (UniqueName: \"kubernetes.io/projected/4edb7473-0040-4944-aff9-fb0a7588d84f-kube-api-access-phxsb\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm\" (UID: \"4edb7473-0040-4944-aff9-fb0a7588d84f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm" Nov 25 07:43:53 crc kubenswrapper[5043]: I1125 07:43:53.772210 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4edb7473-0040-4944-aff9-fb0a7588d84f-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm\" (UID: \"4edb7473-0040-4944-aff9-fb0a7588d84f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm" Nov 25 07:43:53 crc kubenswrapper[5043]: I1125 07:43:53.772321 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4edb7473-0040-4944-aff9-fb0a7588d84f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm\" (UID: \"4edb7473-0040-4944-aff9-fb0a7588d84f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm" Nov 25 07:43:53 crc kubenswrapper[5043]: I1125 07:43:53.782366 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4edb7473-0040-4944-aff9-fb0a7588d84f-ssh-key\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm\" (UID: \"4edb7473-0040-4944-aff9-fb0a7588d84f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm" Nov 25 07:43:53 crc kubenswrapper[5043]: I1125 07:43:53.782718 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4edb7473-0040-4944-aff9-fb0a7588d84f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm\" (UID: \"4edb7473-0040-4944-aff9-fb0a7588d84f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm" Nov 25 07:43:53 crc kubenswrapper[5043]: I1125 07:43:53.789995 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phxsb\" (UniqueName: \"kubernetes.io/projected/4edb7473-0040-4944-aff9-fb0a7588d84f-kube-api-access-phxsb\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm\" (UID: \"4edb7473-0040-4944-aff9-fb0a7588d84f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm" Nov 25 07:43:53 crc kubenswrapper[5043]: I1125 07:43:53.806740 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm" Nov 25 07:43:54 crc kubenswrapper[5043]: I1125 07:43:54.331019 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm"] Nov 25 07:43:54 crc kubenswrapper[5043]: W1125 07:43:54.338308 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4edb7473_0040_4944_aff9_fb0a7588d84f.slice/crio-3f0d25db2f459f379d3c5c86b8e7e4dfe5b91363c87448295b9463851341f5bf WatchSource:0}: Error finding container 3f0d25db2f459f379d3c5c86b8e7e4dfe5b91363c87448295b9463851341f5bf: Status 404 returned error can't find the container with id 3f0d25db2f459f379d3c5c86b8e7e4dfe5b91363c87448295b9463851341f5bf Nov 25 07:43:54 crc kubenswrapper[5043]: I1125 07:43:54.353705 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm" event={"ID":"4edb7473-0040-4944-aff9-fb0a7588d84f","Type":"ContainerStarted","Data":"3f0d25db2f459f379d3c5c86b8e7e4dfe5b91363c87448295b9463851341f5bf"} Nov 25 07:43:55 crc kubenswrapper[5043]: I1125 07:43:55.364162 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm" event={"ID":"4edb7473-0040-4944-aff9-fb0a7588d84f","Type":"ContainerStarted","Data":"f78f09763dfa815ed6cc59ab6882de2bf91e0f2df8f2b50b03d2c4a9ac7b1fb1"} Nov 25 07:43:55 crc kubenswrapper[5043]: I1125 07:43:55.402054 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm" podStartSLOduration=1.8921817459999999 podStartE2EDuration="2.402024913s" podCreationTimestamp="2025-11-25 07:43:53 +0000 UTC" firstStartedPulling="2025-11-25 07:43:54.340300321 +0000 UTC m=+1698.508496042" lastFinishedPulling="2025-11-25 07:43:54.850143478 +0000 
UTC m=+1699.018339209" observedRunningTime="2025-11-25 07:43:55.384538272 +0000 UTC m=+1699.552734073" watchObservedRunningTime="2025-11-25 07:43:55.402024913 +0000 UTC m=+1699.570220674" Nov 25 07:44:00 crc kubenswrapper[5043]: I1125 07:44:00.414596 5043 generic.go:334] "Generic (PLEG): container finished" podID="4edb7473-0040-4944-aff9-fb0a7588d84f" containerID="f78f09763dfa815ed6cc59ab6882de2bf91e0f2df8f2b50b03d2c4a9ac7b1fb1" exitCode=0 Nov 25 07:44:00 crc kubenswrapper[5043]: I1125 07:44:00.414770 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm" event={"ID":"4edb7473-0040-4944-aff9-fb0a7588d84f","Type":"ContainerDied","Data":"f78f09763dfa815ed6cc59ab6882de2bf91e0f2df8f2b50b03d2c4a9ac7b1fb1"} Nov 25 07:44:00 crc kubenswrapper[5043]: I1125 07:44:00.963399 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474" Nov 25 07:44:00 crc kubenswrapper[5043]: E1125 07:44:00.964032 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:44:01 crc kubenswrapper[5043]: I1125 07:44:01.043975 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-zvlrd"] Nov 25 07:44:01 crc kubenswrapper[5043]: I1125 07:44:01.053282 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-2zt29"] Nov 25 07:44:01 crc kubenswrapper[5043]: I1125 07:44:01.061993 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-zvlrd"] Nov 25 07:44:01 crc kubenswrapper[5043]: I1125 07:44:01.068277 
5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-2zt29"] Nov 25 07:44:01 crc kubenswrapper[5043]: I1125 07:44:01.825212 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm" Nov 25 07:44:01 crc kubenswrapper[5043]: I1125 07:44:01.933533 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4edb7473-0040-4944-aff9-fb0a7588d84f-ssh-key\") pod \"4edb7473-0040-4944-aff9-fb0a7588d84f\" (UID: \"4edb7473-0040-4944-aff9-fb0a7588d84f\") " Nov 25 07:44:01 crc kubenswrapper[5043]: I1125 07:44:01.933661 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phxsb\" (UniqueName: \"kubernetes.io/projected/4edb7473-0040-4944-aff9-fb0a7588d84f-kube-api-access-phxsb\") pod \"4edb7473-0040-4944-aff9-fb0a7588d84f\" (UID: \"4edb7473-0040-4944-aff9-fb0a7588d84f\") " Nov 25 07:44:01 crc kubenswrapper[5043]: I1125 07:44:01.933867 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4edb7473-0040-4944-aff9-fb0a7588d84f-inventory\") pod \"4edb7473-0040-4944-aff9-fb0a7588d84f\" (UID: \"4edb7473-0040-4944-aff9-fb0a7588d84f\") " Nov 25 07:44:01 crc kubenswrapper[5043]: I1125 07:44:01.939860 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4edb7473-0040-4944-aff9-fb0a7588d84f-kube-api-access-phxsb" (OuterVolumeSpecName: "kube-api-access-phxsb") pod "4edb7473-0040-4944-aff9-fb0a7588d84f" (UID: "4edb7473-0040-4944-aff9-fb0a7588d84f"). InnerVolumeSpecName "kube-api-access-phxsb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:44:01 crc kubenswrapper[5043]: I1125 07:44:01.968770 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4edb7473-0040-4944-aff9-fb0a7588d84f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4edb7473-0040-4944-aff9-fb0a7588d84f" (UID: "4edb7473-0040-4944-aff9-fb0a7588d84f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:44:01 crc kubenswrapper[5043]: I1125 07:44:01.983998 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4edb7473-0040-4944-aff9-fb0a7588d84f-inventory" (OuterVolumeSpecName: "inventory") pod "4edb7473-0040-4944-aff9-fb0a7588d84f" (UID: "4edb7473-0040-4944-aff9-fb0a7588d84f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.036410 5043 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4edb7473-0040-4944-aff9-fb0a7588d84f-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.036444 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4edb7473-0040-4944-aff9-fb0a7588d84f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.036456 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phxsb\" (UniqueName: \"kubernetes.io/projected/4edb7473-0040-4944-aff9-fb0a7588d84f-kube-api-access-phxsb\") on node \"crc\" DevicePath \"\"" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.061986 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-pcdfx"] Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.072103 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-pcdfx"] 
Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.453203 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm" event={"ID":"4edb7473-0040-4944-aff9-fb0a7588d84f","Type":"ContainerDied","Data":"3f0d25db2f459f379d3c5c86b8e7e4dfe5b91363c87448295b9463851341f5bf"} Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.453249 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f0d25db2f459f379d3c5c86b8e7e4dfe5b91363c87448295b9463851341f5bf" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.453262 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.507597 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-f95x7"] Nov 25 07:44:02 crc kubenswrapper[5043]: E1125 07:44:02.508046 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4edb7473-0040-4944-aff9-fb0a7588d84f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.508069 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4edb7473-0040-4944-aff9-fb0a7588d84f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.508280 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="4edb7473-0040-4944-aff9-fb0a7588d84f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.509000 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f95x7" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.513406 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.514095 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.514212 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.515788 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.522414 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-f95x7"] Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.647089 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqn7m\" (UniqueName: \"kubernetes.io/projected/87449fe9-18ed-4139-aab1-d0c7c0afa5b0-kube-api-access-gqn7m\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f95x7\" (UID: \"87449fe9-18ed-4139-aab1-d0c7c0afa5b0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f95x7" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.647142 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87449fe9-18ed-4139-aab1-d0c7c0afa5b0-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f95x7\" (UID: \"87449fe9-18ed-4139-aab1-d0c7c0afa5b0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f95x7" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.647197 5043 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87449fe9-18ed-4139-aab1-d0c7c0afa5b0-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f95x7\" (UID: \"87449fe9-18ed-4139-aab1-d0c7c0afa5b0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f95x7" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.768291 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqn7m\" (UniqueName: \"kubernetes.io/projected/87449fe9-18ed-4139-aab1-d0c7c0afa5b0-kube-api-access-gqn7m\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f95x7\" (UID: \"87449fe9-18ed-4139-aab1-d0c7c0afa5b0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f95x7" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.768385 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87449fe9-18ed-4139-aab1-d0c7c0afa5b0-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f95x7\" (UID: \"87449fe9-18ed-4139-aab1-d0c7c0afa5b0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f95x7" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.768461 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87449fe9-18ed-4139-aab1-d0c7c0afa5b0-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f95x7\" (UID: \"87449fe9-18ed-4139-aab1-d0c7c0afa5b0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f95x7" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.779567 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87449fe9-18ed-4139-aab1-d0c7c0afa5b0-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f95x7\" (UID: 
\"87449fe9-18ed-4139-aab1-d0c7c0afa5b0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f95x7" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.783796 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87449fe9-18ed-4139-aab1-d0c7c0afa5b0-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f95x7\" (UID: \"87449fe9-18ed-4139-aab1-d0c7c0afa5b0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f95x7" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.786399 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqn7m\" (UniqueName: \"kubernetes.io/projected/87449fe9-18ed-4139-aab1-d0c7c0afa5b0-kube-api-access-gqn7m\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f95x7\" (UID: \"87449fe9-18ed-4139-aab1-d0c7c0afa5b0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f95x7" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.829222 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f95x7" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.975192 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423" path="/var/lib/kubelet/pods/5ecb6236-f0c1-4042-ad6e-4bcd6c5ab423/volumes" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.976122 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66bda068-47b7-46f6-a75e-97dd76293fe9" path="/var/lib/kubelet/pods/66bda068-47b7-46f6-a75e-97dd76293fe9/volumes" Nov 25 07:44:02 crc kubenswrapper[5043]: I1125 07:44:02.976721 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9a33b7c-0771-42ee-b50d-abb6120f7fba" path="/var/lib/kubelet/pods/e9a33b7c-0771-42ee-b50d-abb6120f7fba/volumes" Nov 25 07:44:03 crc kubenswrapper[5043]: I1125 07:44:03.362827 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-f95x7"] Nov 25 07:44:03 crc kubenswrapper[5043]: I1125 07:44:03.463179 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f95x7" event={"ID":"87449fe9-18ed-4139-aab1-d0c7c0afa5b0","Type":"ContainerStarted","Data":"ee173103e7ef6184d62d847eccd53dded1fca17fbfb9019d070c3f396d0f9cfb"} Nov 25 07:44:04 crc kubenswrapper[5043]: I1125 07:44:04.476234 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f95x7" event={"ID":"87449fe9-18ed-4139-aab1-d0c7c0afa5b0","Type":"ContainerStarted","Data":"56ef6b16e39df5346115fc43376eaf2c8010f4b20311e6011cc3571ec68a40e2"} Nov 25 07:44:04 crc kubenswrapper[5043]: I1125 07:44:04.499148 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f95x7" podStartSLOduration=1.7771918530000002 podStartE2EDuration="2.499129717s" 
podCreationTimestamp="2025-11-25 07:44:02 +0000 UTC" firstStartedPulling="2025-11-25 07:44:03.37948884 +0000 UTC m=+1707.547684561" lastFinishedPulling="2025-11-25 07:44:04.101426664 +0000 UTC m=+1708.269622425" observedRunningTime="2025-11-25 07:44:04.497002759 +0000 UTC m=+1708.665198480" watchObservedRunningTime="2025-11-25 07:44:04.499129717 +0000 UTC m=+1708.667325448" Nov 25 07:44:11 crc kubenswrapper[5043]: I1125 07:44:11.058547 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-tnm6z"] Nov 25 07:44:11 crc kubenswrapper[5043]: I1125 07:44:11.070990 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-tnm6z"] Nov 25 07:44:12 crc kubenswrapper[5043]: I1125 07:44:12.981932 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0360da29-fc4a-44ea-9d0e-e446d69037bc" path="/var/lib/kubelet/pods/0360da29-fc4a-44ea-9d0e-e446d69037bc/volumes" Nov 25 07:44:15 crc kubenswrapper[5043]: I1125 07:44:15.963286 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474" Nov 25 07:44:15 crc kubenswrapper[5043]: E1125 07:44:15.964104 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:44:24 crc kubenswrapper[5043]: I1125 07:44:24.038767 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-jzr78"] Nov 25 07:44:24 crc kubenswrapper[5043]: I1125 07:44:24.053178 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-jzr78"] Nov 25 07:44:24 crc kubenswrapper[5043]: I1125 07:44:24.974410 5043 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2de64291-b46f-4ba3-bdec-a3bad5873881" path="/var/lib/kubelet/pods/2de64291-b46f-4ba3-bdec-a3bad5873881/volumes" Nov 25 07:44:27 crc kubenswrapper[5043]: I1125 07:44:27.962105 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474" Nov 25 07:44:27 crc kubenswrapper[5043]: E1125 07:44:27.962694 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:44:42 crc kubenswrapper[5043]: I1125 07:44:42.964227 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474" Nov 25 07:44:42 crc kubenswrapper[5043]: E1125 07:44:42.965651 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:44:48 crc kubenswrapper[5043]: I1125 07:44:48.931407 5043 generic.go:334] "Generic (PLEG): container finished" podID="87449fe9-18ed-4139-aab1-d0c7c0afa5b0" containerID="56ef6b16e39df5346115fc43376eaf2c8010f4b20311e6011cc3571ec68a40e2" exitCode=0 Nov 25 07:44:48 crc kubenswrapper[5043]: I1125 07:44:48.931513 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f95x7" 
event={"ID":"87449fe9-18ed-4139-aab1-d0c7c0afa5b0","Type":"ContainerDied","Data":"56ef6b16e39df5346115fc43376eaf2c8010f4b20311e6011cc3571ec68a40e2"} Nov 25 07:44:50 crc kubenswrapper[5043]: I1125 07:44:50.053126 5043 scope.go:117] "RemoveContainer" containerID="ea4387a89fee869b2d4faf2ae8f63191ac27bf811d1a6ac017be83593f0221d4" Nov 25 07:44:50 crc kubenswrapper[5043]: I1125 07:44:50.100542 5043 scope.go:117] "RemoveContainer" containerID="c3f7028217a5618f8744e16c74ebc4b7c1011405e151cb03269b944e2b5a5dbc" Nov 25 07:44:50 crc kubenswrapper[5043]: I1125 07:44:50.160120 5043 scope.go:117] "RemoveContainer" containerID="75f2592db3c5a8441facc75e0de58769459996f5d7f711900a2a1957128adaa9" Nov 25 07:44:50 crc kubenswrapper[5043]: I1125 07:44:50.214483 5043 scope.go:117] "RemoveContainer" containerID="c42c8bed9b1a3aed9759f3c09cca6adbf2b372034087590476c5f0f374ae8722" Nov 25 07:44:50 crc kubenswrapper[5043]: I1125 07:44:50.262841 5043 scope.go:117] "RemoveContainer" containerID="49a8893f812fa5273ec4b7368978d5da18eacfb331f5a18927d9d709a7ebc952" Nov 25 07:44:50 crc kubenswrapper[5043]: I1125 07:44:50.338426 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f95x7" Nov 25 07:44:50 crc kubenswrapper[5043]: I1125 07:44:50.497146 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87449fe9-18ed-4139-aab1-d0c7c0afa5b0-ssh-key\") pod \"87449fe9-18ed-4139-aab1-d0c7c0afa5b0\" (UID: \"87449fe9-18ed-4139-aab1-d0c7c0afa5b0\") " Nov 25 07:44:50 crc kubenswrapper[5043]: I1125 07:44:50.497299 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87449fe9-18ed-4139-aab1-d0c7c0afa5b0-inventory\") pod \"87449fe9-18ed-4139-aab1-d0c7c0afa5b0\" (UID: \"87449fe9-18ed-4139-aab1-d0c7c0afa5b0\") " Nov 25 07:44:50 crc kubenswrapper[5043]: I1125 07:44:50.497360 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqn7m\" (UniqueName: \"kubernetes.io/projected/87449fe9-18ed-4139-aab1-d0c7c0afa5b0-kube-api-access-gqn7m\") pod \"87449fe9-18ed-4139-aab1-d0c7c0afa5b0\" (UID: \"87449fe9-18ed-4139-aab1-d0c7c0afa5b0\") " Nov 25 07:44:50 crc kubenswrapper[5043]: I1125 07:44:50.503568 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87449fe9-18ed-4139-aab1-d0c7c0afa5b0-kube-api-access-gqn7m" (OuterVolumeSpecName: "kube-api-access-gqn7m") pod "87449fe9-18ed-4139-aab1-d0c7c0afa5b0" (UID: "87449fe9-18ed-4139-aab1-d0c7c0afa5b0"). InnerVolumeSpecName "kube-api-access-gqn7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:44:50 crc kubenswrapper[5043]: I1125 07:44:50.521916 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87449fe9-18ed-4139-aab1-d0c7c0afa5b0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "87449fe9-18ed-4139-aab1-d0c7c0afa5b0" (UID: "87449fe9-18ed-4139-aab1-d0c7c0afa5b0"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:44:50 crc kubenswrapper[5043]: I1125 07:44:50.540523 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87449fe9-18ed-4139-aab1-d0c7c0afa5b0-inventory" (OuterVolumeSpecName: "inventory") pod "87449fe9-18ed-4139-aab1-d0c7c0afa5b0" (UID: "87449fe9-18ed-4139-aab1-d0c7c0afa5b0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:44:50 crc kubenswrapper[5043]: I1125 07:44:50.599792 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87449fe9-18ed-4139-aab1-d0c7c0afa5b0-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:44:50 crc kubenswrapper[5043]: I1125 07:44:50.599838 5043 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87449fe9-18ed-4139-aab1-d0c7c0afa5b0-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:44:50 crc kubenswrapper[5043]: I1125 07:44:50.599858 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqn7m\" (UniqueName: \"kubernetes.io/projected/87449fe9-18ed-4139-aab1-d0c7c0afa5b0-kube-api-access-gqn7m\") on node \"crc\" DevicePath \"\"" Nov 25 07:44:50 crc kubenswrapper[5043]: I1125 07:44:50.958230 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f95x7" event={"ID":"87449fe9-18ed-4139-aab1-d0c7c0afa5b0","Type":"ContainerDied","Data":"ee173103e7ef6184d62d847eccd53dded1fca17fbfb9019d070c3f396d0f9cfb"} Nov 25 07:44:50 crc kubenswrapper[5043]: I1125 07:44:50.958273 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f95x7" Nov 25 07:44:50 crc kubenswrapper[5043]: I1125 07:44:50.958274 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee173103e7ef6184d62d847eccd53dded1fca17fbfb9019d070c3f396d0f9cfb" Nov 25 07:44:51 crc kubenswrapper[5043]: I1125 07:44:51.061804 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z"] Nov 25 07:44:51 crc kubenswrapper[5043]: E1125 07:44:51.062406 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87449fe9-18ed-4139-aab1-d0c7c0afa5b0" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:44:51 crc kubenswrapper[5043]: I1125 07:44:51.062514 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="87449fe9-18ed-4139-aab1-d0c7c0afa5b0" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:44:51 crc kubenswrapper[5043]: I1125 07:44:51.063328 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="87449fe9-18ed-4139-aab1-d0c7c0afa5b0" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:44:51 crc kubenswrapper[5043]: I1125 07:44:51.064356 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z" Nov 25 07:44:51 crc kubenswrapper[5043]: I1125 07:44:51.067422 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2" Nov 25 07:44:51 crc kubenswrapper[5043]: I1125 07:44:51.067946 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:44:51 crc kubenswrapper[5043]: I1125 07:44:51.067999 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:44:51 crc kubenswrapper[5043]: I1125 07:44:51.069660 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:44:51 crc kubenswrapper[5043]: I1125 07:44:51.084775 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z"] Nov 25 07:44:51 crc kubenswrapper[5043]: I1125 07:44:51.211437 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a36c998-6b38-47b9-954d-3fd54b9bbecb-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z\" (UID: \"4a36c998-6b38-47b9-954d-3fd54b9bbecb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z" Nov 25 07:44:51 crc kubenswrapper[5043]: I1125 07:44:51.211554 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a36c998-6b38-47b9-954d-3fd54b9bbecb-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z\" (UID: \"4a36c998-6b38-47b9-954d-3fd54b9bbecb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z" Nov 25 07:44:51 crc kubenswrapper[5043]: I1125 07:44:51.212164 5043 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84l6s\" (UniqueName: \"kubernetes.io/projected/4a36c998-6b38-47b9-954d-3fd54b9bbecb-kube-api-access-84l6s\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z\" (UID: \"4a36c998-6b38-47b9-954d-3fd54b9bbecb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z" Nov 25 07:44:51 crc kubenswrapper[5043]: I1125 07:44:51.314598 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84l6s\" (UniqueName: \"kubernetes.io/projected/4a36c998-6b38-47b9-954d-3fd54b9bbecb-kube-api-access-84l6s\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z\" (UID: \"4a36c998-6b38-47b9-954d-3fd54b9bbecb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z" Nov 25 07:44:51 crc kubenswrapper[5043]: I1125 07:44:51.314731 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a36c998-6b38-47b9-954d-3fd54b9bbecb-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z\" (UID: \"4a36c998-6b38-47b9-954d-3fd54b9bbecb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z" Nov 25 07:44:51 crc kubenswrapper[5043]: I1125 07:44:51.314780 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a36c998-6b38-47b9-954d-3fd54b9bbecb-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z\" (UID: \"4a36c998-6b38-47b9-954d-3fd54b9bbecb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z" Nov 25 07:44:51 crc kubenswrapper[5043]: I1125 07:44:51.319783 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a36c998-6b38-47b9-954d-3fd54b9bbecb-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z\" (UID: 
\"4a36c998-6b38-47b9-954d-3fd54b9bbecb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z" Nov 25 07:44:51 crc kubenswrapper[5043]: I1125 07:44:51.321269 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a36c998-6b38-47b9-954d-3fd54b9bbecb-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z\" (UID: \"4a36c998-6b38-47b9-954d-3fd54b9bbecb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z" Nov 25 07:44:51 crc kubenswrapper[5043]: I1125 07:44:51.343585 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84l6s\" (UniqueName: \"kubernetes.io/projected/4a36c998-6b38-47b9-954d-3fd54b9bbecb-kube-api-access-84l6s\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z\" (UID: \"4a36c998-6b38-47b9-954d-3fd54b9bbecb\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z" Nov 25 07:44:51 crc kubenswrapper[5043]: I1125 07:44:51.444651 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z" Nov 25 07:44:52 crc kubenswrapper[5043]: I1125 07:44:52.063755 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-df70-account-create-l928f"] Nov 25 07:44:52 crc kubenswrapper[5043]: I1125 07:44:52.099400 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9a3a-account-create-s7vqr"] Nov 25 07:44:52 crc kubenswrapper[5043]: I1125 07:44:52.109067 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-dpcmk"] Nov 25 07:44:52 crc kubenswrapper[5043]: I1125 07:44:52.118662 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-dbef-account-create-mt2f4"] Nov 25 07:44:52 crc kubenswrapper[5043]: W1125 07:44:52.125053 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a36c998_6b38_47b9_954d_3fd54b9bbecb.slice/crio-1b4c62cbae5a5074569fcd13d3a9ce9a708f37998c2ffa7f4ac70319fccd38b9 WatchSource:0}: Error finding container 1b4c62cbae5a5074569fcd13d3a9ce9a708f37998c2ffa7f4ac70319fccd38b9: Status 404 returned error can't find the container with id 1b4c62cbae5a5074569fcd13d3a9ce9a708f37998c2ffa7f4ac70319fccd38b9 Nov 25 07:44:52 crc kubenswrapper[5043]: I1125 07:44:52.125473 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-dpcmk"] Nov 25 07:44:52 crc kubenswrapper[5043]: I1125 07:44:52.131323 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-dbef-account-create-mt2f4"] Nov 25 07:44:52 crc kubenswrapper[5043]: I1125 07:44:52.137102 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-df70-account-create-l928f"] Nov 25 07:44:52 crc kubenswrapper[5043]: I1125 07:44:52.143341 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-9a3a-account-create-s7vqr"] Nov 25 
07:44:52 crc kubenswrapper[5043]: I1125 07:44:52.150435 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z"] Nov 25 07:44:52 crc kubenswrapper[5043]: I1125 07:44:52.975755 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cbed283-5e56-4d9b-b749-ab8b4808834e" path="/var/lib/kubelet/pods/4cbed283-5e56-4d9b-b749-ab8b4808834e/volumes" Nov 25 07:44:52 crc kubenswrapper[5043]: I1125 07:44:52.977456 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56abad80-cda8-4d71-a70f-4761414adb87" path="/var/lib/kubelet/pods/56abad80-cda8-4d71-a70f-4761414adb87/volumes" Nov 25 07:44:52 crc kubenswrapper[5043]: I1125 07:44:52.978619 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93c8ab01-08e3-4b78-a215-c1382e53c98f" path="/var/lib/kubelet/pods/93c8ab01-08e3-4b78-a215-c1382e53c98f/volumes" Nov 25 07:44:52 crc kubenswrapper[5043]: I1125 07:44:52.979413 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b82277a9-e2eb-45db-a8ac-ae5cd7f162d3" path="/var/lib/kubelet/pods/b82277a9-e2eb-45db-a8ac-ae5cd7f162d3/volumes" Nov 25 07:44:52 crc kubenswrapper[5043]: I1125 07:44:52.984787 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z" event={"ID":"4a36c998-6b38-47b9-954d-3fd54b9bbecb","Type":"ContainerStarted","Data":"25d7d8e91374fd22998207cb0665713bee8cccbc6a575fe7d0cda99aebe0a169"} Nov 25 07:44:52 crc kubenswrapper[5043]: I1125 07:44:52.985032 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z" event={"ID":"4a36c998-6b38-47b9-954d-3fd54b9bbecb","Type":"ContainerStarted","Data":"1b4c62cbae5a5074569fcd13d3a9ce9a708f37998c2ffa7f4ac70319fccd38b9"} Nov 25 07:44:53 crc kubenswrapper[5043]: I1125 07:44:53.006875 5043 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z" podStartSLOduration=1.465838569 podStartE2EDuration="2.006854307s" podCreationTimestamp="2025-11-25 07:44:51 +0000 UTC" firstStartedPulling="2025-11-25 07:44:52.127126419 +0000 UTC m=+1756.295322140" lastFinishedPulling="2025-11-25 07:44:52.668142157 +0000 UTC m=+1756.836337878" observedRunningTime="2025-11-25 07:44:53.00584721 +0000 UTC m=+1757.174042971" watchObservedRunningTime="2025-11-25 07:44:53.006854307 +0000 UTC m=+1757.175050028" Nov 25 07:44:53 crc kubenswrapper[5043]: I1125 07:44:53.031458 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-ngdf8"] Nov 25 07:44:53 crc kubenswrapper[5043]: I1125 07:44:53.042789 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-pf25q"] Nov 25 07:44:53 crc kubenswrapper[5043]: I1125 07:44:53.049513 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-ngdf8"] Nov 25 07:44:53 crc kubenswrapper[5043]: I1125 07:44:53.056892 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-pf25q"] Nov 25 07:44:53 crc kubenswrapper[5043]: I1125 07:44:53.962558 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474" Nov 25 07:44:53 crc kubenswrapper[5043]: E1125 07:44:53.963206 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:44:54 crc kubenswrapper[5043]: I1125 07:44:54.973589 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3801a63c-ffa6-49c8-8bf9-2bafdad18466" path="/var/lib/kubelet/pods/3801a63c-ffa6-49c8-8bf9-2bafdad18466/volumes" Nov 25 07:44:54 crc kubenswrapper[5043]: I1125 07:44:54.974694 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9c4413b-cd74-4ad4-978b-089edd47d7b3" path="/var/lib/kubelet/pods/e9c4413b-cd74-4ad4-978b-089edd47d7b3/volumes" Nov 25 07:44:58 crc kubenswrapper[5043]: I1125 07:44:58.034291 5043 generic.go:334] "Generic (PLEG): container finished" podID="4a36c998-6b38-47b9-954d-3fd54b9bbecb" containerID="25d7d8e91374fd22998207cb0665713bee8cccbc6a575fe7d0cda99aebe0a169" exitCode=0 Nov 25 07:44:58 crc kubenswrapper[5043]: I1125 07:44:58.034368 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z" event={"ID":"4a36c998-6b38-47b9-954d-3fd54b9bbecb","Type":"ContainerDied","Data":"25d7d8e91374fd22998207cb0665713bee8cccbc6a575fe7d0cda99aebe0a169"} Nov 25 07:44:59 crc kubenswrapper[5043]: I1125 07:44:59.503471 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z" Nov 25 07:44:59 crc kubenswrapper[5043]: I1125 07:44:59.602733 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a36c998-6b38-47b9-954d-3fd54b9bbecb-inventory\") pod \"4a36c998-6b38-47b9-954d-3fd54b9bbecb\" (UID: \"4a36c998-6b38-47b9-954d-3fd54b9bbecb\") " Nov 25 07:44:59 crc kubenswrapper[5043]: I1125 07:44:59.602946 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a36c998-6b38-47b9-954d-3fd54b9bbecb-ssh-key\") pod \"4a36c998-6b38-47b9-954d-3fd54b9bbecb\" (UID: \"4a36c998-6b38-47b9-954d-3fd54b9bbecb\") " Nov 25 07:44:59 crc kubenswrapper[5043]: I1125 07:44:59.603012 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84l6s\" (UniqueName: \"kubernetes.io/projected/4a36c998-6b38-47b9-954d-3fd54b9bbecb-kube-api-access-84l6s\") pod \"4a36c998-6b38-47b9-954d-3fd54b9bbecb\" (UID: \"4a36c998-6b38-47b9-954d-3fd54b9bbecb\") " Nov 25 07:44:59 crc kubenswrapper[5043]: I1125 07:44:59.625795 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a36c998-6b38-47b9-954d-3fd54b9bbecb-kube-api-access-84l6s" (OuterVolumeSpecName: "kube-api-access-84l6s") pod "4a36c998-6b38-47b9-954d-3fd54b9bbecb" (UID: "4a36c998-6b38-47b9-954d-3fd54b9bbecb"). InnerVolumeSpecName "kube-api-access-84l6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:44:59 crc kubenswrapper[5043]: I1125 07:44:59.632688 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a36c998-6b38-47b9-954d-3fd54b9bbecb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4a36c998-6b38-47b9-954d-3fd54b9bbecb" (UID: "4a36c998-6b38-47b9-954d-3fd54b9bbecb"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:44:59 crc kubenswrapper[5043]: I1125 07:44:59.647709 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a36c998-6b38-47b9-954d-3fd54b9bbecb-inventory" (OuterVolumeSpecName: "inventory") pod "4a36c998-6b38-47b9-954d-3fd54b9bbecb" (UID: "4a36c998-6b38-47b9-954d-3fd54b9bbecb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:44:59 crc kubenswrapper[5043]: I1125 07:44:59.705015 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a36c998-6b38-47b9-954d-3fd54b9bbecb-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:44:59 crc kubenswrapper[5043]: I1125 07:44:59.705049 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84l6s\" (UniqueName: \"kubernetes.io/projected/4a36c998-6b38-47b9-954d-3fd54b9bbecb-kube-api-access-84l6s\") on node \"crc\" DevicePath \"\"" Nov 25 07:44:59 crc kubenswrapper[5043]: I1125 07:44:59.705062 5043 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a36c998-6b38-47b9-954d-3fd54b9bbecb-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.053734 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z" event={"ID":"4a36c998-6b38-47b9-954d-3fd54b9bbecb","Type":"ContainerDied","Data":"1b4c62cbae5a5074569fcd13d3a9ce9a708f37998c2ffa7f4ac70319fccd38b9"} Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.053767 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b4c62cbae5a5074569fcd13d3a9ce9a708f37998c2ffa7f4ac70319fccd38b9" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.054063 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.139660 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400945-vj2rk"] Nov 25 07:45:00 crc kubenswrapper[5043]: E1125 07:45:00.140121 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a36c998-6b38-47b9-954d-3fd54b9bbecb" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.140148 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a36c998-6b38-47b9-954d-3fd54b9bbecb" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.140414 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a36c998-6b38-47b9-954d-3fd54b9bbecb" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.141184 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-vj2rk" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.143917 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.144134 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.153200 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6zd99"] Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.154899 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6zd99" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.157354 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.157815 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.159918 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.165237 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.174746 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6zd99"] Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.188492 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400945-vj2rk"] Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.220003 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06731370-602e-4cbf-acaa-ea2b0f758443-secret-volume\") pod \"collect-profiles-29400945-vj2rk\" (UID: \"06731370-602e-4cbf-acaa-ea2b0f758443\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-vj2rk" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.220084 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06731370-602e-4cbf-acaa-ea2b0f758443-config-volume\") pod \"collect-profiles-29400945-vj2rk\" (UID: \"06731370-602e-4cbf-acaa-ea2b0f758443\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-vj2rk" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.220124 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6g2l\" (UniqueName: \"kubernetes.io/projected/06731370-602e-4cbf-acaa-ea2b0f758443-kube-api-access-j6g2l\") pod \"collect-profiles-29400945-vj2rk\" (UID: \"06731370-602e-4cbf-acaa-ea2b0f758443\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-vj2rk" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.322586 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4e7968b-0535-4ef0-990e-55872b901287-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6zd99\" (UID: \"d4e7968b-0535-4ef0-990e-55872b901287\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6zd99" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.322683 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lptvw\" (UniqueName: \"kubernetes.io/projected/d4e7968b-0535-4ef0-990e-55872b901287-kube-api-access-lptvw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6zd99\" (UID: \"d4e7968b-0535-4ef0-990e-55872b901287\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6zd99" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.322807 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06731370-602e-4cbf-acaa-ea2b0f758443-secret-volume\") pod \"collect-profiles-29400945-vj2rk\" (UID: \"06731370-602e-4cbf-acaa-ea2b0f758443\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-vj2rk" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.323038 5043 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06731370-602e-4cbf-acaa-ea2b0f758443-config-volume\") pod \"collect-profiles-29400945-vj2rk\" (UID: \"06731370-602e-4cbf-acaa-ea2b0f758443\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-vj2rk" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.323129 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4e7968b-0535-4ef0-990e-55872b901287-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6zd99\" (UID: \"d4e7968b-0535-4ef0-990e-55872b901287\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6zd99" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.323159 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6g2l\" (UniqueName: \"kubernetes.io/projected/06731370-602e-4cbf-acaa-ea2b0f758443-kube-api-access-j6g2l\") pod \"collect-profiles-29400945-vj2rk\" (UID: \"06731370-602e-4cbf-acaa-ea2b0f758443\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-vj2rk" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.324632 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06731370-602e-4cbf-acaa-ea2b0f758443-config-volume\") pod \"collect-profiles-29400945-vj2rk\" (UID: \"06731370-602e-4cbf-acaa-ea2b0f758443\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-vj2rk" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.328500 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06731370-602e-4cbf-acaa-ea2b0f758443-secret-volume\") pod \"collect-profiles-29400945-vj2rk\" (UID: \"06731370-602e-4cbf-acaa-ea2b0f758443\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-vj2rk" Nov 25 
07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.338087 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6g2l\" (UniqueName: \"kubernetes.io/projected/06731370-602e-4cbf-acaa-ea2b0f758443-kube-api-access-j6g2l\") pod \"collect-profiles-29400945-vj2rk\" (UID: \"06731370-602e-4cbf-acaa-ea2b0f758443\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-vj2rk" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.425352 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4e7968b-0535-4ef0-990e-55872b901287-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6zd99\" (UID: \"d4e7968b-0535-4ef0-990e-55872b901287\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6zd99" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.425391 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lptvw\" (UniqueName: \"kubernetes.io/projected/d4e7968b-0535-4ef0-990e-55872b901287-kube-api-access-lptvw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6zd99\" (UID: \"d4e7968b-0535-4ef0-990e-55872b901287\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6zd99" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.425515 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4e7968b-0535-4ef0-990e-55872b901287-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6zd99\" (UID: \"d4e7968b-0535-4ef0-990e-55872b901287\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6zd99" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.429309 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4e7968b-0535-4ef0-990e-55872b901287-inventory\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-6zd99\" (UID: \"d4e7968b-0535-4ef0-990e-55872b901287\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6zd99" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.432719 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4e7968b-0535-4ef0-990e-55872b901287-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6zd99\" (UID: \"d4e7968b-0535-4ef0-990e-55872b901287\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6zd99" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.445791 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lptvw\" (UniqueName: \"kubernetes.io/projected/d4e7968b-0535-4ef0-990e-55872b901287-kube-api-access-lptvw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6zd99\" (UID: \"d4e7968b-0535-4ef0-990e-55872b901287\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6zd99" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.457637 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-vj2rk" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.469720 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6zd99" Nov 25 07:45:00 crc kubenswrapper[5043]: I1125 07:45:00.916973 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400945-vj2rk"] Nov 25 07:45:01 crc kubenswrapper[5043]: I1125 07:45:01.056149 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6zd99"] Nov 25 07:45:01 crc kubenswrapper[5043]: I1125 07:45:01.061418 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-vj2rk" event={"ID":"06731370-602e-4cbf-acaa-ea2b0f758443","Type":"ContainerStarted","Data":"d6cd9cbed53ee08811c135f5920afe6fa2faa385f0b8a4dc9016e7ffd24781f0"} Nov 25 07:45:01 crc kubenswrapper[5043]: W1125 07:45:01.312987 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4e7968b_0535_4ef0_990e_55872b901287.slice/crio-9a471280486c0e48437e24a569463d08f7e30ea69b58061eb4a49e5797d7a691 WatchSource:0}: Error finding container 9a471280486c0e48437e24a569463d08f7e30ea69b58061eb4a49e5797d7a691: Status 404 returned error can't find the container with id 9a471280486c0e48437e24a569463d08f7e30ea69b58061eb4a49e5797d7a691 Nov 25 07:45:02 crc kubenswrapper[5043]: I1125 07:45:02.077963 5043 generic.go:334] "Generic (PLEG): container finished" podID="06731370-602e-4cbf-acaa-ea2b0f758443" containerID="ffc30b9cde6292de7dda31d8ad43cf4345dbb445b433617c12fb76c451f28a56" exitCode=0 Nov 25 07:45:02 crc kubenswrapper[5043]: I1125 07:45:02.078176 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-vj2rk" event={"ID":"06731370-602e-4cbf-acaa-ea2b0f758443","Type":"ContainerDied","Data":"ffc30b9cde6292de7dda31d8ad43cf4345dbb445b433617c12fb76c451f28a56"} Nov 25 07:45:02 crc 
kubenswrapper[5043]: I1125 07:45:02.081141 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6zd99" event={"ID":"d4e7968b-0535-4ef0-990e-55872b901287","Type":"ContainerStarted","Data":"9a471280486c0e48437e24a569463d08f7e30ea69b58061eb4a49e5797d7a691"} Nov 25 07:45:03 crc kubenswrapper[5043]: I1125 07:45:03.405591 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-vj2rk" Nov 25 07:45:03 crc kubenswrapper[5043]: I1125 07:45:03.587563 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6g2l\" (UniqueName: \"kubernetes.io/projected/06731370-602e-4cbf-acaa-ea2b0f758443-kube-api-access-j6g2l\") pod \"06731370-602e-4cbf-acaa-ea2b0f758443\" (UID: \"06731370-602e-4cbf-acaa-ea2b0f758443\") " Nov 25 07:45:03 crc kubenswrapper[5043]: I1125 07:45:03.587685 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06731370-602e-4cbf-acaa-ea2b0f758443-secret-volume\") pod \"06731370-602e-4cbf-acaa-ea2b0f758443\" (UID: \"06731370-602e-4cbf-acaa-ea2b0f758443\") " Nov 25 07:45:03 crc kubenswrapper[5043]: I1125 07:45:03.588082 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06731370-602e-4cbf-acaa-ea2b0f758443-config-volume\") pod \"06731370-602e-4cbf-acaa-ea2b0f758443\" (UID: \"06731370-602e-4cbf-acaa-ea2b0f758443\") " Nov 25 07:45:03 crc kubenswrapper[5043]: I1125 07:45:03.588910 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06731370-602e-4cbf-acaa-ea2b0f758443-config-volume" (OuterVolumeSpecName: "config-volume") pod "06731370-602e-4cbf-acaa-ea2b0f758443" (UID: "06731370-602e-4cbf-acaa-ea2b0f758443"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 07:45:03 crc kubenswrapper[5043]: I1125 07:45:03.593869 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06731370-602e-4cbf-acaa-ea2b0f758443-kube-api-access-j6g2l" (OuterVolumeSpecName: "kube-api-access-j6g2l") pod "06731370-602e-4cbf-acaa-ea2b0f758443" (UID: "06731370-602e-4cbf-acaa-ea2b0f758443"). InnerVolumeSpecName "kube-api-access-j6g2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:45:03 crc kubenswrapper[5043]: I1125 07:45:03.595351 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06731370-602e-4cbf-acaa-ea2b0f758443-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "06731370-602e-4cbf-acaa-ea2b0f758443" (UID: "06731370-602e-4cbf-acaa-ea2b0f758443"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:45:03 crc kubenswrapper[5043]: I1125 07:45:03.691288 5043 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06731370-602e-4cbf-acaa-ea2b0f758443-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 07:45:03 crc kubenswrapper[5043]: I1125 07:45:03.691333 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6g2l\" (UniqueName: \"kubernetes.io/projected/06731370-602e-4cbf-acaa-ea2b0f758443-kube-api-access-j6g2l\") on node \"crc\" DevicePath \"\"" Nov 25 07:45:03 crc kubenswrapper[5043]: I1125 07:45:03.691347 5043 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06731370-602e-4cbf-acaa-ea2b0f758443-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 07:45:04 crc kubenswrapper[5043]: I1125 07:45:04.122038 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6zd99" 
event={"ID":"d4e7968b-0535-4ef0-990e-55872b901287","Type":"ContainerStarted","Data":"bc4918983ee0267ed600f573a9ea3430b7069c86cef3cc741a6ae0ebd8ea0334"} Nov 25 07:45:04 crc kubenswrapper[5043]: I1125 07:45:04.125013 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-vj2rk" event={"ID":"06731370-602e-4cbf-acaa-ea2b0f758443","Type":"ContainerDied","Data":"d6cd9cbed53ee08811c135f5920afe6fa2faa385f0b8a4dc9016e7ffd24781f0"} Nov 25 07:45:04 crc kubenswrapper[5043]: I1125 07:45:04.125059 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6cd9cbed53ee08811c135f5920afe6fa2faa385f0b8a4dc9016e7ffd24781f0" Nov 25 07:45:04 crc kubenswrapper[5043]: I1125 07:45:04.125137 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400945-vj2rk" Nov 25 07:45:04 crc kubenswrapper[5043]: I1125 07:45:04.149547 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6zd99" podStartSLOduration=1.821041985 podStartE2EDuration="4.149529631s" podCreationTimestamp="2025-11-25 07:45:00 +0000 UTC" firstStartedPulling="2025-11-25 07:45:01.315701392 +0000 UTC m=+1765.483897153" lastFinishedPulling="2025-11-25 07:45:03.644189048 +0000 UTC m=+1767.812384799" observedRunningTime="2025-11-25 07:45:04.145022422 +0000 UTC m=+1768.313218143" watchObservedRunningTime="2025-11-25 07:45:04.149529631 +0000 UTC m=+1768.317725362" Nov 25 07:45:08 crc kubenswrapper[5043]: I1125 07:45:08.962847 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474" Nov 25 07:45:08 crc kubenswrapper[5043]: E1125 07:45:08.963462 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:45:22 crc kubenswrapper[5043]: I1125 07:45:22.967722 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474" Nov 25 07:45:22 crc kubenswrapper[5043]: E1125 07:45:22.968737 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:45:23 crc kubenswrapper[5043]: I1125 07:45:23.039483 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2sd2w"] Nov 25 07:45:23 crc kubenswrapper[5043]: I1125 07:45:23.047496 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2sd2w"] Nov 25 07:45:24 crc kubenswrapper[5043]: I1125 07:45:24.981259 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c561c664-fb47-4c58-971f-b32fe1256a9f" path="/var/lib/kubelet/pods/c561c664-fb47-4c58-971f-b32fe1256a9f/volumes" Nov 25 07:45:33 crc kubenswrapper[5043]: I1125 07:45:33.963724 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474" Nov 25 07:45:33 crc kubenswrapper[5043]: E1125 07:45:33.964751 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:45:41 crc kubenswrapper[5043]: I1125 07:45:41.097670 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-45pwf"] Nov 25 07:45:41 crc kubenswrapper[5043]: I1125 07:45:41.113893 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-45pwf"] Nov 25 07:45:42 crc kubenswrapper[5043]: I1125 07:45:42.980963 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b74b2ac-677f-4271-9aea-ffc23321eb55" path="/var/lib/kubelet/pods/2b74b2ac-677f-4271-9aea-ffc23321eb55/volumes" Nov 25 07:45:43 crc kubenswrapper[5043]: I1125 07:45:43.035306 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c9vc6"] Nov 25 07:45:43 crc kubenswrapper[5043]: I1125 07:45:43.044737 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-c9vc6"] Nov 25 07:45:44 crc kubenswrapper[5043]: I1125 07:45:44.973316 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e" path="/var/lib/kubelet/pods/e9709c6c-0872-4b0d-94a0-adb5a1b1cf9e/volumes" Nov 25 07:45:48 crc kubenswrapper[5043]: I1125 07:45:48.963356 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474" Nov 25 07:45:48 crc kubenswrapper[5043]: E1125 07:45:48.964387 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:45:50 crc kubenswrapper[5043]: I1125 07:45:50.412315 5043 scope.go:117] "RemoveContainer" containerID="c8dd027dacbede12f0f851b3f658feeb93cff2a13aa7821f021c4cc7efe77275" Nov 25 07:45:50 crc kubenswrapper[5043]: I1125 07:45:50.451763 5043 scope.go:117] "RemoveContainer" containerID="f3667d462c3b265f14592ee818bc569926a74ca2090000adad64c17fcd589ae4" Nov 25 07:45:50 crc kubenswrapper[5043]: I1125 07:45:50.516438 5043 scope.go:117] "RemoveContainer" containerID="d376bfc7f206cb3842ae665d04879f95d43a848cf9eebe10d45ad2d3caf86429" Nov 25 07:45:50 crc kubenswrapper[5043]: I1125 07:45:50.563477 5043 scope.go:117] "RemoveContainer" containerID="a69c4df5031c9410fb0833af1d4c23d5669a596b406c0a36fbbfeb145d1f03d6" Nov 25 07:45:50 crc kubenswrapper[5043]: I1125 07:45:50.603478 5043 scope.go:117] "RemoveContainer" containerID="f8066527fc9b56bbb69970e0491953a70eb49cfde7cdfffb8d3de0ff43b5e940" Nov 25 07:45:50 crc kubenswrapper[5043]: I1125 07:45:50.663968 5043 scope.go:117] "RemoveContainer" containerID="05c38c091b0b299533db2b2b9e2e604db165bfb1228d71e18ec5adc8b5bf51d0" Nov 25 07:45:50 crc kubenswrapper[5043]: I1125 07:45:50.685558 5043 scope.go:117] "RemoveContainer" containerID="d06a2a8462a68b14d8d667e4af9abc7103e2ae3e008a1461e1588e60dfd635cf" Nov 25 07:45:50 crc kubenswrapper[5043]: I1125 07:45:50.726555 5043 scope.go:117] "RemoveContainer" containerID="68bb4309c19c6454334f889c2400fd1b750050ea652ed311178005778246272d" Nov 25 07:45:50 crc kubenswrapper[5043]: I1125 07:45:50.777364 5043 scope.go:117] "RemoveContainer" containerID="78ce028904d91977d0b862a2d14a1e594f3cc2c5c321f535c8aa761cfa9a50b5" Nov 25 07:45:58 crc kubenswrapper[5043]: I1125 07:45:58.995760 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ws6vd"] Nov 25 07:45:58 crc kubenswrapper[5043]: E1125 07:45:58.997354 5043 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="06731370-602e-4cbf-acaa-ea2b0f758443" containerName="collect-profiles" Nov 25 07:45:58 crc kubenswrapper[5043]: I1125 07:45:58.997379 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="06731370-602e-4cbf-acaa-ea2b0f758443" containerName="collect-profiles" Nov 25 07:45:58 crc kubenswrapper[5043]: I1125 07:45:58.997811 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="06731370-602e-4cbf-acaa-ea2b0f758443" containerName="collect-profiles" Nov 25 07:45:59 crc kubenswrapper[5043]: I1125 07:45:59.001122 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ws6vd" Nov 25 07:45:59 crc kubenswrapper[5043]: I1125 07:45:59.005395 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ws6vd"] Nov 25 07:45:59 crc kubenswrapper[5043]: I1125 07:45:59.099758 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/392572ea-c2d0-4e58-a206-1755d171539a-catalog-content\") pod \"certified-operators-ws6vd\" (UID: \"392572ea-c2d0-4e58-a206-1755d171539a\") " pod="openshift-marketplace/certified-operators-ws6vd" Nov 25 07:45:59 crc kubenswrapper[5043]: I1125 07:45:59.099874 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrfq2\" (UniqueName: \"kubernetes.io/projected/392572ea-c2d0-4e58-a206-1755d171539a-kube-api-access-zrfq2\") pod \"certified-operators-ws6vd\" (UID: \"392572ea-c2d0-4e58-a206-1755d171539a\") " pod="openshift-marketplace/certified-operators-ws6vd" Nov 25 07:45:59 crc kubenswrapper[5043]: I1125 07:45:59.099941 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/392572ea-c2d0-4e58-a206-1755d171539a-utilities\") pod 
\"certified-operators-ws6vd\" (UID: \"392572ea-c2d0-4e58-a206-1755d171539a\") " pod="openshift-marketplace/certified-operators-ws6vd" Nov 25 07:45:59 crc kubenswrapper[5043]: I1125 07:45:59.201793 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/392572ea-c2d0-4e58-a206-1755d171539a-catalog-content\") pod \"certified-operators-ws6vd\" (UID: \"392572ea-c2d0-4e58-a206-1755d171539a\") " pod="openshift-marketplace/certified-operators-ws6vd" Nov 25 07:45:59 crc kubenswrapper[5043]: I1125 07:45:59.202113 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrfq2\" (UniqueName: \"kubernetes.io/projected/392572ea-c2d0-4e58-a206-1755d171539a-kube-api-access-zrfq2\") pod \"certified-operators-ws6vd\" (UID: \"392572ea-c2d0-4e58-a206-1755d171539a\") " pod="openshift-marketplace/certified-operators-ws6vd" Nov 25 07:45:59 crc kubenswrapper[5043]: I1125 07:45:59.202253 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/392572ea-c2d0-4e58-a206-1755d171539a-utilities\") pod \"certified-operators-ws6vd\" (UID: \"392572ea-c2d0-4e58-a206-1755d171539a\") " pod="openshift-marketplace/certified-operators-ws6vd" Nov 25 07:45:59 crc kubenswrapper[5043]: I1125 07:45:59.202627 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/392572ea-c2d0-4e58-a206-1755d171539a-catalog-content\") pod \"certified-operators-ws6vd\" (UID: \"392572ea-c2d0-4e58-a206-1755d171539a\") " pod="openshift-marketplace/certified-operators-ws6vd" Nov 25 07:45:59 crc kubenswrapper[5043]: I1125 07:45:59.202804 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/392572ea-c2d0-4e58-a206-1755d171539a-utilities\") pod \"certified-operators-ws6vd\" (UID: 
\"392572ea-c2d0-4e58-a206-1755d171539a\") " pod="openshift-marketplace/certified-operators-ws6vd" Nov 25 07:45:59 crc kubenswrapper[5043]: I1125 07:45:59.223712 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrfq2\" (UniqueName: \"kubernetes.io/projected/392572ea-c2d0-4e58-a206-1755d171539a-kube-api-access-zrfq2\") pod \"certified-operators-ws6vd\" (UID: \"392572ea-c2d0-4e58-a206-1755d171539a\") " pod="openshift-marketplace/certified-operators-ws6vd" Nov 25 07:45:59 crc kubenswrapper[5043]: I1125 07:45:59.352646 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ws6vd" Nov 25 07:45:59 crc kubenswrapper[5043]: I1125 07:45:59.824706 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ws6vd"] Nov 25 07:45:59 crc kubenswrapper[5043]: W1125 07:45:59.841556 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod392572ea_c2d0_4e58_a206_1755d171539a.slice/crio-18758d824d3f022e234fcae4fa9758fc8c8605747fd465fcb9c44670347c8281 WatchSource:0}: Error finding container 18758d824d3f022e234fcae4fa9758fc8c8605747fd465fcb9c44670347c8281: Status 404 returned error can't find the container with id 18758d824d3f022e234fcae4fa9758fc8c8605747fd465fcb9c44670347c8281 Nov 25 07:46:00 crc kubenswrapper[5043]: I1125 07:46:00.716100 5043 generic.go:334] "Generic (PLEG): container finished" podID="392572ea-c2d0-4e58-a206-1755d171539a" containerID="7338083df46ca28c162e6bfeaefb11dacbabb7e1a3eb2e47b0c9cb6890595ffd" exitCode=0 Nov 25 07:46:00 crc kubenswrapper[5043]: I1125 07:46:00.716196 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ws6vd" event={"ID":"392572ea-c2d0-4e58-a206-1755d171539a","Type":"ContainerDied","Data":"7338083df46ca28c162e6bfeaefb11dacbabb7e1a3eb2e47b0c9cb6890595ffd"} Nov 25 07:46:00 
crc kubenswrapper[5043]: I1125 07:46:00.716556 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ws6vd" event={"ID":"392572ea-c2d0-4e58-a206-1755d171539a","Type":"ContainerStarted","Data":"18758d824d3f022e234fcae4fa9758fc8c8605747fd465fcb9c44670347c8281"} Nov 25 07:46:00 crc kubenswrapper[5043]: I1125 07:46:00.728169 5043 generic.go:334] "Generic (PLEG): container finished" podID="d4e7968b-0535-4ef0-990e-55872b901287" containerID="bc4918983ee0267ed600f573a9ea3430b7069c86cef3cc741a6ae0ebd8ea0334" exitCode=0 Nov 25 07:46:00 crc kubenswrapper[5043]: I1125 07:46:00.728255 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6zd99" event={"ID":"d4e7968b-0535-4ef0-990e-55872b901287","Type":"ContainerDied","Data":"bc4918983ee0267ed600f573a9ea3430b7069c86cef3cc741a6ae0ebd8ea0334"} Nov 25 07:46:01 crc kubenswrapper[5043]: I1125 07:46:01.740925 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ws6vd" event={"ID":"392572ea-c2d0-4e58-a206-1755d171539a","Type":"ContainerStarted","Data":"13cf5448826fb63687649f681ba6ac784d8219c24df57d58be5d5da4b8b2f379"} Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.137005 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6zd99" Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.180131 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lptvw\" (UniqueName: \"kubernetes.io/projected/d4e7968b-0535-4ef0-990e-55872b901287-kube-api-access-lptvw\") pod \"d4e7968b-0535-4ef0-990e-55872b901287\" (UID: \"d4e7968b-0535-4ef0-990e-55872b901287\") " Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.180223 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4e7968b-0535-4ef0-990e-55872b901287-inventory\") pod \"d4e7968b-0535-4ef0-990e-55872b901287\" (UID: \"d4e7968b-0535-4ef0-990e-55872b901287\") " Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.180264 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4e7968b-0535-4ef0-990e-55872b901287-ssh-key\") pod \"d4e7968b-0535-4ef0-990e-55872b901287\" (UID: \"d4e7968b-0535-4ef0-990e-55872b901287\") " Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.188310 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4e7968b-0535-4ef0-990e-55872b901287-kube-api-access-lptvw" (OuterVolumeSpecName: "kube-api-access-lptvw") pod "d4e7968b-0535-4ef0-990e-55872b901287" (UID: "d4e7968b-0535-4ef0-990e-55872b901287"). InnerVolumeSpecName "kube-api-access-lptvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.203341 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4e7968b-0535-4ef0-990e-55872b901287-inventory" (OuterVolumeSpecName: "inventory") pod "d4e7968b-0535-4ef0-990e-55872b901287" (UID: "d4e7968b-0535-4ef0-990e-55872b901287"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.207638 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4e7968b-0535-4ef0-990e-55872b901287-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d4e7968b-0535-4ef0-990e-55872b901287" (UID: "d4e7968b-0535-4ef0-990e-55872b901287"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.282938 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lptvw\" (UniqueName: \"kubernetes.io/projected/d4e7968b-0535-4ef0-990e-55872b901287-kube-api-access-lptvw\") on node \"crc\" DevicePath \"\"" Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.283163 5043 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4e7968b-0535-4ef0-990e-55872b901287-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.283254 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4e7968b-0535-4ef0-990e-55872b901287-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.753204 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6zd99" event={"ID":"d4e7968b-0535-4ef0-990e-55872b901287","Type":"ContainerDied","Data":"9a471280486c0e48437e24a569463d08f7e30ea69b58061eb4a49e5797d7a691"} Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.754719 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a471280486c0e48437e24a569463d08f7e30ea69b58061eb4a49e5797d7a691" Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.754179 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6zd99" Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.756819 5043 generic.go:334] "Generic (PLEG): container finished" podID="392572ea-c2d0-4e58-a206-1755d171539a" containerID="13cf5448826fb63687649f681ba6ac784d8219c24df57d58be5d5da4b8b2f379" exitCode=0 Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.756846 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ws6vd" event={"ID":"392572ea-c2d0-4e58-a206-1755d171539a","Type":"ContainerDied","Data":"13cf5448826fb63687649f681ba6ac784d8219c24df57d58be5d5da4b8b2f379"} Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.862479 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jlzg9"] Nov 25 07:46:02 crc kubenswrapper[5043]: E1125 07:46:02.862913 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4e7968b-0535-4ef0-990e-55872b901287" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.862928 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4e7968b-0535-4ef0-990e-55872b901287" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.863111 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4e7968b-0535-4ef0-990e-55872b901287" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.863645 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jlzg9" Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.865713 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.865821 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.865956 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2" Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.866001 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.882782 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jlzg9"] Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.901944 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1504e796-1193-4c5c-a0a1-426c9b1b0702-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jlzg9\" (UID: \"1504e796-1193-4c5c-a0a1-426c9b1b0702\") " pod="openstack/ssh-known-hosts-edpm-deployment-jlzg9" Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.901990 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-999bj\" (UniqueName: \"kubernetes.io/projected/1504e796-1193-4c5c-a0a1-426c9b1b0702-kube-api-access-999bj\") pod \"ssh-known-hosts-edpm-deployment-jlzg9\" (UID: \"1504e796-1193-4c5c-a0a1-426c9b1b0702\") " pod="openstack/ssh-known-hosts-edpm-deployment-jlzg9" Nov 25 07:46:02 crc kubenswrapper[5043]: I1125 07:46:02.902171 5043 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1504e796-1193-4c5c-a0a1-426c9b1b0702-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jlzg9\" (UID: \"1504e796-1193-4c5c-a0a1-426c9b1b0702\") " pod="openstack/ssh-known-hosts-edpm-deployment-jlzg9" Nov 25 07:46:03 crc kubenswrapper[5043]: I1125 07:46:03.003518 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1504e796-1193-4c5c-a0a1-426c9b1b0702-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jlzg9\" (UID: \"1504e796-1193-4c5c-a0a1-426c9b1b0702\") " pod="openstack/ssh-known-hosts-edpm-deployment-jlzg9" Nov 25 07:46:03 crc kubenswrapper[5043]: I1125 07:46:03.003564 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-999bj\" (UniqueName: \"kubernetes.io/projected/1504e796-1193-4c5c-a0a1-426c9b1b0702-kube-api-access-999bj\") pod \"ssh-known-hosts-edpm-deployment-jlzg9\" (UID: \"1504e796-1193-4c5c-a0a1-426c9b1b0702\") " pod="openstack/ssh-known-hosts-edpm-deployment-jlzg9" Nov 25 07:46:03 crc kubenswrapper[5043]: I1125 07:46:03.003771 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1504e796-1193-4c5c-a0a1-426c9b1b0702-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jlzg9\" (UID: \"1504e796-1193-4c5c-a0a1-426c9b1b0702\") " pod="openstack/ssh-known-hosts-edpm-deployment-jlzg9" Nov 25 07:46:03 crc kubenswrapper[5043]: I1125 07:46:03.008434 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1504e796-1193-4c5c-a0a1-426c9b1b0702-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jlzg9\" (UID: \"1504e796-1193-4c5c-a0a1-426c9b1b0702\") " pod="openstack/ssh-known-hosts-edpm-deployment-jlzg9" Nov 25 07:46:03 crc 
kubenswrapper[5043]: I1125 07:46:03.009151 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1504e796-1193-4c5c-a0a1-426c9b1b0702-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jlzg9\" (UID: \"1504e796-1193-4c5c-a0a1-426c9b1b0702\") " pod="openstack/ssh-known-hosts-edpm-deployment-jlzg9" Nov 25 07:46:03 crc kubenswrapper[5043]: I1125 07:46:03.023529 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-999bj\" (UniqueName: \"kubernetes.io/projected/1504e796-1193-4c5c-a0a1-426c9b1b0702-kube-api-access-999bj\") pod \"ssh-known-hosts-edpm-deployment-jlzg9\" (UID: \"1504e796-1193-4c5c-a0a1-426c9b1b0702\") " pod="openstack/ssh-known-hosts-edpm-deployment-jlzg9" Nov 25 07:46:03 crc kubenswrapper[5043]: I1125 07:46:03.181412 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jlzg9" Nov 25 07:46:03 crc kubenswrapper[5043]: I1125 07:46:03.784387 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ws6vd" event={"ID":"392572ea-c2d0-4e58-a206-1755d171539a","Type":"ContainerStarted","Data":"6e48c30876a44b152c62b452955e4520f133e88a65c3b09034bb7cf4928ac8c2"} Nov 25 07:46:03 crc kubenswrapper[5043]: I1125 07:46:03.827685 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ws6vd" podStartSLOduration=3.319327562 podStartE2EDuration="5.827667521s" podCreationTimestamp="2025-11-25 07:45:58 +0000 UTC" firstStartedPulling="2025-11-25 07:46:00.718531201 +0000 UTC m=+1824.886726962" lastFinishedPulling="2025-11-25 07:46:03.2268712 +0000 UTC m=+1827.395066921" observedRunningTime="2025-11-25 07:46:03.821590871 +0000 UTC m=+1827.989786612" watchObservedRunningTime="2025-11-25 07:46:03.827667521 +0000 UTC m=+1827.995863252" Nov 25 07:46:03 crc 
kubenswrapper[5043]: I1125 07:46:03.963836 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474" Nov 25 07:46:03 crc kubenswrapper[5043]: E1125 07:46:03.964279 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:46:04 crc kubenswrapper[5043]: I1125 07:46:04.433785 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jlzg9"] Nov 25 07:46:04 crc kubenswrapper[5043]: I1125 07:46:04.791939 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jlzg9" event={"ID":"1504e796-1193-4c5c-a0a1-426c9b1b0702","Type":"ContainerStarted","Data":"0d6fcf747d93ce2ca6b158015035342437fec42e9a5a5d9687663d36189c7737"} Nov 25 07:46:05 crc kubenswrapper[5043]: I1125 07:46:05.806051 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jlzg9" event={"ID":"1504e796-1193-4c5c-a0a1-426c9b1b0702","Type":"ContainerStarted","Data":"512b8c9486348df649743ee995641b8272a8998df958db3475d0edd50e56083b"} Nov 25 07:46:05 crc kubenswrapper[5043]: I1125 07:46:05.831355 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-jlzg9" podStartSLOduration=3.353649114 podStartE2EDuration="3.831334055s" podCreationTimestamp="2025-11-25 07:46:02 +0000 UTC" firstStartedPulling="2025-11-25 07:46:04.445314258 +0000 UTC m=+1828.613509979" lastFinishedPulling="2025-11-25 07:46:04.922999189 +0000 UTC m=+1829.091194920" observedRunningTime="2025-11-25 07:46:05.824931535 +0000 
UTC m=+1829.993127256" watchObservedRunningTime="2025-11-25 07:46:05.831334055 +0000 UTC m=+1829.999529786" Nov 25 07:46:09 crc kubenswrapper[5043]: I1125 07:46:09.354292 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ws6vd" Nov 25 07:46:09 crc kubenswrapper[5043]: I1125 07:46:09.356229 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ws6vd" Nov 25 07:46:09 crc kubenswrapper[5043]: I1125 07:46:09.434429 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ws6vd" Nov 25 07:46:09 crc kubenswrapper[5043]: I1125 07:46:09.901364 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ws6vd" Nov 25 07:46:09 crc kubenswrapper[5043]: I1125 07:46:09.942113 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ws6vd"] Nov 25 07:46:11 crc kubenswrapper[5043]: I1125 07:46:11.870317 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ws6vd" podUID="392572ea-c2d0-4e58-a206-1755d171539a" containerName="registry-server" containerID="cri-o://6e48c30876a44b152c62b452955e4520f133e88a65c3b09034bb7cf4928ac8c2" gracePeriod=2 Nov 25 07:46:12 crc kubenswrapper[5043]: E1125 07:46:12.221684 5043 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1504e796_1193_4c5c_a0a1_426c9b1b0702.slice/crio-512b8c9486348df649743ee995641b8272a8998df958db3475d0edd50e56083b.scope\": RecentStats: unable to find data in memory cache]" Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.318056 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ws6vd" Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.490876 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrfq2\" (UniqueName: \"kubernetes.io/projected/392572ea-c2d0-4e58-a206-1755d171539a-kube-api-access-zrfq2\") pod \"392572ea-c2d0-4e58-a206-1755d171539a\" (UID: \"392572ea-c2d0-4e58-a206-1755d171539a\") " Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.491036 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/392572ea-c2d0-4e58-a206-1755d171539a-utilities\") pod \"392572ea-c2d0-4e58-a206-1755d171539a\" (UID: \"392572ea-c2d0-4e58-a206-1755d171539a\") " Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.491185 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/392572ea-c2d0-4e58-a206-1755d171539a-catalog-content\") pod \"392572ea-c2d0-4e58-a206-1755d171539a\" (UID: \"392572ea-c2d0-4e58-a206-1755d171539a\") " Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.492163 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/392572ea-c2d0-4e58-a206-1755d171539a-utilities" (OuterVolumeSpecName: "utilities") pod "392572ea-c2d0-4e58-a206-1755d171539a" (UID: "392572ea-c2d0-4e58-a206-1755d171539a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.496777 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/392572ea-c2d0-4e58-a206-1755d171539a-kube-api-access-zrfq2" (OuterVolumeSpecName: "kube-api-access-zrfq2") pod "392572ea-c2d0-4e58-a206-1755d171539a" (UID: "392572ea-c2d0-4e58-a206-1755d171539a"). InnerVolumeSpecName "kube-api-access-zrfq2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.535668 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/392572ea-c2d0-4e58-a206-1755d171539a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "392572ea-c2d0-4e58-a206-1755d171539a" (UID: "392572ea-c2d0-4e58-a206-1755d171539a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.593871 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrfq2\" (UniqueName: \"kubernetes.io/projected/392572ea-c2d0-4e58-a206-1755d171539a-kube-api-access-zrfq2\") on node \"crc\" DevicePath \"\"" Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.593904 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/392572ea-c2d0-4e58-a206-1755d171539a-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.593915 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/392572ea-c2d0-4e58-a206-1755d171539a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.883394 5043 generic.go:334] "Generic (PLEG): container finished" podID="1504e796-1193-4c5c-a0a1-426c9b1b0702" containerID="512b8c9486348df649743ee995641b8272a8998df958db3475d0edd50e56083b" exitCode=0 Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.883514 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jlzg9" event={"ID":"1504e796-1193-4c5c-a0a1-426c9b1b0702","Type":"ContainerDied","Data":"512b8c9486348df649743ee995641b8272a8998df958db3475d0edd50e56083b"} Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.887186 5043 generic.go:334] "Generic (PLEG): container 
finished" podID="392572ea-c2d0-4e58-a206-1755d171539a" containerID="6e48c30876a44b152c62b452955e4520f133e88a65c3b09034bb7cf4928ac8c2" exitCode=0 Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.887239 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ws6vd" event={"ID":"392572ea-c2d0-4e58-a206-1755d171539a","Type":"ContainerDied","Data":"6e48c30876a44b152c62b452955e4520f133e88a65c3b09034bb7cf4928ac8c2"} Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.887274 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ws6vd" Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.887303 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ws6vd" event={"ID":"392572ea-c2d0-4e58-a206-1755d171539a","Type":"ContainerDied","Data":"18758d824d3f022e234fcae4fa9758fc8c8605747fd465fcb9c44670347c8281"} Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.887336 5043 scope.go:117] "RemoveContainer" containerID="6e48c30876a44b152c62b452955e4520f133e88a65c3b09034bb7cf4928ac8c2" Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.921785 5043 scope.go:117] "RemoveContainer" containerID="13cf5448826fb63687649f681ba6ac784d8219c24df57d58be5d5da4b8b2f379" Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.950411 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ws6vd"] Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.962147 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ws6vd"] Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.963740 5043 scope.go:117] "RemoveContainer" containerID="7338083df46ca28c162e6bfeaefb11dacbabb7e1a3eb2e47b0c9cb6890595ffd" Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.976912 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="392572ea-c2d0-4e58-a206-1755d171539a" path="/var/lib/kubelet/pods/392572ea-c2d0-4e58-a206-1755d171539a/volumes" Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.998144 5043 scope.go:117] "RemoveContainer" containerID="6e48c30876a44b152c62b452955e4520f133e88a65c3b09034bb7cf4928ac8c2" Nov 25 07:46:12 crc kubenswrapper[5043]: E1125 07:46:12.998530 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e48c30876a44b152c62b452955e4520f133e88a65c3b09034bb7cf4928ac8c2\": container with ID starting with 6e48c30876a44b152c62b452955e4520f133e88a65c3b09034bb7cf4928ac8c2 not found: ID does not exist" containerID="6e48c30876a44b152c62b452955e4520f133e88a65c3b09034bb7cf4928ac8c2" Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.998627 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e48c30876a44b152c62b452955e4520f133e88a65c3b09034bb7cf4928ac8c2"} err="failed to get container status \"6e48c30876a44b152c62b452955e4520f133e88a65c3b09034bb7cf4928ac8c2\": rpc error: code = NotFound desc = could not find container \"6e48c30876a44b152c62b452955e4520f133e88a65c3b09034bb7cf4928ac8c2\": container with ID starting with 6e48c30876a44b152c62b452955e4520f133e88a65c3b09034bb7cf4928ac8c2 not found: ID does not exist" Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.998710 5043 scope.go:117] "RemoveContainer" containerID="13cf5448826fb63687649f681ba6ac784d8219c24df57d58be5d5da4b8b2f379" Nov 25 07:46:12 crc kubenswrapper[5043]: E1125 07:46:12.998981 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13cf5448826fb63687649f681ba6ac784d8219c24df57d58be5d5da4b8b2f379\": container with ID starting with 13cf5448826fb63687649f681ba6ac784d8219c24df57d58be5d5da4b8b2f379 not found: ID does not exist" 
containerID="13cf5448826fb63687649f681ba6ac784d8219c24df57d58be5d5da4b8b2f379" Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.999048 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13cf5448826fb63687649f681ba6ac784d8219c24df57d58be5d5da4b8b2f379"} err="failed to get container status \"13cf5448826fb63687649f681ba6ac784d8219c24df57d58be5d5da4b8b2f379\": rpc error: code = NotFound desc = could not find container \"13cf5448826fb63687649f681ba6ac784d8219c24df57d58be5d5da4b8b2f379\": container with ID starting with 13cf5448826fb63687649f681ba6ac784d8219c24df57d58be5d5da4b8b2f379 not found: ID does not exist" Nov 25 07:46:12 crc kubenswrapper[5043]: I1125 07:46:12.999103 5043 scope.go:117] "RemoveContainer" containerID="7338083df46ca28c162e6bfeaefb11dacbabb7e1a3eb2e47b0c9cb6890595ffd" Nov 25 07:46:13 crc kubenswrapper[5043]: E1125 07:46:13.000047 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7338083df46ca28c162e6bfeaefb11dacbabb7e1a3eb2e47b0c9cb6890595ffd\": container with ID starting with 7338083df46ca28c162e6bfeaefb11dacbabb7e1a3eb2e47b0c9cb6890595ffd not found: ID does not exist" containerID="7338083df46ca28c162e6bfeaefb11dacbabb7e1a3eb2e47b0c9cb6890595ffd" Nov 25 07:46:13 crc kubenswrapper[5043]: I1125 07:46:13.000284 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7338083df46ca28c162e6bfeaefb11dacbabb7e1a3eb2e47b0c9cb6890595ffd"} err="failed to get container status \"7338083df46ca28c162e6bfeaefb11dacbabb7e1a3eb2e47b0c9cb6890595ffd\": rpc error: code = NotFound desc = could not find container \"7338083df46ca28c162e6bfeaefb11dacbabb7e1a3eb2e47b0c9cb6890595ffd\": container with ID starting with 7338083df46ca28c162e6bfeaefb11dacbabb7e1a3eb2e47b0c9cb6890595ffd not found: ID does not exist" Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.377074 5043 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jlzg9" Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.533520 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1504e796-1193-4c5c-a0a1-426c9b1b0702-inventory-0\") pod \"1504e796-1193-4c5c-a0a1-426c9b1b0702\" (UID: \"1504e796-1193-4c5c-a0a1-426c9b1b0702\") " Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.533642 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1504e796-1193-4c5c-a0a1-426c9b1b0702-ssh-key-openstack-edpm-ipam\") pod \"1504e796-1193-4c5c-a0a1-426c9b1b0702\" (UID: \"1504e796-1193-4c5c-a0a1-426c9b1b0702\") " Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.533677 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-999bj\" (UniqueName: \"kubernetes.io/projected/1504e796-1193-4c5c-a0a1-426c9b1b0702-kube-api-access-999bj\") pod \"1504e796-1193-4c5c-a0a1-426c9b1b0702\" (UID: \"1504e796-1193-4c5c-a0a1-426c9b1b0702\") " Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.541315 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1504e796-1193-4c5c-a0a1-426c9b1b0702-kube-api-access-999bj" (OuterVolumeSpecName: "kube-api-access-999bj") pod "1504e796-1193-4c5c-a0a1-426c9b1b0702" (UID: "1504e796-1193-4c5c-a0a1-426c9b1b0702"). InnerVolumeSpecName "kube-api-access-999bj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.560530 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1504e796-1193-4c5c-a0a1-426c9b1b0702-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1504e796-1193-4c5c-a0a1-426c9b1b0702" (UID: "1504e796-1193-4c5c-a0a1-426c9b1b0702"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.574251 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1504e796-1193-4c5c-a0a1-426c9b1b0702-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "1504e796-1193-4c5c-a0a1-426c9b1b0702" (UID: "1504e796-1193-4c5c-a0a1-426c9b1b0702"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.635287 5043 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/1504e796-1193-4c5c-a0a1-426c9b1b0702-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.635325 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1504e796-1193-4c5c-a0a1-426c9b1b0702-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.635341 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-999bj\" (UniqueName: \"kubernetes.io/projected/1504e796-1193-4c5c-a0a1-426c9b1b0702-kube-api-access-999bj\") on node \"crc\" DevicePath \"\"" Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.912590 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jlzg9" 
event={"ID":"1504e796-1193-4c5c-a0a1-426c9b1b0702","Type":"ContainerDied","Data":"0d6fcf747d93ce2ca6b158015035342437fec42e9a5a5d9687663d36189c7737"} Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.912654 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d6fcf747d93ce2ca6b158015035342437fec42e9a5a5d9687663d36189c7737" Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.912661 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jlzg9" Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.984103 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-74jzv"] Nov 25 07:46:14 crc kubenswrapper[5043]: E1125 07:46:14.984448 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1504e796-1193-4c5c-a0a1-426c9b1b0702" containerName="ssh-known-hosts-edpm-deployment" Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.984465 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="1504e796-1193-4c5c-a0a1-426c9b1b0702" containerName="ssh-known-hosts-edpm-deployment" Nov 25 07:46:14 crc kubenswrapper[5043]: E1125 07:46:14.984478 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="392572ea-c2d0-4e58-a206-1755d171539a" containerName="registry-server" Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.984484 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="392572ea-c2d0-4e58-a206-1755d171539a" containerName="registry-server" Nov 25 07:46:14 crc kubenswrapper[5043]: E1125 07:46:14.984499 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="392572ea-c2d0-4e58-a206-1755d171539a" containerName="extract-utilities" Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.984505 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="392572ea-c2d0-4e58-a206-1755d171539a" containerName="extract-utilities" Nov 25 07:46:14 crc 
kubenswrapper[5043]: E1125 07:46:14.984517 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="392572ea-c2d0-4e58-a206-1755d171539a" containerName="extract-content" Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.984524 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="392572ea-c2d0-4e58-a206-1755d171539a" containerName="extract-content" Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.987802 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="392572ea-c2d0-4e58-a206-1755d171539a" containerName="registry-server" Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.987852 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="1504e796-1193-4c5c-a0a1-426c9b1b0702" containerName="ssh-known-hosts-edpm-deployment" Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.988415 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-74jzv" Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.991195 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.991941 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.992197 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:46:14 crc kubenswrapper[5043]: I1125 07:46:14.994626 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2" Nov 25 07:46:15 crc kubenswrapper[5043]: I1125 07:46:15.000391 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-74jzv"] Nov 25 07:46:15 crc kubenswrapper[5043]: I1125 07:46:15.045587 5043 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8299fcc4-ab8b-4f5c-997d-699bf3910311-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-74jzv\" (UID: \"8299fcc4-ab8b-4f5c-997d-699bf3910311\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-74jzv" Nov 25 07:46:15 crc kubenswrapper[5043]: I1125 07:46:15.045821 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8299fcc4-ab8b-4f5c-997d-699bf3910311-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-74jzv\" (UID: \"8299fcc4-ab8b-4f5c-997d-699bf3910311\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-74jzv" Nov 25 07:46:15 crc kubenswrapper[5043]: I1125 07:46:15.045872 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n565\" (UniqueName: \"kubernetes.io/projected/8299fcc4-ab8b-4f5c-997d-699bf3910311-kube-api-access-8n565\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-74jzv\" (UID: \"8299fcc4-ab8b-4f5c-997d-699bf3910311\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-74jzv" Nov 25 07:46:15 crc kubenswrapper[5043]: I1125 07:46:15.147096 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8299fcc4-ab8b-4f5c-997d-699bf3910311-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-74jzv\" (UID: \"8299fcc4-ab8b-4f5c-997d-699bf3910311\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-74jzv" Nov 25 07:46:15 crc kubenswrapper[5043]: I1125 07:46:15.147221 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8299fcc4-ab8b-4f5c-997d-699bf3910311-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-74jzv\" (UID: \"8299fcc4-ab8b-4f5c-997d-699bf3910311\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-74jzv" Nov 25 07:46:15 crc kubenswrapper[5043]: I1125 07:46:15.147262 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n565\" (UniqueName: \"kubernetes.io/projected/8299fcc4-ab8b-4f5c-997d-699bf3910311-kube-api-access-8n565\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-74jzv\" (UID: \"8299fcc4-ab8b-4f5c-997d-699bf3910311\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-74jzv" Nov 25 07:46:15 crc kubenswrapper[5043]: I1125 07:46:15.153454 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8299fcc4-ab8b-4f5c-997d-699bf3910311-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-74jzv\" (UID: \"8299fcc4-ab8b-4f5c-997d-699bf3910311\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-74jzv" Nov 25 07:46:15 crc kubenswrapper[5043]: I1125 07:46:15.165241 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8299fcc4-ab8b-4f5c-997d-699bf3910311-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-74jzv\" (UID: \"8299fcc4-ab8b-4f5c-997d-699bf3910311\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-74jzv" Nov 25 07:46:15 crc kubenswrapper[5043]: I1125 07:46:15.177303 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n565\" (UniqueName: \"kubernetes.io/projected/8299fcc4-ab8b-4f5c-997d-699bf3910311-kube-api-access-8n565\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-74jzv\" (UID: \"8299fcc4-ab8b-4f5c-997d-699bf3910311\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-74jzv" Nov 25 07:46:15 crc kubenswrapper[5043]: I1125 07:46:15.303561 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-74jzv" Nov 25 07:46:15 crc kubenswrapper[5043]: I1125 07:46:15.850477 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-74jzv"] Nov 25 07:46:15 crc kubenswrapper[5043]: I1125 07:46:15.924875 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-74jzv" event={"ID":"8299fcc4-ab8b-4f5c-997d-699bf3910311","Type":"ContainerStarted","Data":"43e1fc80a849a734dcb185d24fcb010ead1440b3609ec1bb68531d1331895cad"} Nov 25 07:46:16 crc kubenswrapper[5043]: I1125 07:46:16.949885 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-74jzv" event={"ID":"8299fcc4-ab8b-4f5c-997d-699bf3910311","Type":"ContainerStarted","Data":"1447dacc23de9a738a86c1b92174489f127d097a238a6f41b4de3d16bfa6029e"} Nov 25 07:46:16 crc kubenswrapper[5043]: I1125 07:46:16.963241 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474" Nov 25 07:46:16 crc kubenswrapper[5043]: E1125 07:46:16.965217 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:46:16 crc kubenswrapper[5043]: I1125 07:46:16.976161 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-74jzv" podStartSLOduration=2.543429986 podStartE2EDuration="2.976144225s" podCreationTimestamp="2025-11-25 07:46:14 +0000 UTC" firstStartedPulling="2025-11-25 07:46:15.839020381 +0000 UTC 
m=+1840.007216122" lastFinishedPulling="2025-11-25 07:46:16.27173458 +0000 UTC m=+1840.439930361" observedRunningTime="2025-11-25 07:46:16.967836715 +0000 UTC m=+1841.136032456" watchObservedRunningTime="2025-11-25 07:46:16.976144225 +0000 UTC m=+1841.144339946" Nov 25 07:46:25 crc kubenswrapper[5043]: I1125 07:46:25.037977 5043 generic.go:334] "Generic (PLEG): container finished" podID="8299fcc4-ab8b-4f5c-997d-699bf3910311" containerID="1447dacc23de9a738a86c1b92174489f127d097a238a6f41b4de3d16bfa6029e" exitCode=0 Nov 25 07:46:25 crc kubenswrapper[5043]: I1125 07:46:25.038133 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-74jzv" event={"ID":"8299fcc4-ab8b-4f5c-997d-699bf3910311","Type":"ContainerDied","Data":"1447dacc23de9a738a86c1b92174489f127d097a238a6f41b4de3d16bfa6029e"} Nov 25 07:46:25 crc kubenswrapper[5043]: I1125 07:46:25.067490 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-wzdq9"] Nov 25 07:46:25 crc kubenswrapper[5043]: I1125 07:46:25.083747 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-wzdq9"] Nov 25 07:46:26 crc kubenswrapper[5043]: I1125 07:46:26.462144 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-74jzv" Nov 25 07:46:26 crc kubenswrapper[5043]: I1125 07:46:26.573044 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8299fcc4-ab8b-4f5c-997d-699bf3910311-ssh-key\") pod \"8299fcc4-ab8b-4f5c-997d-699bf3910311\" (UID: \"8299fcc4-ab8b-4f5c-997d-699bf3910311\") " Nov 25 07:46:26 crc kubenswrapper[5043]: I1125 07:46:26.573456 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n565\" (UniqueName: \"kubernetes.io/projected/8299fcc4-ab8b-4f5c-997d-699bf3910311-kube-api-access-8n565\") pod \"8299fcc4-ab8b-4f5c-997d-699bf3910311\" (UID: \"8299fcc4-ab8b-4f5c-997d-699bf3910311\") " Nov 25 07:46:26 crc kubenswrapper[5043]: I1125 07:46:26.573510 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8299fcc4-ab8b-4f5c-997d-699bf3910311-inventory\") pod \"8299fcc4-ab8b-4f5c-997d-699bf3910311\" (UID: \"8299fcc4-ab8b-4f5c-997d-699bf3910311\") " Nov 25 07:46:26 crc kubenswrapper[5043]: I1125 07:46:26.580507 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8299fcc4-ab8b-4f5c-997d-699bf3910311-kube-api-access-8n565" (OuterVolumeSpecName: "kube-api-access-8n565") pod "8299fcc4-ab8b-4f5c-997d-699bf3910311" (UID: "8299fcc4-ab8b-4f5c-997d-699bf3910311"). InnerVolumeSpecName "kube-api-access-8n565". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:46:26 crc kubenswrapper[5043]: I1125 07:46:26.615357 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8299fcc4-ab8b-4f5c-997d-699bf3910311-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8299fcc4-ab8b-4f5c-997d-699bf3910311" (UID: "8299fcc4-ab8b-4f5c-997d-699bf3910311"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:46:26 crc kubenswrapper[5043]: I1125 07:46:26.626419 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8299fcc4-ab8b-4f5c-997d-699bf3910311-inventory" (OuterVolumeSpecName: "inventory") pod "8299fcc4-ab8b-4f5c-997d-699bf3910311" (UID: "8299fcc4-ab8b-4f5c-997d-699bf3910311"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:46:26 crc kubenswrapper[5043]: I1125 07:46:26.676159 5043 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8299fcc4-ab8b-4f5c-997d-699bf3910311-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:46:26 crc kubenswrapper[5043]: I1125 07:46:26.676210 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8299fcc4-ab8b-4f5c-997d-699bf3910311-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:46:26 crc kubenswrapper[5043]: I1125 07:46:26.676231 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n565\" (UniqueName: \"kubernetes.io/projected/8299fcc4-ab8b-4f5c-997d-699bf3910311-kube-api-access-8n565\") on node \"crc\" DevicePath \"\"" Nov 25 07:46:26 crc kubenswrapper[5043]: I1125 07:46:26.974973 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc9f9c54-9408-4021-b741-ffd7c2f49f60" path="/var/lib/kubelet/pods/bc9f9c54-9408-4021-b741-ffd7c2f49f60/volumes" Nov 25 07:46:27 crc kubenswrapper[5043]: I1125 07:46:27.056619 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-74jzv" event={"ID":"8299fcc4-ab8b-4f5c-997d-699bf3910311","Type":"ContainerDied","Data":"43e1fc80a849a734dcb185d24fcb010ead1440b3609ec1bb68531d1331895cad"} Nov 25 07:46:27 crc kubenswrapper[5043]: I1125 07:46:27.056666 5043 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="43e1fc80a849a734dcb185d24fcb010ead1440b3609ec1bb68531d1331895cad" Nov 25 07:46:27 crc kubenswrapper[5043]: I1125 07:46:27.057068 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-74jzv" Nov 25 07:46:27 crc kubenswrapper[5043]: I1125 07:46:27.156231 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9"] Nov 25 07:46:27 crc kubenswrapper[5043]: E1125 07:46:27.157570 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8299fcc4-ab8b-4f5c-997d-699bf3910311" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:46:27 crc kubenswrapper[5043]: I1125 07:46:27.157731 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="8299fcc4-ab8b-4f5c-997d-699bf3910311" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:46:27 crc kubenswrapper[5043]: I1125 07:46:27.158172 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="8299fcc4-ab8b-4f5c-997d-699bf3910311" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:46:27 crc kubenswrapper[5043]: I1125 07:46:27.159442 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9" Nov 25 07:46:27 crc kubenswrapper[5043]: I1125 07:46:27.161275 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:46:27 crc kubenswrapper[5043]: I1125 07:46:27.161810 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:46:27 crc kubenswrapper[5043]: I1125 07:46:27.161839 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:46:27 crc kubenswrapper[5043]: I1125 07:46:27.161847 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2" Nov 25 07:46:27 crc kubenswrapper[5043]: I1125 07:46:27.169080 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9"] Nov 25 07:46:27 crc kubenswrapper[5043]: I1125 07:46:27.287646 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhjtt\" (UniqueName: \"kubernetes.io/projected/bea4e7e9-3d87-4933-ab50-bfc57646622d-kube-api-access-dhjtt\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9\" (UID: \"bea4e7e9-3d87-4933-ab50-bfc57646622d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9" Nov 25 07:46:27 crc kubenswrapper[5043]: I1125 07:46:27.288127 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bea4e7e9-3d87-4933-ab50-bfc57646622d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9\" (UID: \"bea4e7e9-3d87-4933-ab50-bfc57646622d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9" Nov 25 07:46:27 crc kubenswrapper[5043]: I1125 07:46:27.288348 5043 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bea4e7e9-3d87-4933-ab50-bfc57646622d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9\" (UID: \"bea4e7e9-3d87-4933-ab50-bfc57646622d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9" Nov 25 07:46:27 crc kubenswrapper[5043]: I1125 07:46:27.389941 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bea4e7e9-3d87-4933-ab50-bfc57646622d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9\" (UID: \"bea4e7e9-3d87-4933-ab50-bfc57646622d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9" Nov 25 07:46:27 crc kubenswrapper[5043]: I1125 07:46:27.390066 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhjtt\" (UniqueName: \"kubernetes.io/projected/bea4e7e9-3d87-4933-ab50-bfc57646622d-kube-api-access-dhjtt\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9\" (UID: \"bea4e7e9-3d87-4933-ab50-bfc57646622d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9" Nov 25 07:46:27 crc kubenswrapper[5043]: I1125 07:46:27.390213 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bea4e7e9-3d87-4933-ab50-bfc57646622d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9\" (UID: \"bea4e7e9-3d87-4933-ab50-bfc57646622d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9" Nov 25 07:46:27 crc kubenswrapper[5043]: I1125 07:46:27.397236 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bea4e7e9-3d87-4933-ab50-bfc57646622d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9\" (UID: 
\"bea4e7e9-3d87-4933-ab50-bfc57646622d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9" Nov 25 07:46:27 crc kubenswrapper[5043]: I1125 07:46:27.399211 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bea4e7e9-3d87-4933-ab50-bfc57646622d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9\" (UID: \"bea4e7e9-3d87-4933-ab50-bfc57646622d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9" Nov 25 07:46:27 crc kubenswrapper[5043]: I1125 07:46:27.416104 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhjtt\" (UniqueName: \"kubernetes.io/projected/bea4e7e9-3d87-4933-ab50-bfc57646622d-kube-api-access-dhjtt\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9\" (UID: \"bea4e7e9-3d87-4933-ab50-bfc57646622d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9" Nov 25 07:46:27 crc kubenswrapper[5043]: I1125 07:46:27.482735 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9" Nov 25 07:46:27 crc kubenswrapper[5043]: I1125 07:46:27.901033 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9"] Nov 25 07:46:27 crc kubenswrapper[5043]: W1125 07:46:27.908114 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbea4e7e9_3d87_4933_ab50_bfc57646622d.slice/crio-f622cb45cf8e0afb11d26fc0b37d86fcfae03073ed783c2b6cd8603b1772eae2 WatchSource:0}: Error finding container f622cb45cf8e0afb11d26fc0b37d86fcfae03073ed783c2b6cd8603b1772eae2: Status 404 returned error can't find the container with id f622cb45cf8e0afb11d26fc0b37d86fcfae03073ed783c2b6cd8603b1772eae2 Nov 25 07:46:28 crc kubenswrapper[5043]: I1125 07:46:28.074931 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9" event={"ID":"bea4e7e9-3d87-4933-ab50-bfc57646622d","Type":"ContainerStarted","Data":"f622cb45cf8e0afb11d26fc0b37d86fcfae03073ed783c2b6cd8603b1772eae2"} Nov 25 07:46:29 crc kubenswrapper[5043]: I1125 07:46:29.083816 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9" event={"ID":"bea4e7e9-3d87-4933-ab50-bfc57646622d","Type":"ContainerStarted","Data":"fc34be8edc0c22997dd77919dd55c12f5e6b326beda52beaf63f27f05dfe1d8f"} Nov 25 07:46:29 crc kubenswrapper[5043]: I1125 07:46:29.114294 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9" podStartSLOduration=1.719446296 podStartE2EDuration="2.114271903s" podCreationTimestamp="2025-11-25 07:46:27 +0000 UTC" firstStartedPulling="2025-11-25 07:46:27.91040319 +0000 UTC m=+1852.078598931" lastFinishedPulling="2025-11-25 07:46:28.305228787 +0000 UTC m=+1852.473424538" 
observedRunningTime="2025-11-25 07:46:29.101348811 +0000 UTC m=+1853.269544552" watchObservedRunningTime="2025-11-25 07:46:29.114271903 +0000 UTC m=+1853.282467624" Nov 25 07:46:30 crc kubenswrapper[5043]: I1125 07:46:30.962695 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474" Nov 25 07:46:32 crc kubenswrapper[5043]: I1125 07:46:32.139796 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"7b33dac25f840f4655adb9773800d9ac479fdf0da60d9d9474d21c037b7a5eed"} Nov 25 07:46:39 crc kubenswrapper[5043]: I1125 07:46:39.217570 5043 generic.go:334] "Generic (PLEG): container finished" podID="bea4e7e9-3d87-4933-ab50-bfc57646622d" containerID="fc34be8edc0c22997dd77919dd55c12f5e6b326beda52beaf63f27f05dfe1d8f" exitCode=0 Nov 25 07:46:39 crc kubenswrapper[5043]: I1125 07:46:39.217656 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9" event={"ID":"bea4e7e9-3d87-4933-ab50-bfc57646622d","Type":"ContainerDied","Data":"fc34be8edc0c22997dd77919dd55c12f5e6b326beda52beaf63f27f05dfe1d8f"} Nov 25 07:46:40 crc kubenswrapper[5043]: I1125 07:46:40.708403 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9" Nov 25 07:46:40 crc kubenswrapper[5043]: I1125 07:46:40.849778 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bea4e7e9-3d87-4933-ab50-bfc57646622d-inventory\") pod \"bea4e7e9-3d87-4933-ab50-bfc57646622d\" (UID: \"bea4e7e9-3d87-4933-ab50-bfc57646622d\") " Nov 25 07:46:40 crc kubenswrapper[5043]: I1125 07:46:40.850009 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bea4e7e9-3d87-4933-ab50-bfc57646622d-ssh-key\") pod \"bea4e7e9-3d87-4933-ab50-bfc57646622d\" (UID: \"bea4e7e9-3d87-4933-ab50-bfc57646622d\") " Nov 25 07:46:40 crc kubenswrapper[5043]: I1125 07:46:40.850098 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhjtt\" (UniqueName: \"kubernetes.io/projected/bea4e7e9-3d87-4933-ab50-bfc57646622d-kube-api-access-dhjtt\") pod \"bea4e7e9-3d87-4933-ab50-bfc57646622d\" (UID: \"bea4e7e9-3d87-4933-ab50-bfc57646622d\") " Nov 25 07:46:40 crc kubenswrapper[5043]: I1125 07:46:40.860694 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bea4e7e9-3d87-4933-ab50-bfc57646622d-kube-api-access-dhjtt" (OuterVolumeSpecName: "kube-api-access-dhjtt") pod "bea4e7e9-3d87-4933-ab50-bfc57646622d" (UID: "bea4e7e9-3d87-4933-ab50-bfc57646622d"). InnerVolumeSpecName "kube-api-access-dhjtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:46:40 crc kubenswrapper[5043]: I1125 07:46:40.876346 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea4e7e9-3d87-4933-ab50-bfc57646622d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bea4e7e9-3d87-4933-ab50-bfc57646622d" (UID: "bea4e7e9-3d87-4933-ab50-bfc57646622d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:46:40 crc kubenswrapper[5043]: I1125 07:46:40.898308 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea4e7e9-3d87-4933-ab50-bfc57646622d-inventory" (OuterVolumeSpecName: "inventory") pod "bea4e7e9-3d87-4933-ab50-bfc57646622d" (UID: "bea4e7e9-3d87-4933-ab50-bfc57646622d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:46:40 crc kubenswrapper[5043]: I1125 07:46:40.952309 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bea4e7e9-3d87-4933-ab50-bfc57646622d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:46:40 crc kubenswrapper[5043]: I1125 07:46:40.952349 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhjtt\" (UniqueName: \"kubernetes.io/projected/bea4e7e9-3d87-4933-ab50-bfc57646622d-kube-api-access-dhjtt\") on node \"crc\" DevicePath \"\"" Nov 25 07:46:40 crc kubenswrapper[5043]: I1125 07:46:40.952363 5043 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bea4e7e9-3d87-4933-ab50-bfc57646622d-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:46:41 crc kubenswrapper[5043]: I1125 07:46:41.241239 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9" event={"ID":"bea4e7e9-3d87-4933-ab50-bfc57646622d","Type":"ContainerDied","Data":"f622cb45cf8e0afb11d26fc0b37d86fcfae03073ed783c2b6cd8603b1772eae2"} Nov 25 07:46:41 crc kubenswrapper[5043]: I1125 07:46:41.241698 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f622cb45cf8e0afb11d26fc0b37d86fcfae03073ed783c2b6cd8603b1772eae2" Nov 25 07:46:41 crc kubenswrapper[5043]: I1125 07:46:41.241361 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9" Nov 25 07:46:50 crc kubenswrapper[5043]: I1125 07:46:50.976591 5043 scope.go:117] "RemoveContainer" containerID="09a4616f6b79a50e29307112f9cf545b0bca42c4532b28686880865375b7296d" Nov 25 07:48:47 crc kubenswrapper[5043]: I1125 07:48:47.277043 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:48:47 crc kubenswrapper[5043]: I1125 07:48:47.277617 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:49:17 crc kubenswrapper[5043]: I1125 07:49:17.276695 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:49:17 crc kubenswrapper[5043]: I1125 07:49:17.277285 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:49:47 crc kubenswrapper[5043]: I1125 07:49:47.276472 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 07:49:47 crc kubenswrapper[5043]: I1125 07:49:47.277898 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 07:49:47 crc kubenswrapper[5043]: I1125 07:49:47.277998 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 07:49:47 crc kubenswrapper[5043]: I1125 07:49:47.278709 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7b33dac25f840f4655adb9773800d9ac479fdf0da60d9d9474d21c037b7a5eed"} pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 07:49:47 crc kubenswrapper[5043]: I1125 07:49:47.278828 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://7b33dac25f840f4655adb9773800d9ac479fdf0da60d9d9474d21c037b7a5eed" gracePeriod=600 Nov 25 07:49:48 crc kubenswrapper[5043]: I1125 07:49:48.235740 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="7b33dac25f840f4655adb9773800d9ac479fdf0da60d9d9474d21c037b7a5eed" exitCode=0 Nov 25 07:49:48 crc kubenswrapper[5043]: I1125 07:49:48.235824 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" 
event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"7b33dac25f840f4655adb9773800d9ac479fdf0da60d9d9474d21c037b7a5eed"} Nov 25 07:49:48 crc kubenswrapper[5043]: I1125 07:49:48.236472 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590"} Nov 25 07:49:48 crc kubenswrapper[5043]: I1125 07:49:48.236503 5043 scope.go:117] "RemoveContainer" containerID="389d8440ec2678d50e5b497f796e903923dfe293e6dd96184c27d3fc7594a474" Nov 25 07:50:06 crc kubenswrapper[5043]: I1125 07:50:06.685601 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q7gvj"] Nov 25 07:50:06 crc kubenswrapper[5043]: E1125 07:50:06.686781 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea4e7e9-3d87-4933-ab50-bfc57646622d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:50:06 crc kubenswrapper[5043]: I1125 07:50:06.686796 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea4e7e9-3d87-4933-ab50-bfc57646622d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:50:06 crc kubenswrapper[5043]: I1125 07:50:06.687037 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="bea4e7e9-3d87-4933-ab50-bfc57646622d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:50:06 crc kubenswrapper[5043]: I1125 07:50:06.688542 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q7gvj" Nov 25 07:50:06 crc kubenswrapper[5043]: I1125 07:50:06.696145 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q7gvj"] Nov 25 07:50:06 crc kubenswrapper[5043]: I1125 07:50:06.751308 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbxwp\" (UniqueName: \"kubernetes.io/projected/e58138b5-d38c-45f2-9427-9bfddc009ff9-kube-api-access-pbxwp\") pod \"redhat-operators-q7gvj\" (UID: \"e58138b5-d38c-45f2-9427-9bfddc009ff9\") " pod="openshift-marketplace/redhat-operators-q7gvj" Nov 25 07:50:06 crc kubenswrapper[5043]: I1125 07:50:06.751475 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e58138b5-d38c-45f2-9427-9bfddc009ff9-catalog-content\") pod \"redhat-operators-q7gvj\" (UID: \"e58138b5-d38c-45f2-9427-9bfddc009ff9\") " pod="openshift-marketplace/redhat-operators-q7gvj" Nov 25 07:50:06 crc kubenswrapper[5043]: I1125 07:50:06.751639 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e58138b5-d38c-45f2-9427-9bfddc009ff9-utilities\") pod \"redhat-operators-q7gvj\" (UID: \"e58138b5-d38c-45f2-9427-9bfddc009ff9\") " pod="openshift-marketplace/redhat-operators-q7gvj" Nov 25 07:50:06 crc kubenswrapper[5043]: I1125 07:50:06.853973 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e58138b5-d38c-45f2-9427-9bfddc009ff9-utilities\") pod \"redhat-operators-q7gvj\" (UID: \"e58138b5-d38c-45f2-9427-9bfddc009ff9\") " pod="openshift-marketplace/redhat-operators-q7gvj" Nov 25 07:50:06 crc kubenswrapper[5043]: I1125 07:50:06.854684 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e58138b5-d38c-45f2-9427-9bfddc009ff9-utilities\") pod \"redhat-operators-q7gvj\" (UID: \"e58138b5-d38c-45f2-9427-9bfddc009ff9\") " pod="openshift-marketplace/redhat-operators-q7gvj" Nov 25 07:50:06 crc kubenswrapper[5043]: I1125 07:50:06.855219 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbxwp\" (UniqueName: \"kubernetes.io/projected/e58138b5-d38c-45f2-9427-9bfddc009ff9-kube-api-access-pbxwp\") pod \"redhat-operators-q7gvj\" (UID: \"e58138b5-d38c-45f2-9427-9bfddc009ff9\") " pod="openshift-marketplace/redhat-operators-q7gvj" Nov 25 07:50:06 crc kubenswrapper[5043]: I1125 07:50:06.855709 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e58138b5-d38c-45f2-9427-9bfddc009ff9-catalog-content\") pod \"redhat-operators-q7gvj\" (UID: \"e58138b5-d38c-45f2-9427-9bfddc009ff9\") " pod="openshift-marketplace/redhat-operators-q7gvj" Nov 25 07:50:06 crc kubenswrapper[5043]: I1125 07:50:06.856079 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e58138b5-d38c-45f2-9427-9bfddc009ff9-catalog-content\") pod \"redhat-operators-q7gvj\" (UID: \"e58138b5-d38c-45f2-9427-9bfddc009ff9\") " pod="openshift-marketplace/redhat-operators-q7gvj" Nov 25 07:50:06 crc kubenswrapper[5043]: I1125 07:50:06.875136 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbxwp\" (UniqueName: \"kubernetes.io/projected/e58138b5-d38c-45f2-9427-9bfddc009ff9-kube-api-access-pbxwp\") pod \"redhat-operators-q7gvj\" (UID: \"e58138b5-d38c-45f2-9427-9bfddc009ff9\") " pod="openshift-marketplace/redhat-operators-q7gvj" Nov 25 07:50:07 crc kubenswrapper[5043]: I1125 07:50:07.021393 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q7gvj" Nov 25 07:50:07 crc kubenswrapper[5043]: I1125 07:50:07.517022 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q7gvj"] Nov 25 07:50:08 crc kubenswrapper[5043]: I1125 07:50:08.440077 5043 generic.go:334] "Generic (PLEG): container finished" podID="e58138b5-d38c-45f2-9427-9bfddc009ff9" containerID="856cf1e2d63fe0dbe95c443b6fed2abe64e8997cf6825a4fe407533ad0f2f14b" exitCode=0 Nov 25 07:50:08 crc kubenswrapper[5043]: I1125 07:50:08.440131 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7gvj" event={"ID":"e58138b5-d38c-45f2-9427-9bfddc009ff9","Type":"ContainerDied","Data":"856cf1e2d63fe0dbe95c443b6fed2abe64e8997cf6825a4fe407533ad0f2f14b"} Nov 25 07:50:08 crc kubenswrapper[5043]: I1125 07:50:08.440346 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7gvj" event={"ID":"e58138b5-d38c-45f2-9427-9bfddc009ff9","Type":"ContainerStarted","Data":"c90f1c761f9422434df32dfacd411aa9929f1ec2cebea2e5eb5712540d9d77d1"} Nov 25 07:50:08 crc kubenswrapper[5043]: I1125 07:50:08.444239 5043 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 07:50:09 crc kubenswrapper[5043]: I1125 07:50:09.453516 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7gvj" event={"ID":"e58138b5-d38c-45f2-9427-9bfddc009ff9","Type":"ContainerStarted","Data":"77c5939544187ccba52dbffb9f256ba1801ee8106fa2eb34c7187e48e5849782"} Nov 25 07:50:10 crc kubenswrapper[5043]: I1125 07:50:10.467902 5043 generic.go:334] "Generic (PLEG): container finished" podID="e58138b5-d38c-45f2-9427-9bfddc009ff9" containerID="77c5939544187ccba52dbffb9f256ba1801ee8106fa2eb34c7187e48e5849782" exitCode=0 Nov 25 07:50:10 crc kubenswrapper[5043]: I1125 07:50:10.467954 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-q7gvj" event={"ID":"e58138b5-d38c-45f2-9427-9bfddc009ff9","Type":"ContainerDied","Data":"77c5939544187ccba52dbffb9f256ba1801ee8106fa2eb34c7187e48e5849782"} Nov 25 07:50:11 crc kubenswrapper[5043]: I1125 07:50:11.480065 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7gvj" event={"ID":"e58138b5-d38c-45f2-9427-9bfddc009ff9","Type":"ContainerStarted","Data":"16330e9f4b270fac5fc44a84e45f899deb972b3d19e9c55a438a7cf8d3ac1249"} Nov 25 07:50:16 crc kubenswrapper[5043]: I1125 07:50:16.325586 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q7gvj" podStartSLOduration=7.889190615 podStartE2EDuration="10.325560745s" podCreationTimestamp="2025-11-25 07:50:06 +0000 UTC" firstStartedPulling="2025-11-25 07:50:08.444016897 +0000 UTC m=+2072.612212618" lastFinishedPulling="2025-11-25 07:50:10.880387017 +0000 UTC m=+2075.048582748" observedRunningTime="2025-11-25 07:50:11.503462397 +0000 UTC m=+2075.671658118" watchObservedRunningTime="2025-11-25 07:50:16.325560745 +0000 UTC m=+2080.493756496" Nov 25 07:50:16 crc kubenswrapper[5043]: I1125 07:50:16.333049 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-98q4n"] Nov 25 07:50:16 crc kubenswrapper[5043]: I1125 07:50:16.335130 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-98q4n" Nov 25 07:50:16 crc kubenswrapper[5043]: I1125 07:50:16.340895 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-98q4n"] Nov 25 07:50:16 crc kubenswrapper[5043]: I1125 07:50:16.429195 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f9baa6b-bc1e-4591-bc6b-df75740bdc75-catalog-content\") pod \"community-operators-98q4n\" (UID: \"1f9baa6b-bc1e-4591-bc6b-df75740bdc75\") " pod="openshift-marketplace/community-operators-98q4n" Nov 25 07:50:16 crc kubenswrapper[5043]: I1125 07:50:16.429316 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9baa6b-bc1e-4591-bc6b-df75740bdc75-utilities\") pod \"community-operators-98q4n\" (UID: \"1f9baa6b-bc1e-4591-bc6b-df75740bdc75\") " pod="openshift-marketplace/community-operators-98q4n" Nov 25 07:50:16 crc kubenswrapper[5043]: I1125 07:50:16.429340 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc7kt\" (UniqueName: \"kubernetes.io/projected/1f9baa6b-bc1e-4591-bc6b-df75740bdc75-kube-api-access-nc7kt\") pod \"community-operators-98q4n\" (UID: \"1f9baa6b-bc1e-4591-bc6b-df75740bdc75\") " pod="openshift-marketplace/community-operators-98q4n" Nov 25 07:50:16 crc kubenswrapper[5043]: I1125 07:50:16.531073 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9baa6b-bc1e-4591-bc6b-df75740bdc75-utilities\") pod \"community-operators-98q4n\" (UID: \"1f9baa6b-bc1e-4591-bc6b-df75740bdc75\") " pod="openshift-marketplace/community-operators-98q4n" Nov 25 07:50:16 crc kubenswrapper[5043]: I1125 07:50:16.531134 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nc7kt\" (UniqueName: \"kubernetes.io/projected/1f9baa6b-bc1e-4591-bc6b-df75740bdc75-kube-api-access-nc7kt\") pod \"community-operators-98q4n\" (UID: \"1f9baa6b-bc1e-4591-bc6b-df75740bdc75\") " pod="openshift-marketplace/community-operators-98q4n" Nov 25 07:50:16 crc kubenswrapper[5043]: I1125 07:50:16.531235 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f9baa6b-bc1e-4591-bc6b-df75740bdc75-catalog-content\") pod \"community-operators-98q4n\" (UID: \"1f9baa6b-bc1e-4591-bc6b-df75740bdc75\") " pod="openshift-marketplace/community-operators-98q4n" Nov 25 07:50:16 crc kubenswrapper[5043]: I1125 07:50:16.531713 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f9baa6b-bc1e-4591-bc6b-df75740bdc75-catalog-content\") pod \"community-operators-98q4n\" (UID: \"1f9baa6b-bc1e-4591-bc6b-df75740bdc75\") " pod="openshift-marketplace/community-operators-98q4n" Nov 25 07:50:16 crc kubenswrapper[5043]: I1125 07:50:16.531729 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9baa6b-bc1e-4591-bc6b-df75740bdc75-utilities\") pod \"community-operators-98q4n\" (UID: \"1f9baa6b-bc1e-4591-bc6b-df75740bdc75\") " pod="openshift-marketplace/community-operators-98q4n" Nov 25 07:50:16 crc kubenswrapper[5043]: I1125 07:50:16.552664 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc7kt\" (UniqueName: \"kubernetes.io/projected/1f9baa6b-bc1e-4591-bc6b-df75740bdc75-kube-api-access-nc7kt\") pod \"community-operators-98q4n\" (UID: \"1f9baa6b-bc1e-4591-bc6b-df75740bdc75\") " pod="openshift-marketplace/community-operators-98q4n" Nov 25 07:50:16 crc kubenswrapper[5043]: I1125 07:50:16.670600 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-98q4n" Nov 25 07:50:17 crc kubenswrapper[5043]: I1125 07:50:17.022516 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q7gvj" Nov 25 07:50:17 crc kubenswrapper[5043]: I1125 07:50:17.022953 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q7gvj" Nov 25 07:50:17 crc kubenswrapper[5043]: I1125 07:50:17.091662 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q7gvj" Nov 25 07:50:17 crc kubenswrapper[5043]: I1125 07:50:17.220287 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-98q4n"] Nov 25 07:50:17 crc kubenswrapper[5043]: I1125 07:50:17.537439 5043 generic.go:334] "Generic (PLEG): container finished" podID="1f9baa6b-bc1e-4591-bc6b-df75740bdc75" containerID="2522cb0fa8fe58c1365a9a04758aafdb07330609dda8aa9bb1e08fe576f79891" exitCode=0 Nov 25 07:50:17 crc kubenswrapper[5043]: I1125 07:50:17.537528 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98q4n" event={"ID":"1f9baa6b-bc1e-4591-bc6b-df75740bdc75","Type":"ContainerDied","Data":"2522cb0fa8fe58c1365a9a04758aafdb07330609dda8aa9bb1e08fe576f79891"} Nov 25 07:50:17 crc kubenswrapper[5043]: I1125 07:50:17.537785 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98q4n" event={"ID":"1f9baa6b-bc1e-4591-bc6b-df75740bdc75","Type":"ContainerStarted","Data":"b2de34578bd2ce55fb45523c33aa34f5a13b3fb92e28c829865250269a2343bc"} Nov 25 07:50:17 crc kubenswrapper[5043]: I1125 07:50:17.590031 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q7gvj" Nov 25 07:50:19 crc kubenswrapper[5043]: I1125 07:50:19.509825 5043 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q7gvj"] Nov 25 07:50:19 crc kubenswrapper[5043]: I1125 07:50:19.558262 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q7gvj" podUID="e58138b5-d38c-45f2-9427-9bfddc009ff9" containerName="registry-server" containerID="cri-o://16330e9f4b270fac5fc44a84e45f899deb972b3d19e9c55a438a7cf8d3ac1249" gracePeriod=2 Nov 25 07:50:19 crc kubenswrapper[5043]: I1125 07:50:19.558779 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98q4n" event={"ID":"1f9baa6b-bc1e-4591-bc6b-df75740bdc75","Type":"ContainerStarted","Data":"34873e6c2c1ad7e8a82e24841dbbdb5650f756097dff9abf5fe7316ef1abd09f"} Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.512189 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q7gvj" Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.569110 5043 generic.go:334] "Generic (PLEG): container finished" podID="1f9baa6b-bc1e-4591-bc6b-df75740bdc75" containerID="34873e6c2c1ad7e8a82e24841dbbdb5650f756097dff9abf5fe7316ef1abd09f" exitCode=0 Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.569198 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98q4n" event={"ID":"1f9baa6b-bc1e-4591-bc6b-df75740bdc75","Type":"ContainerDied","Data":"34873e6c2c1ad7e8a82e24841dbbdb5650f756097dff9abf5fe7316ef1abd09f"} Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.573906 5043 generic.go:334] "Generic (PLEG): container finished" podID="e58138b5-d38c-45f2-9427-9bfddc009ff9" containerID="16330e9f4b270fac5fc44a84e45f899deb972b3d19e9c55a438a7cf8d3ac1249" exitCode=0 Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.573949 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7gvj" 
event={"ID":"e58138b5-d38c-45f2-9427-9bfddc009ff9","Type":"ContainerDied","Data":"16330e9f4b270fac5fc44a84e45f899deb972b3d19e9c55a438a7cf8d3ac1249"} Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.573983 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7gvj" event={"ID":"e58138b5-d38c-45f2-9427-9bfddc009ff9","Type":"ContainerDied","Data":"c90f1c761f9422434df32dfacd411aa9929f1ec2cebea2e5eb5712540d9d77d1"} Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.574007 5043 scope.go:117] "RemoveContainer" containerID="16330e9f4b270fac5fc44a84e45f899deb972b3d19e9c55a438a7cf8d3ac1249" Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.574071 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q7gvj" Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.601846 5043 scope.go:117] "RemoveContainer" containerID="77c5939544187ccba52dbffb9f256ba1801ee8106fa2eb34c7187e48e5849782" Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.610067 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbxwp\" (UniqueName: \"kubernetes.io/projected/e58138b5-d38c-45f2-9427-9bfddc009ff9-kube-api-access-pbxwp\") pod \"e58138b5-d38c-45f2-9427-9bfddc009ff9\" (UID: \"e58138b5-d38c-45f2-9427-9bfddc009ff9\") " Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.610313 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e58138b5-d38c-45f2-9427-9bfddc009ff9-catalog-content\") pod \"e58138b5-d38c-45f2-9427-9bfddc009ff9\" (UID: \"e58138b5-d38c-45f2-9427-9bfddc009ff9\") " Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.610367 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e58138b5-d38c-45f2-9427-9bfddc009ff9-utilities\") pod 
\"e58138b5-d38c-45f2-9427-9bfddc009ff9\" (UID: \"e58138b5-d38c-45f2-9427-9bfddc009ff9\") " Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.612012 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e58138b5-d38c-45f2-9427-9bfddc009ff9-utilities" (OuterVolumeSpecName: "utilities") pod "e58138b5-d38c-45f2-9427-9bfddc009ff9" (UID: "e58138b5-d38c-45f2-9427-9bfddc009ff9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.618433 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e58138b5-d38c-45f2-9427-9bfddc009ff9-kube-api-access-pbxwp" (OuterVolumeSpecName: "kube-api-access-pbxwp") pod "e58138b5-d38c-45f2-9427-9bfddc009ff9" (UID: "e58138b5-d38c-45f2-9427-9bfddc009ff9"). InnerVolumeSpecName "kube-api-access-pbxwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.620469 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e58138b5-d38c-45f2-9427-9bfddc009ff9-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.620503 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbxwp\" (UniqueName: \"kubernetes.io/projected/e58138b5-d38c-45f2-9427-9bfddc009ff9-kube-api-access-pbxwp\") on node \"crc\" DevicePath \"\"" Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.625681 5043 scope.go:117] "RemoveContainer" containerID="856cf1e2d63fe0dbe95c443b6fed2abe64e8997cf6825a4fe407533ad0f2f14b" Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.718864 5043 scope.go:117] "RemoveContainer" containerID="16330e9f4b270fac5fc44a84e45f899deb972b3d19e9c55a438a7cf8d3ac1249" Nov 25 07:50:20 crc kubenswrapper[5043]: E1125 07:50:20.719913 5043 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"16330e9f4b270fac5fc44a84e45f899deb972b3d19e9c55a438a7cf8d3ac1249\": container with ID starting with 16330e9f4b270fac5fc44a84e45f899deb972b3d19e9c55a438a7cf8d3ac1249 not found: ID does not exist" containerID="16330e9f4b270fac5fc44a84e45f899deb972b3d19e9c55a438a7cf8d3ac1249" Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.719955 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16330e9f4b270fac5fc44a84e45f899deb972b3d19e9c55a438a7cf8d3ac1249"} err="failed to get container status \"16330e9f4b270fac5fc44a84e45f899deb972b3d19e9c55a438a7cf8d3ac1249\": rpc error: code = NotFound desc = could not find container \"16330e9f4b270fac5fc44a84e45f899deb972b3d19e9c55a438a7cf8d3ac1249\": container with ID starting with 16330e9f4b270fac5fc44a84e45f899deb972b3d19e9c55a438a7cf8d3ac1249 not found: ID does not exist" Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.719981 5043 scope.go:117] "RemoveContainer" containerID="77c5939544187ccba52dbffb9f256ba1801ee8106fa2eb34c7187e48e5849782" Nov 25 07:50:20 crc kubenswrapper[5043]: E1125 07:50:20.720371 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77c5939544187ccba52dbffb9f256ba1801ee8106fa2eb34c7187e48e5849782\": container with ID starting with 77c5939544187ccba52dbffb9f256ba1801ee8106fa2eb34c7187e48e5849782 not found: ID does not exist" containerID="77c5939544187ccba52dbffb9f256ba1801ee8106fa2eb34c7187e48e5849782" Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.720405 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77c5939544187ccba52dbffb9f256ba1801ee8106fa2eb34c7187e48e5849782"} err="failed to get container status \"77c5939544187ccba52dbffb9f256ba1801ee8106fa2eb34c7187e48e5849782\": rpc error: code = NotFound desc = could not find container 
\"77c5939544187ccba52dbffb9f256ba1801ee8106fa2eb34c7187e48e5849782\": container with ID starting with 77c5939544187ccba52dbffb9f256ba1801ee8106fa2eb34c7187e48e5849782 not found: ID does not exist" Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.720432 5043 scope.go:117] "RemoveContainer" containerID="856cf1e2d63fe0dbe95c443b6fed2abe64e8997cf6825a4fe407533ad0f2f14b" Nov 25 07:50:20 crc kubenswrapper[5043]: E1125 07:50:20.720669 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"856cf1e2d63fe0dbe95c443b6fed2abe64e8997cf6825a4fe407533ad0f2f14b\": container with ID starting with 856cf1e2d63fe0dbe95c443b6fed2abe64e8997cf6825a4fe407533ad0f2f14b not found: ID does not exist" containerID="856cf1e2d63fe0dbe95c443b6fed2abe64e8997cf6825a4fe407533ad0f2f14b" Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.720690 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"856cf1e2d63fe0dbe95c443b6fed2abe64e8997cf6825a4fe407533ad0f2f14b"} err="failed to get container status \"856cf1e2d63fe0dbe95c443b6fed2abe64e8997cf6825a4fe407533ad0f2f14b\": rpc error: code = NotFound desc = could not find container \"856cf1e2d63fe0dbe95c443b6fed2abe64e8997cf6825a4fe407533ad0f2f14b\": container with ID starting with 856cf1e2d63fe0dbe95c443b6fed2abe64e8997cf6825a4fe407533ad0f2f14b not found: ID does not exist" Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.738153 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e58138b5-d38c-45f2-9427-9bfddc009ff9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e58138b5-d38c-45f2-9427-9bfddc009ff9" (UID: "e58138b5-d38c-45f2-9427-9bfddc009ff9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.823690 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e58138b5-d38c-45f2-9427-9bfddc009ff9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.924060 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q7gvj"] Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.932012 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q7gvj"] Nov 25 07:50:20 crc kubenswrapper[5043]: I1125 07:50:20.973825 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e58138b5-d38c-45f2-9427-9bfddc009ff9" path="/var/lib/kubelet/pods/e58138b5-d38c-45f2-9427-9bfddc009ff9/volumes" Nov 25 07:50:22 crc kubenswrapper[5043]: I1125 07:50:22.597387 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98q4n" event={"ID":"1f9baa6b-bc1e-4591-bc6b-df75740bdc75","Type":"ContainerStarted","Data":"04e39b961be85eebec1494874a79b2ae2271ea61477e03baea41d2435f3017d7"} Nov 25 07:50:22 crc kubenswrapper[5043]: I1125 07:50:22.618798 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-98q4n" podStartSLOduration=2.646299563 podStartE2EDuration="6.618781933s" podCreationTimestamp="2025-11-25 07:50:16 +0000 UTC" firstStartedPulling="2025-11-25 07:50:17.540327999 +0000 UTC m=+2081.708523720" lastFinishedPulling="2025-11-25 07:50:21.512810369 +0000 UTC m=+2085.681006090" observedRunningTime="2025-11-25 07:50:22.617855258 +0000 UTC m=+2086.786050979" watchObservedRunningTime="2025-11-25 07:50:22.618781933 +0000 UTC m=+2086.786977654" Nov 25 07:50:26 crc kubenswrapper[5043]: I1125 07:50:26.671708 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-98q4n" Nov 25 07:50:26 crc kubenswrapper[5043]: I1125 07:50:26.672292 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-98q4n" Nov 25 07:50:26 crc kubenswrapper[5043]: I1125 07:50:26.744931 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-98q4n" Nov 25 07:50:27 crc kubenswrapper[5043]: I1125 07:50:27.700805 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-98q4n" Nov 25 07:50:27 crc kubenswrapper[5043]: I1125 07:50:27.757808 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-98q4n"] Nov 25 07:50:29 crc kubenswrapper[5043]: I1125 07:50:29.662799 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-98q4n" podUID="1f9baa6b-bc1e-4591-bc6b-df75740bdc75" containerName="registry-server" containerID="cri-o://04e39b961be85eebec1494874a79b2ae2271ea61477e03baea41d2435f3017d7" gracePeriod=2 Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.095917 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-98q4n" Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.219266 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f9baa6b-bc1e-4591-bc6b-df75740bdc75-catalog-content\") pod \"1f9baa6b-bc1e-4591-bc6b-df75740bdc75\" (UID: \"1f9baa6b-bc1e-4591-bc6b-df75740bdc75\") " Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.219389 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9baa6b-bc1e-4591-bc6b-df75740bdc75-utilities\") pod \"1f9baa6b-bc1e-4591-bc6b-df75740bdc75\" (UID: \"1f9baa6b-bc1e-4591-bc6b-df75740bdc75\") " Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.219507 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc7kt\" (UniqueName: \"kubernetes.io/projected/1f9baa6b-bc1e-4591-bc6b-df75740bdc75-kube-api-access-nc7kt\") pod \"1f9baa6b-bc1e-4591-bc6b-df75740bdc75\" (UID: \"1f9baa6b-bc1e-4591-bc6b-df75740bdc75\") " Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.221191 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f9baa6b-bc1e-4591-bc6b-df75740bdc75-utilities" (OuterVolumeSpecName: "utilities") pod "1f9baa6b-bc1e-4591-bc6b-df75740bdc75" (UID: "1f9baa6b-bc1e-4591-bc6b-df75740bdc75"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.225219 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f9baa6b-bc1e-4591-bc6b-df75740bdc75-kube-api-access-nc7kt" (OuterVolumeSpecName: "kube-api-access-nc7kt") pod "1f9baa6b-bc1e-4591-bc6b-df75740bdc75" (UID: "1f9baa6b-bc1e-4591-bc6b-df75740bdc75"). InnerVolumeSpecName "kube-api-access-nc7kt". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.283733 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f9baa6b-bc1e-4591-bc6b-df75740bdc75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f9baa6b-bc1e-4591-bc6b-df75740bdc75" (UID: "1f9baa6b-bc1e-4591-bc6b-df75740bdc75"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.322170 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc7kt\" (UniqueName: \"kubernetes.io/projected/1f9baa6b-bc1e-4591-bc6b-df75740bdc75-kube-api-access-nc7kt\") on node \"crc\" DevicePath \"\""
Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.322202 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f9baa6b-bc1e-4591-bc6b-df75740bdc75-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.322212 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f9baa6b-bc1e-4591-bc6b-df75740bdc75-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.678983 5043 generic.go:334] "Generic (PLEG): container finished" podID="1f9baa6b-bc1e-4591-bc6b-df75740bdc75" containerID="04e39b961be85eebec1494874a79b2ae2271ea61477e03baea41d2435f3017d7" exitCode=0
Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.679063 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98q4n" event={"ID":"1f9baa6b-bc1e-4591-bc6b-df75740bdc75","Type":"ContainerDied","Data":"04e39b961be85eebec1494874a79b2ae2271ea61477e03baea41d2435f3017d7"}
Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.679117 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-98q4n" event={"ID":"1f9baa6b-bc1e-4591-bc6b-df75740bdc75","Type":"ContainerDied","Data":"b2de34578bd2ce55fb45523c33aa34f5a13b3fb92e28c829865250269a2343bc"}
Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.679134 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-98q4n"
Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.679161 5043 scope.go:117] "RemoveContainer" containerID="04e39b961be85eebec1494874a79b2ae2271ea61477e03baea41d2435f3017d7"
Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.714923 5043 scope.go:117] "RemoveContainer" containerID="34873e6c2c1ad7e8a82e24841dbbdb5650f756097dff9abf5fe7316ef1abd09f"
Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.733734 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-98q4n"]
Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.741794 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-98q4n"]
Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.757983 5043 scope.go:117] "RemoveContainer" containerID="2522cb0fa8fe58c1365a9a04758aafdb07330609dda8aa9bb1e08fe576f79891"
Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.794974 5043 scope.go:117] "RemoveContainer" containerID="04e39b961be85eebec1494874a79b2ae2271ea61477e03baea41d2435f3017d7"
Nov 25 07:50:30 crc kubenswrapper[5043]: E1125 07:50:30.796122 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04e39b961be85eebec1494874a79b2ae2271ea61477e03baea41d2435f3017d7\": container with ID starting with 04e39b961be85eebec1494874a79b2ae2271ea61477e03baea41d2435f3017d7 not found: ID does not exist" containerID="04e39b961be85eebec1494874a79b2ae2271ea61477e03baea41d2435f3017d7"
Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.796256 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04e39b961be85eebec1494874a79b2ae2271ea61477e03baea41d2435f3017d7"} err="failed to get container status \"04e39b961be85eebec1494874a79b2ae2271ea61477e03baea41d2435f3017d7\": rpc error: code = NotFound desc = could not find container \"04e39b961be85eebec1494874a79b2ae2271ea61477e03baea41d2435f3017d7\": container with ID starting with 04e39b961be85eebec1494874a79b2ae2271ea61477e03baea41d2435f3017d7 not found: ID does not exist"
Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.796296 5043 scope.go:117] "RemoveContainer" containerID="34873e6c2c1ad7e8a82e24841dbbdb5650f756097dff9abf5fe7316ef1abd09f"
Nov 25 07:50:30 crc kubenswrapper[5043]: E1125 07:50:30.796892 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34873e6c2c1ad7e8a82e24841dbbdb5650f756097dff9abf5fe7316ef1abd09f\": container with ID starting with 34873e6c2c1ad7e8a82e24841dbbdb5650f756097dff9abf5fe7316ef1abd09f not found: ID does not exist" containerID="34873e6c2c1ad7e8a82e24841dbbdb5650f756097dff9abf5fe7316ef1abd09f"
Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.796936 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34873e6c2c1ad7e8a82e24841dbbdb5650f756097dff9abf5fe7316ef1abd09f"} err="failed to get container status \"34873e6c2c1ad7e8a82e24841dbbdb5650f756097dff9abf5fe7316ef1abd09f\": rpc error: code = NotFound desc = could not find container \"34873e6c2c1ad7e8a82e24841dbbdb5650f756097dff9abf5fe7316ef1abd09f\": container with ID starting with 34873e6c2c1ad7e8a82e24841dbbdb5650f756097dff9abf5fe7316ef1abd09f not found: ID does not exist"
Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.796964 5043 scope.go:117] "RemoveContainer" containerID="2522cb0fa8fe58c1365a9a04758aafdb07330609dda8aa9bb1e08fe576f79891"
Nov 25 07:50:30 crc kubenswrapper[5043]: E1125 07:50:30.797456 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2522cb0fa8fe58c1365a9a04758aafdb07330609dda8aa9bb1e08fe576f79891\": container with ID starting with 2522cb0fa8fe58c1365a9a04758aafdb07330609dda8aa9bb1e08fe576f79891 not found: ID does not exist" containerID="2522cb0fa8fe58c1365a9a04758aafdb07330609dda8aa9bb1e08fe576f79891"
Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.797503 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2522cb0fa8fe58c1365a9a04758aafdb07330609dda8aa9bb1e08fe576f79891"} err="failed to get container status \"2522cb0fa8fe58c1365a9a04758aafdb07330609dda8aa9bb1e08fe576f79891\": rpc error: code = NotFound desc = could not find container \"2522cb0fa8fe58c1365a9a04758aafdb07330609dda8aa9bb1e08fe576f79891\": container with ID starting with 2522cb0fa8fe58c1365a9a04758aafdb07330609dda8aa9bb1e08fe576f79891 not found: ID does not exist"
Nov 25 07:50:30 crc kubenswrapper[5043]: I1125 07:50:30.975371 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f9baa6b-bc1e-4591-bc6b-df75740bdc75" path="/var/lib/kubelet/pods/1f9baa6b-bc1e-4591-bc6b-df75740bdc75/volumes"
Nov 25 07:51:44 crc kubenswrapper[5043]: I1125 07:51:44.124400 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nmwsc"]
Nov 25 07:51:44 crc kubenswrapper[5043]: E1125 07:51:44.125820 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e58138b5-d38c-45f2-9427-9bfddc009ff9" containerName="extract-content"
Nov 25 07:51:44 crc kubenswrapper[5043]: I1125 07:51:44.125847 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e58138b5-d38c-45f2-9427-9bfddc009ff9" containerName="extract-content"
Nov 25 07:51:44 crc kubenswrapper[5043]: E1125 07:51:44.125886 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e58138b5-d38c-45f2-9427-9bfddc009ff9" containerName="registry-server"
Nov 25 07:51:44 crc kubenswrapper[5043]: I1125 07:51:44.125895 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e58138b5-d38c-45f2-9427-9bfddc009ff9" containerName="registry-server"
Nov 25 07:51:44 crc kubenswrapper[5043]: E1125 07:51:44.125918 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f9baa6b-bc1e-4591-bc6b-df75740bdc75" containerName="extract-utilities"
Nov 25 07:51:44 crc kubenswrapper[5043]: I1125 07:51:44.125928 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f9baa6b-bc1e-4591-bc6b-df75740bdc75" containerName="extract-utilities"
Nov 25 07:51:44 crc kubenswrapper[5043]: E1125 07:51:44.125945 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f9baa6b-bc1e-4591-bc6b-df75740bdc75" containerName="registry-server"
Nov 25 07:51:44 crc kubenswrapper[5043]: I1125 07:51:44.125954 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f9baa6b-bc1e-4591-bc6b-df75740bdc75" containerName="registry-server"
Nov 25 07:51:44 crc kubenswrapper[5043]: E1125 07:51:44.125967 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e58138b5-d38c-45f2-9427-9bfddc009ff9" containerName="extract-utilities"
Nov 25 07:51:44 crc kubenswrapper[5043]: I1125 07:51:44.125977 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e58138b5-d38c-45f2-9427-9bfddc009ff9" containerName="extract-utilities"
Nov 25 07:51:44 crc kubenswrapper[5043]: E1125 07:51:44.125989 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f9baa6b-bc1e-4591-bc6b-df75740bdc75" containerName="extract-content"
Nov 25 07:51:44 crc kubenswrapper[5043]: I1125 07:51:44.125997 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f9baa6b-bc1e-4591-bc6b-df75740bdc75" containerName="extract-content"
Nov 25 07:51:44 crc kubenswrapper[5043]: I1125 07:51:44.126235 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="e58138b5-d38c-45f2-9427-9bfddc009ff9" containerName="registry-server"
Nov 25 07:51:44 crc kubenswrapper[5043]: I1125 07:51:44.126263 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f9baa6b-bc1e-4591-bc6b-df75740bdc75" containerName="registry-server"
Nov 25 07:51:44 crc kubenswrapper[5043]: I1125 07:51:44.128357 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmwsc"
Nov 25 07:51:44 crc kubenswrapper[5043]: I1125 07:51:44.147646 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmwsc"]
Nov 25 07:51:44 crc kubenswrapper[5043]: I1125 07:51:44.187204 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmhn2\" (UniqueName: \"kubernetes.io/projected/2d7caa5d-fca9-48a8-a049-0c6bc4632d20-kube-api-access-mmhn2\") pod \"redhat-marketplace-nmwsc\" (UID: \"2d7caa5d-fca9-48a8-a049-0c6bc4632d20\") " pod="openshift-marketplace/redhat-marketplace-nmwsc"
Nov 25 07:51:44 crc kubenswrapper[5043]: I1125 07:51:44.187334 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d7caa5d-fca9-48a8-a049-0c6bc4632d20-utilities\") pod \"redhat-marketplace-nmwsc\" (UID: \"2d7caa5d-fca9-48a8-a049-0c6bc4632d20\") " pod="openshift-marketplace/redhat-marketplace-nmwsc"
Nov 25 07:51:44 crc kubenswrapper[5043]: I1125 07:51:44.187548 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d7caa5d-fca9-48a8-a049-0c6bc4632d20-catalog-content\") pod \"redhat-marketplace-nmwsc\" (UID: \"2d7caa5d-fca9-48a8-a049-0c6bc4632d20\") " pod="openshift-marketplace/redhat-marketplace-nmwsc"
Nov 25 07:51:44 crc kubenswrapper[5043]: I1125 07:51:44.288742 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmhn2\" (UniqueName: \"kubernetes.io/projected/2d7caa5d-fca9-48a8-a049-0c6bc4632d20-kube-api-access-mmhn2\") pod \"redhat-marketplace-nmwsc\" (UID: \"2d7caa5d-fca9-48a8-a049-0c6bc4632d20\") " pod="openshift-marketplace/redhat-marketplace-nmwsc"
Nov 25 07:51:44 crc kubenswrapper[5043]: I1125 07:51:44.288817 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d7caa5d-fca9-48a8-a049-0c6bc4632d20-utilities\") pod \"redhat-marketplace-nmwsc\" (UID: \"2d7caa5d-fca9-48a8-a049-0c6bc4632d20\") " pod="openshift-marketplace/redhat-marketplace-nmwsc"
Nov 25 07:51:44 crc kubenswrapper[5043]: I1125 07:51:44.288926 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d7caa5d-fca9-48a8-a049-0c6bc4632d20-catalog-content\") pod \"redhat-marketplace-nmwsc\" (UID: \"2d7caa5d-fca9-48a8-a049-0c6bc4632d20\") " pod="openshift-marketplace/redhat-marketplace-nmwsc"
Nov 25 07:51:44 crc kubenswrapper[5043]: I1125 07:51:44.289443 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d7caa5d-fca9-48a8-a049-0c6bc4632d20-catalog-content\") pod \"redhat-marketplace-nmwsc\" (UID: \"2d7caa5d-fca9-48a8-a049-0c6bc4632d20\") " pod="openshift-marketplace/redhat-marketplace-nmwsc"
Nov 25 07:51:44 crc kubenswrapper[5043]: I1125 07:51:44.289585 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d7caa5d-fca9-48a8-a049-0c6bc4632d20-utilities\") pod \"redhat-marketplace-nmwsc\" (UID: \"2d7caa5d-fca9-48a8-a049-0c6bc4632d20\") " pod="openshift-marketplace/redhat-marketplace-nmwsc"
Nov 25 07:51:44 crc kubenswrapper[5043]: I1125 07:51:44.308544 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmhn2\" (UniqueName: \"kubernetes.io/projected/2d7caa5d-fca9-48a8-a049-0c6bc4632d20-kube-api-access-mmhn2\") pod \"redhat-marketplace-nmwsc\" (UID: \"2d7caa5d-fca9-48a8-a049-0c6bc4632d20\") " pod="openshift-marketplace/redhat-marketplace-nmwsc"
Nov 25 07:51:44 crc kubenswrapper[5043]: I1125 07:51:44.457583 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmwsc"
Nov 25 07:51:44 crc kubenswrapper[5043]: I1125 07:51:44.899935 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmwsc"]
Nov 25 07:51:45 crc kubenswrapper[5043]: I1125 07:51:45.071486 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmwsc" event={"ID":"2d7caa5d-fca9-48a8-a049-0c6bc4632d20","Type":"ContainerStarted","Data":"2ec8aca2eaa6512f970d6efd578026075f24a8ee22a94d80edbae7c78b413867"}
Nov 25 07:51:46 crc kubenswrapper[5043]: I1125 07:51:46.080474 5043 generic.go:334] "Generic (PLEG): container finished" podID="2d7caa5d-fca9-48a8-a049-0c6bc4632d20" containerID="308cac89d402ad957e40e10ef3d21435db0941b923cbba5094e48409e923871e" exitCode=0
Nov 25 07:51:46 crc kubenswrapper[5043]: I1125 07:51:46.080517 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmwsc" event={"ID":"2d7caa5d-fca9-48a8-a049-0c6bc4632d20","Type":"ContainerDied","Data":"308cac89d402ad957e40e10ef3d21435db0941b923cbba5094e48409e923871e"}
Nov 25 07:51:47 crc kubenswrapper[5043]: I1125 07:51:47.276999 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 07:51:47 crc kubenswrapper[5043]: I1125 07:51:47.277593 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 07:51:48 crc kubenswrapper[5043]: I1125 07:51:48.103839 5043 generic.go:334] "Generic (PLEG): container finished" podID="2d7caa5d-fca9-48a8-a049-0c6bc4632d20" containerID="232c0259197a20f199d96f7fbf68935ef1d195319dd6e48f5374ee85ab5608e6" exitCode=0
Nov 25 07:51:48 crc kubenswrapper[5043]: I1125 07:51:48.103897 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmwsc" event={"ID":"2d7caa5d-fca9-48a8-a049-0c6bc4632d20","Type":"ContainerDied","Data":"232c0259197a20f199d96f7fbf68935ef1d195319dd6e48f5374ee85ab5608e6"}
Nov 25 07:51:49 crc kubenswrapper[5043]: I1125 07:51:49.115425 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmwsc" event={"ID":"2d7caa5d-fca9-48a8-a049-0c6bc4632d20","Type":"ContainerStarted","Data":"d0d35a4b890ecd9adfa055f93228d5ac9703a7501f3dccb86c387cfa4f1502ea"}
Nov 25 07:51:49 crc kubenswrapper[5043]: I1125 07:51:49.145911 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nmwsc" podStartSLOduration=2.414734536 podStartE2EDuration="5.145886756s" podCreationTimestamp="2025-11-25 07:51:44 +0000 UTC" firstStartedPulling="2025-11-25 07:51:46.082724526 +0000 UTC m=+2170.250920247" lastFinishedPulling="2025-11-25 07:51:48.813876746 +0000 UTC m=+2172.982072467" observedRunningTime="2025-11-25 07:51:49.138261643 +0000 UTC m=+2173.306457404" watchObservedRunningTime="2025-11-25 07:51:49.145886756 +0000 UTC m=+2173.314082477"
Nov 25 07:51:54 crc kubenswrapper[5043]: I1125 07:51:54.458158 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nmwsc"
Nov 25 07:51:54 crc kubenswrapper[5043]: I1125 07:51:54.458806 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nmwsc"
Nov 25 07:51:54 crc kubenswrapper[5043]: I1125 07:51:54.499718 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nmwsc"
Nov 25 07:51:55 crc kubenswrapper[5043]: I1125 07:51:55.219797 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nmwsc"
Nov 25 07:51:55 crc kubenswrapper[5043]: I1125 07:51:55.295558 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmwsc"]
Nov 25 07:51:57 crc kubenswrapper[5043]: I1125 07:51:57.190094 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nmwsc" podUID="2d7caa5d-fca9-48a8-a049-0c6bc4632d20" containerName="registry-server" containerID="cri-o://d0d35a4b890ecd9adfa055f93228d5ac9703a7501f3dccb86c387cfa4f1502ea" gracePeriod=2
Nov 25 07:51:58 crc kubenswrapper[5043]: I1125 07:51:58.201398 5043 generic.go:334] "Generic (PLEG): container finished" podID="2d7caa5d-fca9-48a8-a049-0c6bc4632d20" containerID="d0d35a4b890ecd9adfa055f93228d5ac9703a7501f3dccb86c387cfa4f1502ea" exitCode=0
Nov 25 07:51:58 crc kubenswrapper[5043]: I1125 07:51:58.201484 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmwsc" event={"ID":"2d7caa5d-fca9-48a8-a049-0c6bc4632d20","Type":"ContainerDied","Data":"d0d35a4b890ecd9adfa055f93228d5ac9703a7501f3dccb86c387cfa4f1502ea"}
Nov 25 07:51:58 crc kubenswrapper[5043]: I1125 07:51:58.201720 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmwsc" event={"ID":"2d7caa5d-fca9-48a8-a049-0c6bc4632d20","Type":"ContainerDied","Data":"2ec8aca2eaa6512f970d6efd578026075f24a8ee22a94d80edbae7c78b413867"}
Nov 25 07:51:58 crc kubenswrapper[5043]: I1125 07:51:58.201733 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ec8aca2eaa6512f970d6efd578026075f24a8ee22a94d80edbae7c78b413867"
Nov 25 07:51:58 crc kubenswrapper[5043]: I1125 07:51:58.244998 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmwsc"
Nov 25 07:51:58 crc kubenswrapper[5043]: I1125 07:51:58.356811 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmhn2\" (UniqueName: \"kubernetes.io/projected/2d7caa5d-fca9-48a8-a049-0c6bc4632d20-kube-api-access-mmhn2\") pod \"2d7caa5d-fca9-48a8-a049-0c6bc4632d20\" (UID: \"2d7caa5d-fca9-48a8-a049-0c6bc4632d20\") "
Nov 25 07:51:58 crc kubenswrapper[5043]: I1125 07:51:58.356888 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d7caa5d-fca9-48a8-a049-0c6bc4632d20-utilities\") pod \"2d7caa5d-fca9-48a8-a049-0c6bc4632d20\" (UID: \"2d7caa5d-fca9-48a8-a049-0c6bc4632d20\") "
Nov 25 07:51:58 crc kubenswrapper[5043]: I1125 07:51:58.356966 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d7caa5d-fca9-48a8-a049-0c6bc4632d20-catalog-content\") pod \"2d7caa5d-fca9-48a8-a049-0c6bc4632d20\" (UID: \"2d7caa5d-fca9-48a8-a049-0c6bc4632d20\") "
Nov 25 07:51:58 crc kubenswrapper[5043]: I1125 07:51:58.358367 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d7caa5d-fca9-48a8-a049-0c6bc4632d20-utilities" (OuterVolumeSpecName: "utilities") pod "2d7caa5d-fca9-48a8-a049-0c6bc4632d20" (UID: "2d7caa5d-fca9-48a8-a049-0c6bc4632d20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:51:58 crc kubenswrapper[5043]: I1125 07:51:58.367877 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d7caa5d-fca9-48a8-a049-0c6bc4632d20-kube-api-access-mmhn2" (OuterVolumeSpecName: "kube-api-access-mmhn2") pod "2d7caa5d-fca9-48a8-a049-0c6bc4632d20" (UID: "2d7caa5d-fca9-48a8-a049-0c6bc4632d20"). InnerVolumeSpecName "kube-api-access-mmhn2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:51:58 crc kubenswrapper[5043]: I1125 07:51:58.459950 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmhn2\" (UniqueName: \"kubernetes.io/projected/2d7caa5d-fca9-48a8-a049-0c6bc4632d20-kube-api-access-mmhn2\") on node \"crc\" DevicePath \"\""
Nov 25 07:51:58 crc kubenswrapper[5043]: I1125 07:51:58.459991 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d7caa5d-fca9-48a8-a049-0c6bc4632d20-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 07:51:58 crc kubenswrapper[5043]: I1125 07:51:58.937664 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d7caa5d-fca9-48a8-a049-0c6bc4632d20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d7caa5d-fca9-48a8-a049-0c6bc4632d20" (UID: "2d7caa5d-fca9-48a8-a049-0c6bc4632d20"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 07:51:58 crc kubenswrapper[5043]: I1125 07:51:58.968420 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d7caa5d-fca9-48a8-a049-0c6bc4632d20-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 07:51:59 crc kubenswrapper[5043]: I1125 07:51:59.213377 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmwsc"
Nov 25 07:51:59 crc kubenswrapper[5043]: I1125 07:51:59.251126 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmwsc"]
Nov 25 07:51:59 crc kubenswrapper[5043]: I1125 07:51:59.272009 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmwsc"]
Nov 25 07:52:00 crc kubenswrapper[5043]: I1125 07:52:00.976952 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d7caa5d-fca9-48a8-a049-0c6bc4632d20" path="/var/lib/kubelet/pods/2d7caa5d-fca9-48a8-a049-0c6bc4632d20/volumes"
Nov 25 07:52:17 crc kubenswrapper[5043]: I1125 07:52:17.276002 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 07:52:17 crc kubenswrapper[5043]: I1125 07:52:17.276541 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 07:52:47 crc kubenswrapper[5043]: I1125 07:52:47.277106 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 07:52:47 crc kubenswrapper[5043]: I1125 07:52:47.277885 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 07:52:47 crc kubenswrapper[5043]: I1125 07:52:47.277952 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx"
Nov 25 07:52:47 crc kubenswrapper[5043]: I1125 07:52:47.278967 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590"} pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 07:52:47 crc kubenswrapper[5043]: I1125 07:52:47.279054 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590" gracePeriod=600
Nov 25 07:52:47 crc kubenswrapper[5043]: E1125 07:52:47.417007 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 07:52:47 crc kubenswrapper[5043]: I1125 07:52:47.717054 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590" exitCode=0
Nov 25 07:52:47 crc kubenswrapper[5043]: I1125 07:52:47.717118 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590"}
Nov 25 07:52:47 crc kubenswrapper[5043]: I1125 07:52:47.717165 5043 scope.go:117] "RemoveContainer" containerID="7b33dac25f840f4655adb9773800d9ac479fdf0da60d9d9474d21c037b7a5eed"
Nov 25 07:52:47 crc kubenswrapper[5043]: I1125 07:52:47.718051 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590"
Nov 25 07:52:47 crc kubenswrapper[5043]: E1125 07:52:47.718477 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 07:52:55 crc kubenswrapper[5043]: I1125 07:52:55.418290 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6zd99"]
Nov 25 07:52:55 crc kubenswrapper[5043]: I1125 07:52:55.432484 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6zd99"]
Nov 25 07:52:55 crc kubenswrapper[5043]: I1125 07:52:55.439037 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm"]
Nov 25 07:52:55 crc kubenswrapper[5043]: I1125 07:52:55.444641 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-f95x7"]
Nov 25 07:52:55 crc kubenswrapper[5043]: I1125 07:52:55.449981 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf"]
Nov 25 07:52:55 crc kubenswrapper[5043]: I1125 07:52:55.455550 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs"]
Nov 25 07:52:55 crc kubenswrapper[5043]: I1125 07:52:55.461339 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-74jzv"]
Nov 25 07:52:55 crc kubenswrapper[5043]: I1125 07:52:55.467436 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z"]
Nov 25 07:52:55 crc kubenswrapper[5043]: I1125 07:52:55.473555 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-f95x7"]
Nov 25 07:52:55 crc kubenswrapper[5043]: I1125 07:52:55.479642 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9"]
Nov 25 07:52:55 crc kubenswrapper[5043]: I1125 07:52:55.485029 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2"]
Nov 25 07:52:55 crc kubenswrapper[5043]: I1125 07:52:55.490166 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jlzg9"]
Nov 25 07:52:55 crc kubenswrapper[5043]: I1125 07:52:55.496402 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g6wmf"]
Nov 25 07:52:55 crc kubenswrapper[5043]: I1125 07:52:55.503268 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-zc7qs"]
Nov 25 07:52:55 crc kubenswrapper[5043]: I1125 07:52:55.510695 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-qlnqm"]
Nov 25 07:52:55 crc kubenswrapper[5043]: I1125 07:52:55.517655 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8jrh2"]
Nov 25 07:52:55 crc kubenswrapper[5043]: I1125 07:52:55.524776 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jlzg9"]
Nov 25 07:52:55 crc kubenswrapper[5043]: I1125 07:52:55.531133 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-74jzv"]
Nov 25 07:52:55 crc kubenswrapper[5043]: I1125 07:52:55.538747 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z8lc9"]
Nov 25 07:52:55 crc kubenswrapper[5043]: I1125 07:52:55.544253 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-fcv9z"]
Nov 25 07:52:56 crc kubenswrapper[5043]: I1125 07:52:56.979302 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1504e796-1193-4c5c-a0a1-426c9b1b0702" path="/var/lib/kubelet/pods/1504e796-1193-4c5c-a0a1-426c9b1b0702/volumes"
Nov 25 07:52:56 crc kubenswrapper[5043]: I1125 07:52:56.980847 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a36c998-6b38-47b9-954d-3fd54b9bbecb" path="/var/lib/kubelet/pods/4a36c998-6b38-47b9-954d-3fd54b9bbecb/volumes"
Nov 25 07:52:56 crc kubenswrapper[5043]: I1125 07:52:56.982966 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4edb7473-0040-4944-aff9-fb0a7588d84f" path="/var/lib/kubelet/pods/4edb7473-0040-4944-aff9-fb0a7588d84f/volumes"
Nov 25 07:52:56 crc kubenswrapper[5043]: I1125 07:52:56.983796 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8299fcc4-ab8b-4f5c-997d-699bf3910311" path="/var/lib/kubelet/pods/8299fcc4-ab8b-4f5c-997d-699bf3910311/volumes"
Nov 25 07:52:56 crc kubenswrapper[5043]: I1125 07:52:56.984692 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87449fe9-18ed-4139-aab1-d0c7c0afa5b0" path="/var/lib/kubelet/pods/87449fe9-18ed-4139-aab1-d0c7c0afa5b0/volumes"
Nov 25 07:52:56 crc kubenswrapper[5043]: I1125 07:52:56.985388 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bea4e7e9-3d87-4933-ab50-bfc57646622d" path="/var/lib/kubelet/pods/bea4e7e9-3d87-4933-ab50-bfc57646622d/volumes"
Nov 25 07:52:56 crc kubenswrapper[5043]: I1125 07:52:56.986658 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4e7968b-0535-4ef0-990e-55872b901287" path="/var/lib/kubelet/pods/d4e7968b-0535-4ef0-990e-55872b901287/volumes"
Nov 25 07:52:56 crc kubenswrapper[5043]: I1125 07:52:56.987386 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1768074-dbe7-4cd0-b646-d4cb304ee5b4" path="/var/lib/kubelet/pods/f1768074-dbe7-4cd0-b646-d4cb304ee5b4/volumes"
Nov 25 07:52:56 crc kubenswrapper[5043]: I1125 07:52:56.988119 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe182205-f0dc-4ca1-9110-ca21d5e49620" path="/var/lib/kubelet/pods/fe182205-f0dc-4ca1-9110-ca21d5e49620/volumes"
Nov 25 07:52:56 crc kubenswrapper[5043]: I1125 07:52:56.989343 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff9438d8-bf96-477b-8e33-f7031940fff7" path="/var/lib/kubelet/pods/ff9438d8-bf96-477b-8e33-f7031940fff7/volumes"
Nov 25 07:52:58 crc kubenswrapper[5043]: I1125 07:52:58.963075 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590"
Nov 25 07:52:58 crc kubenswrapper[5043]: E1125 07:52:58.964167 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.565763 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5"]
Nov 25 07:53:01 crc kubenswrapper[5043]: E1125 07:53:01.567397 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d7caa5d-fca9-48a8-a049-0c6bc4632d20" containerName="extract-utilities"
Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.567494 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d7caa5d-fca9-48a8-a049-0c6bc4632d20" containerName="extract-utilities"
Nov 25 07:53:01 crc kubenswrapper[5043]: E1125 07:53:01.567580 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d7caa5d-fca9-48a8-a049-0c6bc4632d20" containerName="extract-content"
Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.567684 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d7caa5d-fca9-48a8-a049-0c6bc4632d20" containerName="extract-content"
Nov 25 07:53:01 crc kubenswrapper[5043]: E1125 07:53:01.567779 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d7caa5d-fca9-48a8-a049-0c6bc4632d20" containerName="registry-server"
Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.567853 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d7caa5d-fca9-48a8-a049-0c6bc4632d20" containerName="registry-server"
Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.568166 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d7caa5d-fca9-48a8-a049-0c6bc4632d20" containerName="registry-server"
Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.569044 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5"
Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.572414 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.572561 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.572760 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.572885 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.572891 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2"
Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.587598 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5"]
Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.689265 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff6ea425-8f08-4513-a444-ff524369c066-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5\" (UID: \"ff6ea425-8f08-4513-a444-ff524369c066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5"
Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.689357 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ff6ea425-8f08-4513-a444-ff524369c066-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5\" (UID: \"ff6ea425-8f08-4513-a444-ff524369c066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5"
Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.689393 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff6ea425-8f08-4513-a444-ff524369c066-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5\" (UID: \"ff6ea425-8f08-4513-a444-ff524369c066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5"
Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.689436 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff6ea425-8f08-4513-a444-ff524369c066-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5\" (UID: \"ff6ea425-8f08-4513-a444-ff524369c066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5"
Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.689496 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtzjl\" (UniqueName: \"kubernetes.io/projected/ff6ea425-8f08-4513-a444-ff524369c066-kube-api-access-vtzjl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5\" (UID: \"ff6ea425-8f08-4513-a444-ff524369c066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5"
Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.790903 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff6ea425-8f08-4513-a444-ff524369c066-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5\" (UID: \"ff6ea425-8f08-4513-a444-ff524369c066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5"
Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.791231 5043 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff6ea425-8f08-4513-a444-ff524369c066-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5\" (UID: \"ff6ea425-8f08-4513-a444-ff524369c066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5" Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.791471 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtzjl\" (UniqueName: \"kubernetes.io/projected/ff6ea425-8f08-4513-a444-ff524369c066-kube-api-access-vtzjl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5\" (UID: \"ff6ea425-8f08-4513-a444-ff524369c066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5" Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.791692 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff6ea425-8f08-4513-a444-ff524369c066-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5\" (UID: \"ff6ea425-8f08-4513-a444-ff524369c066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5" Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.791930 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ff6ea425-8f08-4513-a444-ff524369c066-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5\" (UID: \"ff6ea425-8f08-4513-a444-ff524369c066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5" Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.797360 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff6ea425-8f08-4513-a444-ff524369c066-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5\" (UID: \"ff6ea425-8f08-4513-a444-ff524369c066\") 
" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5" Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.798502 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff6ea425-8f08-4513-a444-ff524369c066-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5\" (UID: \"ff6ea425-8f08-4513-a444-ff524369c066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5" Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.798983 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ff6ea425-8f08-4513-a444-ff524369c066-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5\" (UID: \"ff6ea425-8f08-4513-a444-ff524369c066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5" Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.802565 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff6ea425-8f08-4513-a444-ff524369c066-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5\" (UID: \"ff6ea425-8f08-4513-a444-ff524369c066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5" Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.810400 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtzjl\" (UniqueName: \"kubernetes.io/projected/ff6ea425-8f08-4513-a444-ff524369c066-kube-api-access-vtzjl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5\" (UID: \"ff6ea425-8f08-4513-a444-ff524369c066\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5" Nov 25 07:53:01 crc kubenswrapper[5043]: I1125 07:53:01.891923 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5" Nov 25 07:53:02 crc kubenswrapper[5043]: I1125 07:53:02.423354 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5"] Nov 25 07:53:02 crc kubenswrapper[5043]: I1125 07:53:02.855280 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5" event={"ID":"ff6ea425-8f08-4513-a444-ff524369c066","Type":"ContainerStarted","Data":"945a5dc161eb14256bcef1973e6844141b94ecb6935a598edc6d56ea80d25838"} Nov 25 07:53:03 crc kubenswrapper[5043]: I1125 07:53:03.867482 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5" event={"ID":"ff6ea425-8f08-4513-a444-ff524369c066","Type":"ContainerStarted","Data":"d1830595782ef694571c2af65913f7e8c0904fd9b20fda184d296d8898e64e89"} Nov 25 07:53:03 crc kubenswrapper[5043]: I1125 07:53:03.898071 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5" podStartSLOduration=2.473547443 podStartE2EDuration="2.898021211s" podCreationTimestamp="2025-11-25 07:53:01 +0000 UTC" firstStartedPulling="2025-11-25 07:53:02.444672404 +0000 UTC m=+2246.612868115" lastFinishedPulling="2025-11-25 07:53:02.869146112 +0000 UTC m=+2247.037341883" observedRunningTime="2025-11-25 07:53:03.895461953 +0000 UTC m=+2248.063657694" watchObservedRunningTime="2025-11-25 07:53:03.898021211 +0000 UTC m=+2248.066216932" Nov 25 07:53:13 crc kubenswrapper[5043]: I1125 07:53:13.962846 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590" Nov 25 07:53:13 crc kubenswrapper[5043]: E1125 07:53:13.963584 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:53:16 crc kubenswrapper[5043]: I1125 07:53:16.985692 5043 generic.go:334] "Generic (PLEG): container finished" podID="ff6ea425-8f08-4513-a444-ff524369c066" containerID="d1830595782ef694571c2af65913f7e8c0904fd9b20fda184d296d8898e64e89" exitCode=0 Nov 25 07:53:16 crc kubenswrapper[5043]: I1125 07:53:16.985808 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5" event={"ID":"ff6ea425-8f08-4513-a444-ff524369c066","Type":"ContainerDied","Data":"d1830595782ef694571c2af65913f7e8c0904fd9b20fda184d296d8898e64e89"} Nov 25 07:53:18 crc kubenswrapper[5043]: I1125 07:53:18.409879 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5" Nov 25 07:53:18 crc kubenswrapper[5043]: I1125 07:53:18.504330 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff6ea425-8f08-4513-a444-ff524369c066-inventory\") pod \"ff6ea425-8f08-4513-a444-ff524369c066\" (UID: \"ff6ea425-8f08-4513-a444-ff524369c066\") " Nov 25 07:53:18 crc kubenswrapper[5043]: I1125 07:53:18.504933 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff6ea425-8f08-4513-a444-ff524369c066-repo-setup-combined-ca-bundle\") pod \"ff6ea425-8f08-4513-a444-ff524369c066\" (UID: \"ff6ea425-8f08-4513-a444-ff524369c066\") " Nov 25 07:53:18 crc kubenswrapper[5043]: I1125 07:53:18.505151 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/ff6ea425-8f08-4513-a444-ff524369c066-ssh-key\") pod \"ff6ea425-8f08-4513-a444-ff524369c066\" (UID: \"ff6ea425-8f08-4513-a444-ff524369c066\") " Nov 25 07:53:18 crc kubenswrapper[5043]: I1125 07:53:18.505240 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ff6ea425-8f08-4513-a444-ff524369c066-ceph\") pod \"ff6ea425-8f08-4513-a444-ff524369c066\" (UID: \"ff6ea425-8f08-4513-a444-ff524369c066\") " Nov 25 07:53:18 crc kubenswrapper[5043]: I1125 07:53:18.505293 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtzjl\" (UniqueName: \"kubernetes.io/projected/ff6ea425-8f08-4513-a444-ff524369c066-kube-api-access-vtzjl\") pod \"ff6ea425-8f08-4513-a444-ff524369c066\" (UID: \"ff6ea425-8f08-4513-a444-ff524369c066\") " Nov 25 07:53:18 crc kubenswrapper[5043]: I1125 07:53:18.513999 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff6ea425-8f08-4513-a444-ff524369c066-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "ff6ea425-8f08-4513-a444-ff524369c066" (UID: "ff6ea425-8f08-4513-a444-ff524369c066"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:53:18 crc kubenswrapper[5043]: I1125 07:53:18.516337 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff6ea425-8f08-4513-a444-ff524369c066-kube-api-access-vtzjl" (OuterVolumeSpecName: "kube-api-access-vtzjl") pod "ff6ea425-8f08-4513-a444-ff524369c066" (UID: "ff6ea425-8f08-4513-a444-ff524369c066"). InnerVolumeSpecName "kube-api-access-vtzjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:53:18 crc kubenswrapper[5043]: I1125 07:53:18.517862 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff6ea425-8f08-4513-a444-ff524369c066-ceph" (OuterVolumeSpecName: "ceph") pod "ff6ea425-8f08-4513-a444-ff524369c066" (UID: "ff6ea425-8f08-4513-a444-ff524369c066"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:53:18 crc kubenswrapper[5043]: I1125 07:53:18.541925 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff6ea425-8f08-4513-a444-ff524369c066-inventory" (OuterVolumeSpecName: "inventory") pod "ff6ea425-8f08-4513-a444-ff524369c066" (UID: "ff6ea425-8f08-4513-a444-ff524369c066"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:53:18 crc kubenswrapper[5043]: I1125 07:53:18.548361 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff6ea425-8f08-4513-a444-ff524369c066-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ff6ea425-8f08-4513-a444-ff524369c066" (UID: "ff6ea425-8f08-4513-a444-ff524369c066"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:53:18 crc kubenswrapper[5043]: I1125 07:53:18.607513 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff6ea425-8f08-4513-a444-ff524369c066-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:53:18 crc kubenswrapper[5043]: I1125 07:53:18.607542 5043 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ff6ea425-8f08-4513-a444-ff524369c066-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 07:53:18 crc kubenswrapper[5043]: I1125 07:53:18.607552 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtzjl\" (UniqueName: \"kubernetes.io/projected/ff6ea425-8f08-4513-a444-ff524369c066-kube-api-access-vtzjl\") on node \"crc\" DevicePath \"\"" Nov 25 07:53:18 crc kubenswrapper[5043]: I1125 07:53:18.607565 5043 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff6ea425-8f08-4513-a444-ff524369c066-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:53:18 crc kubenswrapper[5043]: I1125 07:53:18.607574 5043 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff6ea425-8f08-4513-a444-ff524369c066-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.022434 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.022422 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5" event={"ID":"ff6ea425-8f08-4513-a444-ff524369c066","Type":"ContainerDied","Data":"945a5dc161eb14256bcef1973e6844141b94ecb6935a598edc6d56ea80d25838"} Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.022525 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="945a5dc161eb14256bcef1973e6844141b94ecb6935a598edc6d56ea80d25838" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.111200 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq"] Nov 25 07:53:19 crc kubenswrapper[5043]: E1125 07:53:19.111796 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff6ea425-8f08-4513-a444-ff524369c066" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.111823 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff6ea425-8f08-4513-a444-ff524369c066" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.112103 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff6ea425-8f08-4513-a444-ff524369c066" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.112856 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.116226 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.116845 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.118389 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.119831 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.126665 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.136641 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq"] Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.229586 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cd5db81c-0d4f-4c55-9539-203619adfac7-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq\" (UID: \"cd5db81c-0d4f-4c55-9539-203619adfac7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.229769 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5db81c-0d4f-4c55-9539-203619adfac7-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq\" (UID: \"cd5db81c-0d4f-4c55-9539-203619adfac7\") 
" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.229821 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l86t\" (UniqueName: \"kubernetes.io/projected/cd5db81c-0d4f-4c55-9539-203619adfac7-kube-api-access-8l86t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq\" (UID: \"cd5db81c-0d4f-4c55-9539-203619adfac7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.229856 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd5db81c-0d4f-4c55-9539-203619adfac7-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq\" (UID: \"cd5db81c-0d4f-4c55-9539-203619adfac7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.230174 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cd5db81c-0d4f-4c55-9539-203619adfac7-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq\" (UID: \"cd5db81c-0d4f-4c55-9539-203619adfac7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.332710 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cd5db81c-0d4f-4c55-9539-203619adfac7-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq\" (UID: \"cd5db81c-0d4f-4c55-9539-203619adfac7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.332912 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/cd5db81c-0d4f-4c55-9539-203619adfac7-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq\" (UID: \"cd5db81c-0d4f-4c55-9539-203619adfac7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.333007 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5db81c-0d4f-4c55-9539-203619adfac7-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq\" (UID: \"cd5db81c-0d4f-4c55-9539-203619adfac7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.333066 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l86t\" (UniqueName: \"kubernetes.io/projected/cd5db81c-0d4f-4c55-9539-203619adfac7-kube-api-access-8l86t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq\" (UID: \"cd5db81c-0d4f-4c55-9539-203619adfac7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.333118 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd5db81c-0d4f-4c55-9539-203619adfac7-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq\" (UID: \"cd5db81c-0d4f-4c55-9539-203619adfac7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.337164 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5db81c-0d4f-4c55-9539-203619adfac7-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq\" (UID: \"cd5db81c-0d4f-4c55-9539-203619adfac7\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.337326 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cd5db81c-0d4f-4c55-9539-203619adfac7-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq\" (UID: \"cd5db81c-0d4f-4c55-9539-203619adfac7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.340086 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd5db81c-0d4f-4c55-9539-203619adfac7-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq\" (UID: \"cd5db81c-0d4f-4c55-9539-203619adfac7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.355431 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cd5db81c-0d4f-4c55-9539-203619adfac7-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq\" (UID: \"cd5db81c-0d4f-4c55-9539-203619adfac7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.356235 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l86t\" (UniqueName: \"kubernetes.io/projected/cd5db81c-0d4f-4c55-9539-203619adfac7-kube-api-access-8l86t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq\" (UID: \"cd5db81c-0d4f-4c55-9539-203619adfac7\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq" Nov 25 07:53:19 crc kubenswrapper[5043]: I1125 07:53:19.442553 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq" Nov 25 07:53:20 crc kubenswrapper[5043]: I1125 07:53:20.026434 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq"] Nov 25 07:53:21 crc kubenswrapper[5043]: I1125 07:53:21.047179 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq" event={"ID":"cd5db81c-0d4f-4c55-9539-203619adfac7","Type":"ContainerStarted","Data":"ae487c7a6310fe0cbf5e693db4f51627ebf6464b558f963aec60762fc8b4e211"} Nov 25 07:53:21 crc kubenswrapper[5043]: I1125 07:53:21.047595 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq" event={"ID":"cd5db81c-0d4f-4c55-9539-203619adfac7","Type":"ContainerStarted","Data":"ddbc5ece94a08ef27714439e1e200256e86b327e8807b7adbed111cb16f425f3"} Nov 25 07:53:21 crc kubenswrapper[5043]: I1125 07:53:21.084863 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq" podStartSLOduration=1.651123093 podStartE2EDuration="2.084839658s" podCreationTimestamp="2025-11-25 07:53:19 +0000 UTC" firstStartedPulling="2025-11-25 07:53:20.040372303 +0000 UTC m=+2264.208568034" lastFinishedPulling="2025-11-25 07:53:20.474088868 +0000 UTC m=+2264.642284599" observedRunningTime="2025-11-25 07:53:21.079165747 +0000 UTC m=+2265.247361508" watchObservedRunningTime="2025-11-25 07:53:21.084839658 +0000 UTC m=+2265.253035419" Nov 25 07:53:25 crc kubenswrapper[5043]: I1125 07:53:25.963258 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590" Nov 25 07:53:25 crc kubenswrapper[5043]: E1125 07:53:25.964262 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:53:37 crc kubenswrapper[5043]: I1125 07:53:37.963069 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590" Nov 25 07:53:37 crc kubenswrapper[5043]: E1125 07:53:37.963938 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:53:51 crc kubenswrapper[5043]: I1125 07:53:51.252075 5043 scope.go:117] "RemoveContainer" containerID="bc6a1e06b7e52f7c8a8aa67f1debf32b0f1cc3498ea73ecb450eb9e4e84d7bae" Nov 25 07:53:51 crc kubenswrapper[5043]: I1125 07:53:51.325943 5043 scope.go:117] "RemoveContainer" containerID="fc34be8edc0c22997dd77919dd55c12f5e6b326beda52beaf63f27f05dfe1d8f" Nov 25 07:53:51 crc kubenswrapper[5043]: I1125 07:53:51.383034 5043 scope.go:117] "RemoveContainer" containerID="e667625672f069025ce62e7d00149b2a84d3f21c2af88a2c53df2a34408c0dd0" Nov 25 07:53:51 crc kubenswrapper[5043]: I1125 07:53:51.455638 5043 scope.go:117] "RemoveContainer" containerID="f78f09763dfa815ed6cc59ab6882de2bf91e0f2df8f2b50b03d2c4a9ac7b1fb1" Nov 25 07:53:51 crc kubenswrapper[5043]: I1125 07:53:51.486123 5043 scope.go:117] "RemoveContainer" containerID="1447dacc23de9a738a86c1b92174489f127d097a238a6f41b4de3d16bfa6029e" Nov 25 07:53:51 crc kubenswrapper[5043]: I1125 07:53:51.522435 5043 scope.go:117] "RemoveContainer" 
containerID="512b8c9486348df649743ee995641b8272a8998df958db3475d0edd50e56083b"
Nov 25 07:53:51 crc kubenswrapper[5043]: I1125 07:53:51.565742 5043 scope.go:117] "RemoveContainer" containerID="56ef6b16e39df5346115fc43376eaf2c8010f4b20311e6011cc3571ec68a40e2"
Nov 25 07:53:51 crc kubenswrapper[5043]: I1125 07:53:51.622033 5043 scope.go:117] "RemoveContainer" containerID="0b22b8f8531151638a715eb9062387b749352386849c8f1d64c63289145bb41d"
Nov 25 07:53:51 crc kubenswrapper[5043]: I1125 07:53:51.669289 5043 scope.go:117] "RemoveContainer" containerID="bc4918983ee0267ed600f573a9ea3430b7069c86cef3cc741a6ae0ebd8ea0334"
Nov 25 07:53:51 crc kubenswrapper[5043]: I1125 07:53:51.726446 5043 scope.go:117] "RemoveContainer" containerID="25d7d8e91374fd22998207cb0665713bee8cccbc6a575fe7d0cda99aebe0a169"
Nov 25 07:53:52 crc kubenswrapper[5043]: I1125 07:53:52.962706 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590"
Nov 25 07:53:52 crc kubenswrapper[5043]: E1125 07:53:52.963327 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 07:54:04 crc kubenswrapper[5043]: I1125 07:54:04.963402 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590"
Nov 25 07:54:04 crc kubenswrapper[5043]: E1125 07:54:04.964139 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 07:54:16 crc kubenswrapper[5043]: I1125 07:54:16.972369 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590"
Nov 25 07:54:16 crc kubenswrapper[5043]: E1125 07:54:16.973205 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 07:54:30 crc kubenswrapper[5043]: I1125 07:54:30.963117 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590"
Nov 25 07:54:30 crc kubenswrapper[5043]: E1125 07:54:30.963933 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 07:54:43 crc kubenswrapper[5043]: I1125 07:54:43.962491 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590"
Nov 25 07:54:43 crc kubenswrapper[5043]: E1125 07:54:43.963149 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 07:54:54 crc kubenswrapper[5043]: I1125 07:54:54.962419 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590"
Nov 25 07:54:54 crc kubenswrapper[5043]: E1125 07:54:54.964449 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 07:55:09 crc kubenswrapper[5043]: I1125 07:55:09.963596 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590"
Nov 25 07:55:09 crc kubenswrapper[5043]: E1125 07:55:09.965076 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 07:55:18 crc kubenswrapper[5043]: I1125 07:55:18.755762 5043 generic.go:334] "Generic (PLEG): container finished" podID="cd5db81c-0d4f-4c55-9539-203619adfac7" containerID="ae487c7a6310fe0cbf5e693db4f51627ebf6464b558f963aec60762fc8b4e211" exitCode=0
Nov 25 07:55:18 crc kubenswrapper[5043]: I1125 07:55:18.755970 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq" event={"ID":"cd5db81c-0d4f-4c55-9539-203619adfac7","Type":"ContainerDied","Data":"ae487c7a6310fe0cbf5e693db4f51627ebf6464b558f963aec60762fc8b4e211"}
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.260063 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq"
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.391425 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cd5db81c-0d4f-4c55-9539-203619adfac7-ssh-key\") pod \"cd5db81c-0d4f-4c55-9539-203619adfac7\" (UID: \"cd5db81c-0d4f-4c55-9539-203619adfac7\") "
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.391480 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd5db81c-0d4f-4c55-9539-203619adfac7-inventory\") pod \"cd5db81c-0d4f-4c55-9539-203619adfac7\" (UID: \"cd5db81c-0d4f-4c55-9539-203619adfac7\") "
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.391547 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5db81c-0d4f-4c55-9539-203619adfac7-bootstrap-combined-ca-bundle\") pod \"cd5db81c-0d4f-4c55-9539-203619adfac7\" (UID: \"cd5db81c-0d4f-4c55-9539-203619adfac7\") "
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.391641 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l86t\" (UniqueName: \"kubernetes.io/projected/cd5db81c-0d4f-4c55-9539-203619adfac7-kube-api-access-8l86t\") pod \"cd5db81c-0d4f-4c55-9539-203619adfac7\" (UID: \"cd5db81c-0d4f-4c55-9539-203619adfac7\") "
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.391696 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cd5db81c-0d4f-4c55-9539-203619adfac7-ceph\") pod \"cd5db81c-0d4f-4c55-9539-203619adfac7\" (UID: \"cd5db81c-0d4f-4c55-9539-203619adfac7\") "
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.397785 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5db81c-0d4f-4c55-9539-203619adfac7-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "cd5db81c-0d4f-4c55-9539-203619adfac7" (UID: "cd5db81c-0d4f-4c55-9539-203619adfac7"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.397845 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5db81c-0d4f-4c55-9539-203619adfac7-ceph" (OuterVolumeSpecName: "ceph") pod "cd5db81c-0d4f-4c55-9539-203619adfac7" (UID: "cd5db81c-0d4f-4c55-9539-203619adfac7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.397915 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd5db81c-0d4f-4c55-9539-203619adfac7-kube-api-access-8l86t" (OuterVolumeSpecName: "kube-api-access-8l86t") pod "cd5db81c-0d4f-4c55-9539-203619adfac7" (UID: "cd5db81c-0d4f-4c55-9539-203619adfac7"). InnerVolumeSpecName "kube-api-access-8l86t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.417306 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5db81c-0d4f-4c55-9539-203619adfac7-inventory" (OuterVolumeSpecName: "inventory") pod "cd5db81c-0d4f-4c55-9539-203619adfac7" (UID: "cd5db81c-0d4f-4c55-9539-203619adfac7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.448303 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5db81c-0d4f-4c55-9539-203619adfac7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cd5db81c-0d4f-4c55-9539-203619adfac7" (UID: "cd5db81c-0d4f-4c55-9539-203619adfac7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.495075 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cd5db81c-0d4f-4c55-9539-203619adfac7-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.495111 5043 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd5db81c-0d4f-4c55-9539-203619adfac7-inventory\") on node \"crc\" DevicePath \"\""
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.495128 5043 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd5db81c-0d4f-4c55-9539-203619adfac7-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.495142 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l86t\" (UniqueName: \"kubernetes.io/projected/cd5db81c-0d4f-4c55-9539-203619adfac7-kube-api-access-8l86t\") on node \"crc\" DevicePath \"\""
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.495154 5043 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cd5db81c-0d4f-4c55-9539-203619adfac7-ceph\") on node \"crc\" DevicePath \"\""
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.773195 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq" event={"ID":"cd5db81c-0d4f-4c55-9539-203619adfac7","Type":"ContainerDied","Data":"ddbc5ece94a08ef27714439e1e200256e86b327e8807b7adbed111cb16f425f3"}
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.773235 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddbc5ece94a08ef27714439e1e200256e86b327e8807b7adbed111cb16f425f3"
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.773243 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq"
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.847119 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8"]
Nov 25 07:55:20 crc kubenswrapper[5043]: E1125 07:55:20.847766 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5db81c-0d4f-4c55-9539-203619adfac7" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.847784 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5db81c-0d4f-4c55-9539-203619adfac7" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.848041 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd5db81c-0d4f-4c55-9539-203619adfac7" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.849570 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8"
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.852209 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.852435 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.852598 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.852918 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2"
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.859180 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8"]
Nov 25 07:55:20 crc kubenswrapper[5043]: I1125 07:55:20.859718 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 25 07:55:21 crc kubenswrapper[5043]: I1125 07:55:21.004035 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5c2779ae-e706-494e-9b9a-155774a61d31-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8\" (UID: \"5c2779ae-e706-494e-9b9a-155774a61d31\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8"
Nov 25 07:55:21 crc kubenswrapper[5043]: I1125 07:55:21.004175 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp7ck\" (UniqueName: \"kubernetes.io/projected/5c2779ae-e706-494e-9b9a-155774a61d31-kube-api-access-bp7ck\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8\" (UID: \"5c2779ae-e706-494e-9b9a-155774a61d31\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8"
Nov 25 07:55:21 crc kubenswrapper[5043]: I1125 07:55:21.004218 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c2779ae-e706-494e-9b9a-155774a61d31-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8\" (UID: \"5c2779ae-e706-494e-9b9a-155774a61d31\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8"
Nov 25 07:55:21 crc kubenswrapper[5043]: I1125 07:55:21.004352 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c2779ae-e706-494e-9b9a-155774a61d31-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8\" (UID: \"5c2779ae-e706-494e-9b9a-155774a61d31\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8"
Nov 25 07:55:21 crc kubenswrapper[5043]: I1125 07:55:21.105349 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c2779ae-e706-494e-9b9a-155774a61d31-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8\" (UID: \"5c2779ae-e706-494e-9b9a-155774a61d31\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8"
Nov 25 07:55:21 crc kubenswrapper[5043]: I1125 07:55:21.105424 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5c2779ae-e706-494e-9b9a-155774a61d31-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8\" (UID: \"5c2779ae-e706-494e-9b9a-155774a61d31\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8"
Nov 25 07:55:21 crc kubenswrapper[5043]: I1125 07:55:21.105470 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp7ck\" (UniqueName: \"kubernetes.io/projected/5c2779ae-e706-494e-9b9a-155774a61d31-kube-api-access-bp7ck\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8\" (UID: \"5c2779ae-e706-494e-9b9a-155774a61d31\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8"
Nov 25 07:55:21 crc kubenswrapper[5043]: I1125 07:55:21.105490 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c2779ae-e706-494e-9b9a-155774a61d31-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8\" (UID: \"5c2779ae-e706-494e-9b9a-155774a61d31\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8"
Nov 25 07:55:21 crc kubenswrapper[5043]: I1125 07:55:21.110320 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c2779ae-e706-494e-9b9a-155774a61d31-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8\" (UID: \"5c2779ae-e706-494e-9b9a-155774a61d31\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8"
Nov 25 07:55:21 crc kubenswrapper[5043]: I1125 07:55:21.116542 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5c2779ae-e706-494e-9b9a-155774a61d31-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8\" (UID: \"5c2779ae-e706-494e-9b9a-155774a61d31\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8"
Nov 25 07:55:21 crc kubenswrapper[5043]: I1125 07:55:21.116575 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c2779ae-e706-494e-9b9a-155774a61d31-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8\" (UID: \"5c2779ae-e706-494e-9b9a-155774a61d31\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8"
Nov 25 07:55:21 crc kubenswrapper[5043]: I1125 07:55:21.122025 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp7ck\" (UniqueName: \"kubernetes.io/projected/5c2779ae-e706-494e-9b9a-155774a61d31-kube-api-access-bp7ck\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8\" (UID: \"5c2779ae-e706-494e-9b9a-155774a61d31\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8"
Nov 25 07:55:21 crc kubenswrapper[5043]: I1125 07:55:21.169330 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8"
Nov 25 07:55:21 crc kubenswrapper[5043]: I1125 07:55:21.690526 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8"]
Nov 25 07:55:21 crc kubenswrapper[5043]: I1125 07:55:21.697905 5043 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 25 07:55:21 crc kubenswrapper[5043]: I1125 07:55:21.781895 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8" event={"ID":"5c2779ae-e706-494e-9b9a-155774a61d31","Type":"ContainerStarted","Data":"b348c6c611956bafc551526be5405063c6a2a2dc2d73016853a2c45780029373"}
Nov 25 07:55:22 crc kubenswrapper[5043]: I1125 07:55:22.793328 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8" event={"ID":"5c2779ae-e706-494e-9b9a-155774a61d31","Type":"ContainerStarted","Data":"ff711e5961d8ccdd9890ee552ef1f94f3a220198340269a96434d1306e0bdc3a"}
Nov 25 07:55:22 crc kubenswrapper[5043]: I1125 07:55:22.820979 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8" podStartSLOduration=2.368909482 podStartE2EDuration="2.820957036s" podCreationTimestamp="2025-11-25 07:55:20 +0000 UTC" firstStartedPulling="2025-11-25 07:55:21.697665367 +0000 UTC m=+2385.865861088" lastFinishedPulling="2025-11-25 07:55:22.149712921 +0000 UTC m=+2386.317908642" observedRunningTime="2025-11-25 07:55:22.811686598 +0000 UTC m=+2386.979882339" watchObservedRunningTime="2025-11-25 07:55:22.820957036 +0000 UTC m=+2386.989152757"
Nov 25 07:55:22 crc kubenswrapper[5043]: I1125 07:55:22.962596 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590"
Nov 25 07:55:22 crc kubenswrapper[5043]: E1125 07:55:22.962899 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 07:55:34 crc kubenswrapper[5043]: I1125 07:55:34.963141 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590"
Nov 25 07:55:34 crc kubenswrapper[5043]: E1125 07:55:34.964529 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 07:55:49 crc kubenswrapper[5043]: I1125 07:55:49.964032 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590"
Nov 25 07:55:49 crc kubenswrapper[5043]: E1125 07:55:49.965270 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 07:55:51 crc kubenswrapper[5043]: I1125 07:55:51.034566 5043 generic.go:334] "Generic (PLEG): container finished" podID="5c2779ae-e706-494e-9b9a-155774a61d31" containerID="ff711e5961d8ccdd9890ee552ef1f94f3a220198340269a96434d1306e0bdc3a" exitCode=0
Nov 25 07:55:51 crc kubenswrapper[5043]: I1125 07:55:51.034787 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8" event={"ID":"5c2779ae-e706-494e-9b9a-155774a61d31","Type":"ContainerDied","Data":"ff711e5961d8ccdd9890ee552ef1f94f3a220198340269a96434d1306e0bdc3a"}
Nov 25 07:55:52 crc kubenswrapper[5043]: I1125 07:55:52.489218 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8"
Nov 25 07:55:52 crc kubenswrapper[5043]: I1125 07:55:52.506678 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5c2779ae-e706-494e-9b9a-155774a61d31-ceph\") pod \"5c2779ae-e706-494e-9b9a-155774a61d31\" (UID: \"5c2779ae-e706-494e-9b9a-155774a61d31\") "
Nov 25 07:55:52 crc kubenswrapper[5043]: I1125 07:55:52.506721 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c2779ae-e706-494e-9b9a-155774a61d31-ssh-key\") pod \"5c2779ae-e706-494e-9b9a-155774a61d31\" (UID: \"5c2779ae-e706-494e-9b9a-155774a61d31\") "
Nov 25 07:55:52 crc kubenswrapper[5043]: I1125 07:55:52.506847 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp7ck\" (UniqueName: \"kubernetes.io/projected/5c2779ae-e706-494e-9b9a-155774a61d31-kube-api-access-bp7ck\") pod \"5c2779ae-e706-494e-9b9a-155774a61d31\" (UID: \"5c2779ae-e706-494e-9b9a-155774a61d31\") "
Nov 25 07:55:52 crc kubenswrapper[5043]: I1125 07:55:52.506907 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c2779ae-e706-494e-9b9a-155774a61d31-inventory\") pod \"5c2779ae-e706-494e-9b9a-155774a61d31\" (UID: \"5c2779ae-e706-494e-9b9a-155774a61d31\") "
Nov 25 07:55:52 crc kubenswrapper[5043]: I1125 07:55:52.512731 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c2779ae-e706-494e-9b9a-155774a61d31-kube-api-access-bp7ck" (OuterVolumeSpecName: "kube-api-access-bp7ck") pod "5c2779ae-e706-494e-9b9a-155774a61d31" (UID: "5c2779ae-e706-494e-9b9a-155774a61d31"). InnerVolumeSpecName "kube-api-access-bp7ck". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 07:55:52 crc kubenswrapper[5043]: I1125 07:55:52.514061 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c2779ae-e706-494e-9b9a-155774a61d31-ceph" (OuterVolumeSpecName: "ceph") pod "5c2779ae-e706-494e-9b9a-155774a61d31" (UID: "5c2779ae-e706-494e-9b9a-155774a61d31"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:55:52 crc kubenswrapper[5043]: I1125 07:55:52.551837 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c2779ae-e706-494e-9b9a-155774a61d31-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5c2779ae-e706-494e-9b9a-155774a61d31" (UID: "5c2779ae-e706-494e-9b9a-155774a61d31"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:55:52 crc kubenswrapper[5043]: I1125 07:55:52.552818 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c2779ae-e706-494e-9b9a-155774a61d31-inventory" (OuterVolumeSpecName: "inventory") pod "5c2779ae-e706-494e-9b9a-155774a61d31" (UID: "5c2779ae-e706-494e-9b9a-155774a61d31"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 07:55:52 crc kubenswrapper[5043]: I1125 07:55:52.610655 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp7ck\" (UniqueName: \"kubernetes.io/projected/5c2779ae-e706-494e-9b9a-155774a61d31-kube-api-access-bp7ck\") on node \"crc\" DevicePath \"\""
Nov 25 07:55:52 crc kubenswrapper[5043]: I1125 07:55:52.610691 5043 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c2779ae-e706-494e-9b9a-155774a61d31-inventory\") on node \"crc\" DevicePath \"\""
Nov 25 07:55:52 crc kubenswrapper[5043]: I1125 07:55:52.610701 5043 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5c2779ae-e706-494e-9b9a-155774a61d31-ceph\") on node \"crc\" DevicePath \"\""
Nov 25 07:55:52 crc kubenswrapper[5043]: I1125 07:55:52.610709 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c2779ae-e706-494e-9b9a-155774a61d31-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.052343 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8" event={"ID":"5c2779ae-e706-494e-9b9a-155774a61d31","Type":"ContainerDied","Data":"b348c6c611956bafc551526be5405063c6a2a2dc2d73016853a2c45780029373"}
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.052384 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b348c6c611956bafc551526be5405063c6a2a2dc2d73016853a2c45780029373"
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.052447 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8"
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.209977 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5"]
Nov 25 07:55:53 crc kubenswrapper[5043]: E1125 07:55:53.210322 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c2779ae-e706-494e-9b9a-155774a61d31" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.210339 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c2779ae-e706-494e-9b9a-155774a61d31" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.210501 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c2779ae-e706-494e-9b9a-155774a61d31" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.211072 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5"
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.213457 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.213541 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.213941 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.214194 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.214394 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2"
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.221447 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f22dd652-56fe-432d-a66f-806586c1c352-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5\" (UID: \"f22dd652-56fe-432d-a66f-806586c1c352\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5"
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.221626 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lssdz\" (UniqueName: \"kubernetes.io/projected/f22dd652-56fe-432d-a66f-806586c1c352-kube-api-access-lssdz\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5\" (UID: \"f22dd652-56fe-432d-a66f-806586c1c352\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5"
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.221818 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f22dd652-56fe-432d-a66f-806586c1c352-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5\" (UID: \"f22dd652-56fe-432d-a66f-806586c1c352\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5"
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.221884 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f22dd652-56fe-432d-a66f-806586c1c352-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5\" (UID: \"f22dd652-56fe-432d-a66f-806586c1c352\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5"
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.227143 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5"]
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.324001 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lssdz\" (UniqueName: \"kubernetes.io/projected/f22dd652-56fe-432d-a66f-806586c1c352-kube-api-access-lssdz\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5\" (UID: \"f22dd652-56fe-432d-a66f-806586c1c352\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5"
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.324225 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f22dd652-56fe-432d-a66f-806586c1c352-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5\" (UID: \"f22dd652-56fe-432d-a66f-806586c1c352\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5"
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.324280 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f22dd652-56fe-432d-a66f-806586c1c352-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5\" (UID: \"f22dd652-56fe-432d-a66f-806586c1c352\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5"
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.324335 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f22dd652-56fe-432d-a66f-806586c1c352-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5\" (UID: \"f22dd652-56fe-432d-a66f-806586c1c352\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5"
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.328980 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f22dd652-56fe-432d-a66f-806586c1c352-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5\" (UID: \"f22dd652-56fe-432d-a66f-806586c1c352\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5"
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.328991 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f22dd652-56fe-432d-a66f-806586c1c352-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5\" (UID: \"f22dd652-56fe-432d-a66f-806586c1c352\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5"
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.330144 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f22dd652-56fe-432d-a66f-806586c1c352-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5\" (UID: \"f22dd652-56fe-432d-a66f-806586c1c352\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5"
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.349033 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lssdz\" (UniqueName: \"kubernetes.io/projected/f22dd652-56fe-432d-a66f-806586c1c352-kube-api-access-lssdz\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5\" (UID: \"f22dd652-56fe-432d-a66f-806586c1c352\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5"
Nov 25 07:55:53 crc kubenswrapper[5043]: I1125 07:55:53.530802 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5"
Nov 25 07:55:54 crc kubenswrapper[5043]: I1125 07:55:54.021276 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5"]
Nov 25 07:55:54 crc kubenswrapper[5043]: I1125 07:55:54.060399 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5" event={"ID":"f22dd652-56fe-432d-a66f-806586c1c352","Type":"ContainerStarted","Data":"059df331d014a13c1d12c3ac115e00ac6421a08a5f24f60f1fae1a5a8df06647"}
Nov 25 07:55:55 crc kubenswrapper[5043]: I1125 07:55:55.069873 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5" event={"ID":"f22dd652-56fe-432d-a66f-806586c1c352","Type":"ContainerStarted","Data":"0f4ac69bc4737bfbfb473118ce8e18f3765e828793087d30aa3df6d71a9aceb2"}
Nov 25 07:55:55 crc kubenswrapper[5043]: I1125 07:55:55.086963 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5" podStartSLOduration=1.658270194 podStartE2EDuration="2.086943908s" podCreationTimestamp="2025-11-25 07:55:53 +0000 UTC" firstStartedPulling="2025-11-25 07:55:54.036367449 +0000 UTC m=+2418.204563170" lastFinishedPulling="2025-11-25 07:55:54.465041163 +0000 UTC m=+2418.633236884" observedRunningTime="2025-11-25 07:55:55.08326459 +0000 UTC m=+2419.251460321" watchObservedRunningTime="2025-11-25 07:55:55.086943908 +0000 UTC m=+2419.255139629"
Nov 25 07:56:00 crc kubenswrapper[5043]: I1125 07:56:00.111178 5043 generic.go:334] "Generic (PLEG): container finished" podID="f22dd652-56fe-432d-a66f-806586c1c352" containerID="0f4ac69bc4737bfbfb473118ce8e18f3765e828793087d30aa3df6d71a9aceb2" exitCode=0
Nov 25 07:56:00 crc kubenswrapper[5043]: I1125 07:56:00.111268 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5" event={"ID":"f22dd652-56fe-432d-a66f-806586c1c352","Type":"ContainerDied","Data":"0f4ac69bc4737bfbfb473118ce8e18f3765e828793087d30aa3df6d71a9aceb2"}
Nov 25 07:56:01 crc kubenswrapper[5043]: I1125 07:56:01.551141 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5" Nov 25 07:56:01 crc kubenswrapper[5043]: I1125 07:56:01.613339 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f22dd652-56fe-432d-a66f-806586c1c352-ceph\") pod \"f22dd652-56fe-432d-a66f-806586c1c352\" (UID: \"f22dd652-56fe-432d-a66f-806586c1c352\") " Nov 25 07:56:01 crc kubenswrapper[5043]: I1125 07:56:01.613428 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lssdz\" (UniqueName: \"kubernetes.io/projected/f22dd652-56fe-432d-a66f-806586c1c352-kube-api-access-lssdz\") pod \"f22dd652-56fe-432d-a66f-806586c1c352\" (UID: \"f22dd652-56fe-432d-a66f-806586c1c352\") " Nov 25 07:56:01 crc kubenswrapper[5043]: I1125 07:56:01.613500 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f22dd652-56fe-432d-a66f-806586c1c352-ssh-key\") pod \"f22dd652-56fe-432d-a66f-806586c1c352\" (UID: \"f22dd652-56fe-432d-a66f-806586c1c352\") " Nov 25 07:56:01 crc kubenswrapper[5043]: I1125 07:56:01.613591 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f22dd652-56fe-432d-a66f-806586c1c352-inventory\") pod \"f22dd652-56fe-432d-a66f-806586c1c352\" (UID: \"f22dd652-56fe-432d-a66f-806586c1c352\") " Nov 25 07:56:01 crc kubenswrapper[5043]: I1125 07:56:01.621178 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f22dd652-56fe-432d-a66f-806586c1c352-ceph" (OuterVolumeSpecName: "ceph") pod "f22dd652-56fe-432d-a66f-806586c1c352" (UID: "f22dd652-56fe-432d-a66f-806586c1c352"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:56:01 crc kubenswrapper[5043]: I1125 07:56:01.621242 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f22dd652-56fe-432d-a66f-806586c1c352-kube-api-access-lssdz" (OuterVolumeSpecName: "kube-api-access-lssdz") pod "f22dd652-56fe-432d-a66f-806586c1c352" (UID: "f22dd652-56fe-432d-a66f-806586c1c352"). InnerVolumeSpecName "kube-api-access-lssdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:56:01 crc kubenswrapper[5043]: I1125 07:56:01.648810 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f22dd652-56fe-432d-a66f-806586c1c352-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f22dd652-56fe-432d-a66f-806586c1c352" (UID: "f22dd652-56fe-432d-a66f-806586c1c352"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:56:01 crc kubenswrapper[5043]: I1125 07:56:01.649138 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f22dd652-56fe-432d-a66f-806586c1c352-inventory" (OuterVolumeSpecName: "inventory") pod "f22dd652-56fe-432d-a66f-806586c1c352" (UID: "f22dd652-56fe-432d-a66f-806586c1c352"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:56:01 crc kubenswrapper[5043]: I1125 07:56:01.715992 5043 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f22dd652-56fe-432d-a66f-806586c1c352-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 07:56:01 crc kubenswrapper[5043]: I1125 07:56:01.716036 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lssdz\" (UniqueName: \"kubernetes.io/projected/f22dd652-56fe-432d-a66f-806586c1c352-kube-api-access-lssdz\") on node \"crc\" DevicePath \"\"" Nov 25 07:56:01 crc kubenswrapper[5043]: I1125 07:56:01.716051 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f22dd652-56fe-432d-a66f-806586c1c352-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:56:01 crc kubenswrapper[5043]: I1125 07:56:01.716063 5043 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f22dd652-56fe-432d-a66f-806586c1c352-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.152699 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5" event={"ID":"f22dd652-56fe-432d-a66f-806586c1c352","Type":"ContainerDied","Data":"059df331d014a13c1d12c3ac115e00ac6421a08a5f24f60f1fae1a5a8df06647"} Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.152779 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="059df331d014a13c1d12c3ac115e00ac6421a08a5f24f60f1fae1a5a8df06647" Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.152905 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5" Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.218293 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-zp24v"] Nov 25 07:56:02 crc kubenswrapper[5043]: E1125 07:56:02.218669 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f22dd652-56fe-432d-a66f-806586c1c352" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.218695 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="f22dd652-56fe-432d-a66f-806586c1c352" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.218989 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="f22dd652-56fe-432d-a66f-806586c1c352" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.219708 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zp24v" Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.223152 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.223220 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.223337 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.224574 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.224685 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2" Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.225269 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-zp24v"] Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.325499 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/218f287e-331e-49cc-8099-2791fb43a2ac-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zp24v\" (UID: \"218f287e-331e-49cc-8099-2791fb43a2ac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zp24v" Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.325565 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/218f287e-331e-49cc-8099-2791fb43a2ac-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zp24v\" (UID: \"218f287e-331e-49cc-8099-2791fb43a2ac\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zp24v" Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.325596 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/218f287e-331e-49cc-8099-2791fb43a2ac-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zp24v\" (UID: \"218f287e-331e-49cc-8099-2791fb43a2ac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zp24v" Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.325945 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf6l8\" (UniqueName: \"kubernetes.io/projected/218f287e-331e-49cc-8099-2791fb43a2ac-kube-api-access-mf6l8\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zp24v\" (UID: \"218f287e-331e-49cc-8099-2791fb43a2ac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zp24v" Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.427577 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/218f287e-331e-49cc-8099-2791fb43a2ac-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zp24v\" (UID: \"218f287e-331e-49cc-8099-2791fb43a2ac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zp24v" Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.427674 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/218f287e-331e-49cc-8099-2791fb43a2ac-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zp24v\" (UID: \"218f287e-331e-49cc-8099-2791fb43a2ac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zp24v" Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.427709 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/218f287e-331e-49cc-8099-2791fb43a2ac-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zp24v\" (UID: \"218f287e-331e-49cc-8099-2791fb43a2ac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zp24v" Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.427836 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf6l8\" (UniqueName: \"kubernetes.io/projected/218f287e-331e-49cc-8099-2791fb43a2ac-kube-api-access-mf6l8\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zp24v\" (UID: \"218f287e-331e-49cc-8099-2791fb43a2ac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zp24v" Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.432572 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/218f287e-331e-49cc-8099-2791fb43a2ac-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zp24v\" (UID: \"218f287e-331e-49cc-8099-2791fb43a2ac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zp24v" Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.433521 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/218f287e-331e-49cc-8099-2791fb43a2ac-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zp24v\" (UID: \"218f287e-331e-49cc-8099-2791fb43a2ac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zp24v" Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.433933 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/218f287e-331e-49cc-8099-2791fb43a2ac-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zp24v\" (UID: \"218f287e-331e-49cc-8099-2791fb43a2ac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zp24v" Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.459057 
5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf6l8\" (UniqueName: \"kubernetes.io/projected/218f287e-331e-49cc-8099-2791fb43a2ac-kube-api-access-mf6l8\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zp24v\" (UID: \"218f287e-331e-49cc-8099-2791fb43a2ac\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zp24v" Nov 25 07:56:02 crc kubenswrapper[5043]: I1125 07:56:02.544046 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zp24v" Nov 25 07:56:03 crc kubenswrapper[5043]: I1125 07:56:03.090972 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-zp24v"] Nov 25 07:56:03 crc kubenswrapper[5043]: I1125 07:56:03.162153 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zp24v" event={"ID":"218f287e-331e-49cc-8099-2791fb43a2ac","Type":"ContainerStarted","Data":"52a55c3a99383de657421c2e2f48d99a93b01f3bf6bc5cab45e81ee1d241d4d5"} Nov 25 07:56:04 crc kubenswrapper[5043]: I1125 07:56:04.173102 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zp24v" event={"ID":"218f287e-331e-49cc-8099-2791fb43a2ac","Type":"ContainerStarted","Data":"be9b1b4d4ed6c3747a83efb11b627943e883d6161c8207d4169fcddafc6c78ec"} Nov 25 07:56:04 crc kubenswrapper[5043]: I1125 07:56:04.205327 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zp24v" podStartSLOduration=1.643050128 podStartE2EDuration="2.20529976s" podCreationTimestamp="2025-11-25 07:56:02 +0000 UTC" firstStartedPulling="2025-11-25 07:56:03.095872419 +0000 UTC m=+2427.264068140" lastFinishedPulling="2025-11-25 07:56:03.658122041 +0000 UTC m=+2427.826317772" observedRunningTime="2025-11-25 07:56:04.194992003 +0000 UTC 
m=+2428.363187754" watchObservedRunningTime="2025-11-25 07:56:04.20529976 +0000 UTC m=+2428.373495511" Nov 25 07:56:04 crc kubenswrapper[5043]: I1125 07:56:04.963233 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590" Nov 25 07:56:04 crc kubenswrapper[5043]: E1125 07:56:04.963772 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:56:16 crc kubenswrapper[5043]: I1125 07:56:16.969512 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590" Nov 25 07:56:16 crc kubenswrapper[5043]: E1125 07:56:16.970787 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:56:28 crc kubenswrapper[5043]: I1125 07:56:28.962685 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590" Nov 25 07:56:28 crc kubenswrapper[5043]: E1125 07:56:28.963632 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:56:41 crc kubenswrapper[5043]: I1125 07:56:41.547075 5043 generic.go:334] "Generic (PLEG): container finished" podID="218f287e-331e-49cc-8099-2791fb43a2ac" containerID="be9b1b4d4ed6c3747a83efb11b627943e883d6161c8207d4169fcddafc6c78ec" exitCode=0 Nov 25 07:56:41 crc kubenswrapper[5043]: I1125 07:56:41.547208 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zp24v" event={"ID":"218f287e-331e-49cc-8099-2791fb43a2ac","Type":"ContainerDied","Data":"be9b1b4d4ed6c3747a83efb11b627943e883d6161c8207d4169fcddafc6c78ec"} Nov 25 07:56:41 crc kubenswrapper[5043]: I1125 07:56:41.963790 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590" Nov 25 07:56:41 crc kubenswrapper[5043]: E1125 07:56:41.964285 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:56:42 crc kubenswrapper[5043]: I1125 07:56:42.974693 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zp24v" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.031057 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/218f287e-331e-49cc-8099-2791fb43a2ac-ceph\") pod \"218f287e-331e-49cc-8099-2791fb43a2ac\" (UID: \"218f287e-331e-49cc-8099-2791fb43a2ac\") " Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.031234 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/218f287e-331e-49cc-8099-2791fb43a2ac-inventory\") pod \"218f287e-331e-49cc-8099-2791fb43a2ac\" (UID: \"218f287e-331e-49cc-8099-2791fb43a2ac\") " Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.031357 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf6l8\" (UniqueName: \"kubernetes.io/projected/218f287e-331e-49cc-8099-2791fb43a2ac-kube-api-access-mf6l8\") pod \"218f287e-331e-49cc-8099-2791fb43a2ac\" (UID: \"218f287e-331e-49cc-8099-2791fb43a2ac\") " Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.032175 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/218f287e-331e-49cc-8099-2791fb43a2ac-ssh-key\") pod \"218f287e-331e-49cc-8099-2791fb43a2ac\" (UID: \"218f287e-331e-49cc-8099-2791fb43a2ac\") " Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.036055 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/218f287e-331e-49cc-8099-2791fb43a2ac-ceph" (OuterVolumeSpecName: "ceph") pod "218f287e-331e-49cc-8099-2791fb43a2ac" (UID: "218f287e-331e-49cc-8099-2791fb43a2ac"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.037298 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/218f287e-331e-49cc-8099-2791fb43a2ac-kube-api-access-mf6l8" (OuterVolumeSpecName: "kube-api-access-mf6l8") pod "218f287e-331e-49cc-8099-2791fb43a2ac" (UID: "218f287e-331e-49cc-8099-2791fb43a2ac"). InnerVolumeSpecName "kube-api-access-mf6l8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.055753 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/218f287e-331e-49cc-8099-2791fb43a2ac-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "218f287e-331e-49cc-8099-2791fb43a2ac" (UID: "218f287e-331e-49cc-8099-2791fb43a2ac"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.060114 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/218f287e-331e-49cc-8099-2791fb43a2ac-inventory" (OuterVolumeSpecName: "inventory") pod "218f287e-331e-49cc-8099-2791fb43a2ac" (UID: "218f287e-331e-49cc-8099-2791fb43a2ac"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.134119 5043 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/218f287e-331e-49cc-8099-2791fb43a2ac-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.134144 5043 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/218f287e-331e-49cc-8099-2791fb43a2ac-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.134155 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf6l8\" (UniqueName: \"kubernetes.io/projected/218f287e-331e-49cc-8099-2791fb43a2ac-kube-api-access-mf6l8\") on node \"crc\" DevicePath \"\"" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.134164 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/218f287e-331e-49cc-8099-2791fb43a2ac-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.566766 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zp24v" event={"ID":"218f287e-331e-49cc-8099-2791fb43a2ac","Type":"ContainerDied","Data":"52a55c3a99383de657421c2e2f48d99a93b01f3bf6bc5cab45e81ee1d241d4d5"} Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.567244 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52a55c3a99383de657421c2e2f48d99a93b01f3bf6bc5cab45e81ee1d241d4d5" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.566831 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zp24v" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.672693 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm"] Nov 25 07:56:43 crc kubenswrapper[5043]: E1125 07:56:43.673095 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="218f287e-331e-49cc-8099-2791fb43a2ac" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.673116 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="218f287e-331e-49cc-8099-2791fb43a2ac" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.673365 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="218f287e-331e-49cc-8099-2791fb43a2ac" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.676338 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.680449 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.680648 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.680782 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.680890 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.687217 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm"] Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.690880 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.743359 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c283f059-0d72-42e3-bce6-cfdab8692e63-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm\" (UID: \"c283f059-0d72-42e3-bce6-cfdab8692e63\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.743499 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c283f059-0d72-42e3-bce6-cfdab8692e63-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm\" (UID: \"c283f059-0d72-42e3-bce6-cfdab8692e63\") " 
pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.743595 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c283f059-0d72-42e3-bce6-cfdab8692e63-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm\" (UID: \"c283f059-0d72-42e3-bce6-cfdab8692e63\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.743684 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ppj2\" (UniqueName: \"kubernetes.io/projected/c283f059-0d72-42e3-bce6-cfdab8692e63-kube-api-access-6ppj2\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm\" (UID: \"c283f059-0d72-42e3-bce6-cfdab8692e63\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.844852 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c283f059-0d72-42e3-bce6-cfdab8692e63-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm\" (UID: \"c283f059-0d72-42e3-bce6-cfdab8692e63\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.844921 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ppj2\" (UniqueName: \"kubernetes.io/projected/c283f059-0d72-42e3-bce6-cfdab8692e63-kube-api-access-6ppj2\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm\" (UID: \"c283f059-0d72-42e3-bce6-cfdab8692e63\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.845365 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/c283f059-0d72-42e3-bce6-cfdab8692e63-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm\" (UID: \"c283f059-0d72-42e3-bce6-cfdab8692e63\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.846074 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c283f059-0d72-42e3-bce6-cfdab8692e63-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm\" (UID: \"c283f059-0d72-42e3-bce6-cfdab8692e63\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.851348 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c283f059-0d72-42e3-bce6-cfdab8692e63-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm\" (UID: \"c283f059-0d72-42e3-bce6-cfdab8692e63\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.857557 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c283f059-0d72-42e3-bce6-cfdab8692e63-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm\" (UID: \"c283f059-0d72-42e3-bce6-cfdab8692e63\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm" Nov 25 07:56:43 crc kubenswrapper[5043]: I1125 07:56:43.862310 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c283f059-0d72-42e3-bce6-cfdab8692e63-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm\" (UID: \"c283f059-0d72-42e3-bce6-cfdab8692e63\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm" Nov 25 07:56:43 crc kubenswrapper[5043]: 
I1125 07:56:43.867939 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ppj2\" (UniqueName: \"kubernetes.io/projected/c283f059-0d72-42e3-bce6-cfdab8692e63-kube-api-access-6ppj2\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm\" (UID: \"c283f059-0d72-42e3-bce6-cfdab8692e63\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm" Nov 25 07:56:44 crc kubenswrapper[5043]: I1125 07:56:44.000784 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm" Nov 25 07:56:44 crc kubenswrapper[5043]: I1125 07:56:44.491821 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm"] Nov 25 07:56:44 crc kubenswrapper[5043]: I1125 07:56:44.583210 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm" event={"ID":"c283f059-0d72-42e3-bce6-cfdab8692e63","Type":"ContainerStarted","Data":"6648c7db6da04220963c024ca855a526853afc84f9af959e6c3acae93b7ca8f6"} Nov 25 07:56:45 crc kubenswrapper[5043]: I1125 07:56:45.595163 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm" event={"ID":"c283f059-0d72-42e3-bce6-cfdab8692e63","Type":"ContainerStarted","Data":"dd22f1ceb3383176df52ddc9a7038a2d98a54cad4e83fc687fdeea1850a71a4e"} Nov 25 07:56:45 crc kubenswrapper[5043]: I1125 07:56:45.622566 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm" podStartSLOduration=2.116849907 podStartE2EDuration="2.622544911s" podCreationTimestamp="2025-11-25 07:56:43 +0000 UTC" firstStartedPulling="2025-11-25 07:56:44.506439301 +0000 UTC m=+2468.674635022" lastFinishedPulling="2025-11-25 07:56:45.012134295 +0000 UTC m=+2469.180330026" 
observedRunningTime="2025-11-25 07:56:45.617854645 +0000 UTC m=+2469.786050416" watchObservedRunningTime="2025-11-25 07:56:45.622544911 +0000 UTC m=+2469.790740642" Nov 25 07:56:49 crc kubenswrapper[5043]: I1125 07:56:49.631515 5043 generic.go:334] "Generic (PLEG): container finished" podID="c283f059-0d72-42e3-bce6-cfdab8692e63" containerID="dd22f1ceb3383176df52ddc9a7038a2d98a54cad4e83fc687fdeea1850a71a4e" exitCode=0 Nov 25 07:56:49 crc kubenswrapper[5043]: I1125 07:56:49.631599 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm" event={"ID":"c283f059-0d72-42e3-bce6-cfdab8692e63","Type":"ContainerDied","Data":"dd22f1ceb3383176df52ddc9a7038a2d98a54cad4e83fc687fdeea1850a71a4e"} Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.028781 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.097544 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c283f059-0d72-42e3-bce6-cfdab8692e63-inventory\") pod \"c283f059-0d72-42e3-bce6-cfdab8692e63\" (UID: \"c283f059-0d72-42e3-bce6-cfdab8692e63\") " Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.097576 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c283f059-0d72-42e3-bce6-cfdab8692e63-ssh-key\") pod \"c283f059-0d72-42e3-bce6-cfdab8692e63\" (UID: \"c283f059-0d72-42e3-bce6-cfdab8692e63\") " Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.097640 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c283f059-0d72-42e3-bce6-cfdab8692e63-ceph\") pod \"c283f059-0d72-42e3-bce6-cfdab8692e63\" (UID: \"c283f059-0d72-42e3-bce6-cfdab8692e63\") " Nov 25 07:56:51 
crc kubenswrapper[5043]: I1125 07:56:51.097686 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ppj2\" (UniqueName: \"kubernetes.io/projected/c283f059-0d72-42e3-bce6-cfdab8692e63-kube-api-access-6ppj2\") pod \"c283f059-0d72-42e3-bce6-cfdab8692e63\" (UID: \"c283f059-0d72-42e3-bce6-cfdab8692e63\") " Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.102985 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c283f059-0d72-42e3-bce6-cfdab8692e63-ceph" (OuterVolumeSpecName: "ceph") pod "c283f059-0d72-42e3-bce6-cfdab8692e63" (UID: "c283f059-0d72-42e3-bce6-cfdab8692e63"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.103136 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c283f059-0d72-42e3-bce6-cfdab8692e63-kube-api-access-6ppj2" (OuterVolumeSpecName: "kube-api-access-6ppj2") pod "c283f059-0d72-42e3-bce6-cfdab8692e63" (UID: "c283f059-0d72-42e3-bce6-cfdab8692e63"). InnerVolumeSpecName "kube-api-access-6ppj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.121899 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c283f059-0d72-42e3-bce6-cfdab8692e63-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c283f059-0d72-42e3-bce6-cfdab8692e63" (UID: "c283f059-0d72-42e3-bce6-cfdab8692e63"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.124251 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c283f059-0d72-42e3-bce6-cfdab8692e63-inventory" (OuterVolumeSpecName: "inventory") pod "c283f059-0d72-42e3-bce6-cfdab8692e63" (UID: "c283f059-0d72-42e3-bce6-cfdab8692e63"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.199578 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ppj2\" (UniqueName: \"kubernetes.io/projected/c283f059-0d72-42e3-bce6-cfdab8692e63-kube-api-access-6ppj2\") on node \"crc\" DevicePath \"\"" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.199629 5043 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c283f059-0d72-42e3-bce6-cfdab8692e63-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.199638 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c283f059-0d72-42e3-bce6-cfdab8692e63-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.199647 5043 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c283f059-0d72-42e3-bce6-cfdab8692e63-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.653137 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm" event={"ID":"c283f059-0d72-42e3-bce6-cfdab8692e63","Type":"ContainerDied","Data":"6648c7db6da04220963c024ca855a526853afc84f9af959e6c3acae93b7ca8f6"} Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.653209 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6648c7db6da04220963c024ca855a526853afc84f9af959e6c3acae93b7ca8f6" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.653479 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.749971 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v49d9"] Nov 25 07:56:51 crc kubenswrapper[5043]: E1125 07:56:51.750492 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c283f059-0d72-42e3-bce6-cfdab8692e63" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.750517 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="c283f059-0d72-42e3-bce6-cfdab8692e63" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.750749 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="c283f059-0d72-42e3-bce6-cfdab8692e63" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.751478 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v49d9" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.753467 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.755937 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.756003 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.756339 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.756534 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.767989 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v49d9"] Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.809486 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/067ff64a-f49c-4ca8-8c50-f49e2886a445-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v49d9\" (UID: \"067ff64a-f49c-4ca8-8c50-f49e2886a445\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v49d9" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.809527 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkpbk\" (UniqueName: \"kubernetes.io/projected/067ff64a-f49c-4ca8-8c50-f49e2886a445-kube-api-access-wkpbk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v49d9\" (UID: 
\"067ff64a-f49c-4ca8-8c50-f49e2886a445\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v49d9" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.809831 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/067ff64a-f49c-4ca8-8c50-f49e2886a445-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v49d9\" (UID: \"067ff64a-f49c-4ca8-8c50-f49e2886a445\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v49d9" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.809940 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/067ff64a-f49c-4ca8-8c50-f49e2886a445-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v49d9\" (UID: \"067ff64a-f49c-4ca8-8c50-f49e2886a445\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v49d9" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.911677 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/067ff64a-f49c-4ca8-8c50-f49e2886a445-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v49d9\" (UID: \"067ff64a-f49c-4ca8-8c50-f49e2886a445\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v49d9" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.911782 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkpbk\" (UniqueName: \"kubernetes.io/projected/067ff64a-f49c-4ca8-8c50-f49e2886a445-kube-api-access-wkpbk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v49d9\" (UID: \"067ff64a-f49c-4ca8-8c50-f49e2886a445\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v49d9" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.911992 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/067ff64a-f49c-4ca8-8c50-f49e2886a445-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v49d9\" (UID: \"067ff64a-f49c-4ca8-8c50-f49e2886a445\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v49d9" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.912095 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/067ff64a-f49c-4ca8-8c50-f49e2886a445-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v49d9\" (UID: \"067ff64a-f49c-4ca8-8c50-f49e2886a445\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v49d9" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.915843 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/067ff64a-f49c-4ca8-8c50-f49e2886a445-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v49d9\" (UID: \"067ff64a-f49c-4ca8-8c50-f49e2886a445\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v49d9" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.915976 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/067ff64a-f49c-4ca8-8c50-f49e2886a445-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v49d9\" (UID: \"067ff64a-f49c-4ca8-8c50-f49e2886a445\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v49d9" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.925331 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/067ff64a-f49c-4ca8-8c50-f49e2886a445-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v49d9\" (UID: \"067ff64a-f49c-4ca8-8c50-f49e2886a445\") " 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v49d9" Nov 25 07:56:51 crc kubenswrapper[5043]: I1125 07:56:51.928347 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkpbk\" (UniqueName: \"kubernetes.io/projected/067ff64a-f49c-4ca8-8c50-f49e2886a445-kube-api-access-wkpbk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-v49d9\" (UID: \"067ff64a-f49c-4ca8-8c50-f49e2886a445\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v49d9" Nov 25 07:56:52 crc kubenswrapper[5043]: I1125 07:56:52.070262 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v49d9" Nov 25 07:56:52 crc kubenswrapper[5043]: I1125 07:56:52.575232 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v49d9"] Nov 25 07:56:52 crc kubenswrapper[5043]: I1125 07:56:52.671342 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v49d9" event={"ID":"067ff64a-f49c-4ca8-8c50-f49e2886a445","Type":"ContainerStarted","Data":"ced9ea06cb04766017c89026ed8f31dc68d90906080769d0397ef474997a4e74"} Nov 25 07:56:53 crc kubenswrapper[5043]: I1125 07:56:53.689210 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v49d9" event={"ID":"067ff64a-f49c-4ca8-8c50-f49e2886a445","Type":"ContainerStarted","Data":"ed8c57bb961867e2b55b1e99644507cc8cca6690e2e730a753cd39ce5ae01c53"} Nov 25 07:56:53 crc kubenswrapper[5043]: I1125 07:56:53.962969 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590" Nov 25 07:56:53 crc kubenswrapper[5043]: E1125 07:56:53.963200 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:57:06 crc kubenswrapper[5043]: I1125 07:57:06.971634 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590" Nov 25 07:57:06 crc kubenswrapper[5043]: E1125 07:57:06.972882 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:57:19 crc kubenswrapper[5043]: I1125 07:57:19.582400 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v49d9" podStartSLOduration=28.16771377 podStartE2EDuration="28.582363009s" podCreationTimestamp="2025-11-25 07:56:51 +0000 UTC" firstStartedPulling="2025-11-25 07:56:52.579553023 +0000 UTC m=+2476.747748744" lastFinishedPulling="2025-11-25 07:56:52.994202262 +0000 UTC m=+2477.162397983" observedRunningTime="2025-11-25 07:56:53.711340105 +0000 UTC m=+2477.879535826" watchObservedRunningTime="2025-11-25 07:57:19.582363009 +0000 UTC m=+2503.750558770" Nov 25 07:57:19 crc kubenswrapper[5043]: I1125 07:57:19.599431 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-79hh2"] Nov 25 07:57:19 crc kubenswrapper[5043]: I1125 07:57:19.601789 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-79hh2" Nov 25 07:57:19 crc kubenswrapper[5043]: I1125 07:57:19.613215 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-79hh2"] Nov 25 07:57:19 crc kubenswrapper[5043]: I1125 07:57:19.735426 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1481916-5889-4d23-905e-8a28d309f58e-catalog-content\") pod \"certified-operators-79hh2\" (UID: \"c1481916-5889-4d23-905e-8a28d309f58e\") " pod="openshift-marketplace/certified-operators-79hh2" Nov 25 07:57:19 crc kubenswrapper[5043]: I1125 07:57:19.735548 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1481916-5889-4d23-905e-8a28d309f58e-utilities\") pod \"certified-operators-79hh2\" (UID: \"c1481916-5889-4d23-905e-8a28d309f58e\") " pod="openshift-marketplace/certified-operators-79hh2" Nov 25 07:57:19 crc kubenswrapper[5043]: I1125 07:57:19.735593 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s8nd\" (UniqueName: \"kubernetes.io/projected/c1481916-5889-4d23-905e-8a28d309f58e-kube-api-access-2s8nd\") pod \"certified-operators-79hh2\" (UID: \"c1481916-5889-4d23-905e-8a28d309f58e\") " pod="openshift-marketplace/certified-operators-79hh2" Nov 25 07:57:19 crc kubenswrapper[5043]: I1125 07:57:19.837172 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1481916-5889-4d23-905e-8a28d309f58e-catalog-content\") pod \"certified-operators-79hh2\" (UID: \"c1481916-5889-4d23-905e-8a28d309f58e\") " pod="openshift-marketplace/certified-operators-79hh2" Nov 25 07:57:19 crc kubenswrapper[5043]: I1125 07:57:19.837252 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1481916-5889-4d23-905e-8a28d309f58e-utilities\") pod \"certified-operators-79hh2\" (UID: \"c1481916-5889-4d23-905e-8a28d309f58e\") " pod="openshift-marketplace/certified-operators-79hh2" Nov 25 07:57:19 crc kubenswrapper[5043]: I1125 07:57:19.837305 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s8nd\" (UniqueName: \"kubernetes.io/projected/c1481916-5889-4d23-905e-8a28d309f58e-kube-api-access-2s8nd\") pod \"certified-operators-79hh2\" (UID: \"c1481916-5889-4d23-905e-8a28d309f58e\") " pod="openshift-marketplace/certified-operators-79hh2" Nov 25 07:57:19 crc kubenswrapper[5043]: I1125 07:57:19.838306 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1481916-5889-4d23-905e-8a28d309f58e-catalog-content\") pod \"certified-operators-79hh2\" (UID: \"c1481916-5889-4d23-905e-8a28d309f58e\") " pod="openshift-marketplace/certified-operators-79hh2" Nov 25 07:57:19 crc kubenswrapper[5043]: I1125 07:57:19.838596 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1481916-5889-4d23-905e-8a28d309f58e-utilities\") pod \"certified-operators-79hh2\" (UID: \"c1481916-5889-4d23-905e-8a28d309f58e\") " pod="openshift-marketplace/certified-operators-79hh2" Nov 25 07:57:19 crc kubenswrapper[5043]: I1125 07:57:19.859794 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s8nd\" (UniqueName: \"kubernetes.io/projected/c1481916-5889-4d23-905e-8a28d309f58e-kube-api-access-2s8nd\") pod \"certified-operators-79hh2\" (UID: \"c1481916-5889-4d23-905e-8a28d309f58e\") " pod="openshift-marketplace/certified-operators-79hh2" Nov 25 07:57:19 crc kubenswrapper[5043]: I1125 07:57:19.931672 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-79hh2" Nov 25 07:57:20 crc kubenswrapper[5043]: I1125 07:57:20.398017 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-79hh2"] Nov 25 07:57:20 crc kubenswrapper[5043]: I1125 07:57:20.941906 5043 generic.go:334] "Generic (PLEG): container finished" podID="c1481916-5889-4d23-905e-8a28d309f58e" containerID="fd6e14ec97044e3d7961c4fc03a19308ee206478e9b2601e4b1ae8def4b06d8b" exitCode=0 Nov 25 07:57:20 crc kubenswrapper[5043]: I1125 07:57:20.941964 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-79hh2" event={"ID":"c1481916-5889-4d23-905e-8a28d309f58e","Type":"ContainerDied","Data":"fd6e14ec97044e3d7961c4fc03a19308ee206478e9b2601e4b1ae8def4b06d8b"} Nov 25 07:57:20 crc kubenswrapper[5043]: I1125 07:57:20.941996 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-79hh2" event={"ID":"c1481916-5889-4d23-905e-8a28d309f58e","Type":"ContainerStarted","Data":"48ae26c310bed1be7df4861a8d200cc8a633e95342b7ec589a219ba4bb866120"} Nov 25 07:57:20 crc kubenswrapper[5043]: I1125 07:57:20.962691 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590" Nov 25 07:57:20 crc kubenswrapper[5043]: E1125 07:57:20.963001 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:57:21 crc kubenswrapper[5043]: I1125 07:57:21.952933 5043 generic.go:334] "Generic (PLEG): container finished" podID="c1481916-5889-4d23-905e-8a28d309f58e" 
containerID="89a3bde36bdd293e7ffc1b737269656a0cc82e34cb862cd64070a9e41266b07c" exitCode=0 Nov 25 07:57:21 crc kubenswrapper[5043]: I1125 07:57:21.953050 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-79hh2" event={"ID":"c1481916-5889-4d23-905e-8a28d309f58e","Type":"ContainerDied","Data":"89a3bde36bdd293e7ffc1b737269656a0cc82e34cb862cd64070a9e41266b07c"} Nov 25 07:57:22 crc kubenswrapper[5043]: I1125 07:57:22.980231 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-79hh2" event={"ID":"c1481916-5889-4d23-905e-8a28d309f58e","Type":"ContainerStarted","Data":"6cbbb4d8a6c7be5bb3ef0ec09c14c73773f616e1add4831e2cb0e701f2395a47"} Nov 25 07:57:22 crc kubenswrapper[5043]: I1125 07:57:22.993136 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-79hh2" podStartSLOduration=2.262777177 podStartE2EDuration="3.993117255s" podCreationTimestamp="2025-11-25 07:57:19 +0000 UTC" firstStartedPulling="2025-11-25 07:57:20.945396152 +0000 UTC m=+2505.113591873" lastFinishedPulling="2025-11-25 07:57:22.67573623 +0000 UTC m=+2506.843931951" observedRunningTime="2025-11-25 07:57:22.986096417 +0000 UTC m=+2507.154292138" watchObservedRunningTime="2025-11-25 07:57:22.993117255 +0000 UTC m=+2507.161312976" Nov 25 07:57:29 crc kubenswrapper[5043]: I1125 07:57:29.932299 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-79hh2" Nov 25 07:57:29 crc kubenswrapper[5043]: I1125 07:57:29.932906 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-79hh2" Nov 25 07:57:29 crc kubenswrapper[5043]: I1125 07:57:29.997219 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-79hh2" Nov 25 07:57:30 crc kubenswrapper[5043]: I1125 
07:57:30.083616 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-79hh2" Nov 25 07:57:30 crc kubenswrapper[5043]: I1125 07:57:30.238019 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-79hh2"] Nov 25 07:57:32 crc kubenswrapper[5043]: I1125 07:57:32.050332 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-79hh2" podUID="c1481916-5889-4d23-905e-8a28d309f58e" containerName="registry-server" containerID="cri-o://6cbbb4d8a6c7be5bb3ef0ec09c14c73773f616e1add4831e2cb0e701f2395a47" gracePeriod=2 Nov 25 07:57:32 crc kubenswrapper[5043]: I1125 07:57:32.493735 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-79hh2" Nov 25 07:57:32 crc kubenswrapper[5043]: I1125 07:57:32.580538 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1481916-5889-4d23-905e-8a28d309f58e-catalog-content\") pod \"c1481916-5889-4d23-905e-8a28d309f58e\" (UID: \"c1481916-5889-4d23-905e-8a28d309f58e\") " Nov 25 07:57:32 crc kubenswrapper[5043]: I1125 07:57:32.580712 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s8nd\" (UniqueName: \"kubernetes.io/projected/c1481916-5889-4d23-905e-8a28d309f58e-kube-api-access-2s8nd\") pod \"c1481916-5889-4d23-905e-8a28d309f58e\" (UID: \"c1481916-5889-4d23-905e-8a28d309f58e\") " Nov 25 07:57:32 crc kubenswrapper[5043]: I1125 07:57:32.580877 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1481916-5889-4d23-905e-8a28d309f58e-utilities\") pod \"c1481916-5889-4d23-905e-8a28d309f58e\" (UID: \"c1481916-5889-4d23-905e-8a28d309f58e\") " Nov 25 07:57:32 crc kubenswrapper[5043]: 
I1125 07:57:32.581972 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1481916-5889-4d23-905e-8a28d309f58e-utilities" (OuterVolumeSpecName: "utilities") pod "c1481916-5889-4d23-905e-8a28d309f58e" (UID: "c1481916-5889-4d23-905e-8a28d309f58e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:57:32 crc kubenswrapper[5043]: I1125 07:57:32.588972 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1481916-5889-4d23-905e-8a28d309f58e-kube-api-access-2s8nd" (OuterVolumeSpecName: "kube-api-access-2s8nd") pod "c1481916-5889-4d23-905e-8a28d309f58e" (UID: "c1481916-5889-4d23-905e-8a28d309f58e"). InnerVolumeSpecName "kube-api-access-2s8nd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:57:32 crc kubenswrapper[5043]: I1125 07:57:32.640747 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1481916-5889-4d23-905e-8a28d309f58e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1481916-5889-4d23-905e-8a28d309f58e" (UID: "c1481916-5889-4d23-905e-8a28d309f58e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 07:57:32 crc kubenswrapper[5043]: I1125 07:57:32.683685 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1481916-5889-4d23-905e-8a28d309f58e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 07:57:32 crc kubenswrapper[5043]: I1125 07:57:32.683723 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s8nd\" (UniqueName: \"kubernetes.io/projected/c1481916-5889-4d23-905e-8a28d309f58e-kube-api-access-2s8nd\") on node \"crc\" DevicePath \"\"" Nov 25 07:57:32 crc kubenswrapper[5043]: I1125 07:57:32.683866 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1481916-5889-4d23-905e-8a28d309f58e-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 07:57:32 crc kubenswrapper[5043]: I1125 07:57:32.962672 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590" Nov 25 07:57:32 crc kubenswrapper[5043]: E1125 07:57:32.963167 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:57:33 crc kubenswrapper[5043]: I1125 07:57:33.060419 5043 generic.go:334] "Generic (PLEG): container finished" podID="c1481916-5889-4d23-905e-8a28d309f58e" containerID="6cbbb4d8a6c7be5bb3ef0ec09c14c73773f616e1add4831e2cb0e701f2395a47" exitCode=0 Nov 25 07:57:33 crc kubenswrapper[5043]: I1125 07:57:33.060468 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-79hh2" 
event={"ID":"c1481916-5889-4d23-905e-8a28d309f58e","Type":"ContainerDied","Data":"6cbbb4d8a6c7be5bb3ef0ec09c14c73773f616e1add4831e2cb0e701f2395a47"} Nov 25 07:57:33 crc kubenswrapper[5043]: I1125 07:57:33.060492 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-79hh2" event={"ID":"c1481916-5889-4d23-905e-8a28d309f58e","Type":"ContainerDied","Data":"48ae26c310bed1be7df4861a8d200cc8a633e95342b7ec589a219ba4bb866120"} Nov 25 07:57:33 crc kubenswrapper[5043]: I1125 07:57:33.060518 5043 scope.go:117] "RemoveContainer" containerID="6cbbb4d8a6c7be5bb3ef0ec09c14c73773f616e1add4831e2cb0e701f2395a47" Nov 25 07:57:33 crc kubenswrapper[5043]: I1125 07:57:33.061919 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-79hh2" Nov 25 07:57:33 crc kubenswrapper[5043]: I1125 07:57:33.088018 5043 scope.go:117] "RemoveContainer" containerID="89a3bde36bdd293e7ffc1b737269656a0cc82e34cb862cd64070a9e41266b07c" Nov 25 07:57:33 crc kubenswrapper[5043]: I1125 07:57:33.102647 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-79hh2"] Nov 25 07:57:33 crc kubenswrapper[5043]: I1125 07:57:33.110704 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-79hh2"] Nov 25 07:57:33 crc kubenswrapper[5043]: I1125 07:57:33.118324 5043 scope.go:117] "RemoveContainer" containerID="fd6e14ec97044e3d7961c4fc03a19308ee206478e9b2601e4b1ae8def4b06d8b" Nov 25 07:57:33 crc kubenswrapper[5043]: I1125 07:57:33.174901 5043 scope.go:117] "RemoveContainer" containerID="6cbbb4d8a6c7be5bb3ef0ec09c14c73773f616e1add4831e2cb0e701f2395a47" Nov 25 07:57:33 crc kubenswrapper[5043]: E1125 07:57:33.175412 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cbbb4d8a6c7be5bb3ef0ec09c14c73773f616e1add4831e2cb0e701f2395a47\": container 
with ID starting with 6cbbb4d8a6c7be5bb3ef0ec09c14c73773f616e1add4831e2cb0e701f2395a47 not found: ID does not exist" containerID="6cbbb4d8a6c7be5bb3ef0ec09c14c73773f616e1add4831e2cb0e701f2395a47" Nov 25 07:57:33 crc kubenswrapper[5043]: I1125 07:57:33.175455 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cbbb4d8a6c7be5bb3ef0ec09c14c73773f616e1add4831e2cb0e701f2395a47"} err="failed to get container status \"6cbbb4d8a6c7be5bb3ef0ec09c14c73773f616e1add4831e2cb0e701f2395a47\": rpc error: code = NotFound desc = could not find container \"6cbbb4d8a6c7be5bb3ef0ec09c14c73773f616e1add4831e2cb0e701f2395a47\": container with ID starting with 6cbbb4d8a6c7be5bb3ef0ec09c14c73773f616e1add4831e2cb0e701f2395a47 not found: ID does not exist" Nov 25 07:57:33 crc kubenswrapper[5043]: I1125 07:57:33.175483 5043 scope.go:117] "RemoveContainer" containerID="89a3bde36bdd293e7ffc1b737269656a0cc82e34cb862cd64070a9e41266b07c" Nov 25 07:57:33 crc kubenswrapper[5043]: E1125 07:57:33.175897 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89a3bde36bdd293e7ffc1b737269656a0cc82e34cb862cd64070a9e41266b07c\": container with ID starting with 89a3bde36bdd293e7ffc1b737269656a0cc82e34cb862cd64070a9e41266b07c not found: ID does not exist" containerID="89a3bde36bdd293e7ffc1b737269656a0cc82e34cb862cd64070a9e41266b07c" Nov 25 07:57:33 crc kubenswrapper[5043]: I1125 07:57:33.175916 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89a3bde36bdd293e7ffc1b737269656a0cc82e34cb862cd64070a9e41266b07c"} err="failed to get container status \"89a3bde36bdd293e7ffc1b737269656a0cc82e34cb862cd64070a9e41266b07c\": rpc error: code = NotFound desc = could not find container \"89a3bde36bdd293e7ffc1b737269656a0cc82e34cb862cd64070a9e41266b07c\": container with ID starting with 89a3bde36bdd293e7ffc1b737269656a0cc82e34cb862cd64070a9e41266b07c not 
found: ID does not exist" Nov 25 07:57:33 crc kubenswrapper[5043]: I1125 07:57:33.175929 5043 scope.go:117] "RemoveContainer" containerID="fd6e14ec97044e3d7961c4fc03a19308ee206478e9b2601e4b1ae8def4b06d8b" Nov 25 07:57:33 crc kubenswrapper[5043]: E1125 07:57:33.176291 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd6e14ec97044e3d7961c4fc03a19308ee206478e9b2601e4b1ae8def4b06d8b\": container with ID starting with fd6e14ec97044e3d7961c4fc03a19308ee206478e9b2601e4b1ae8def4b06d8b not found: ID does not exist" containerID="fd6e14ec97044e3d7961c4fc03a19308ee206478e9b2601e4b1ae8def4b06d8b" Nov 25 07:57:33 crc kubenswrapper[5043]: I1125 07:57:33.176310 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd6e14ec97044e3d7961c4fc03a19308ee206478e9b2601e4b1ae8def4b06d8b"} err="failed to get container status \"fd6e14ec97044e3d7961c4fc03a19308ee206478e9b2601e4b1ae8def4b06d8b\": rpc error: code = NotFound desc = could not find container \"fd6e14ec97044e3d7961c4fc03a19308ee206478e9b2601e4b1ae8def4b06d8b\": container with ID starting with fd6e14ec97044e3d7961c4fc03a19308ee206478e9b2601e4b1ae8def4b06d8b not found: ID does not exist" Nov 25 07:57:34 crc kubenswrapper[5043]: I1125 07:57:34.975005 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1481916-5889-4d23-905e-8a28d309f58e" path="/var/lib/kubelet/pods/c1481916-5889-4d23-905e-8a28d309f58e/volumes" Nov 25 07:57:42 crc kubenswrapper[5043]: I1125 07:57:42.155599 5043 generic.go:334] "Generic (PLEG): container finished" podID="067ff64a-f49c-4ca8-8c50-f49e2886a445" containerID="ed8c57bb961867e2b55b1e99644507cc8cca6690e2e730a753cd39ce5ae01c53" exitCode=0 Nov 25 07:57:42 crc kubenswrapper[5043]: I1125 07:57:42.155763 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v49d9" 
event={"ID":"067ff64a-f49c-4ca8-8c50-f49e2886a445","Type":"ContainerDied","Data":"ed8c57bb961867e2b55b1e99644507cc8cca6690e2e730a753cd39ce5ae01c53"} Nov 25 07:57:43 crc kubenswrapper[5043]: I1125 07:57:43.546202 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v49d9" Nov 25 07:57:43 crc kubenswrapper[5043]: I1125 07:57:43.650567 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/067ff64a-f49c-4ca8-8c50-f49e2886a445-inventory\") pod \"067ff64a-f49c-4ca8-8c50-f49e2886a445\" (UID: \"067ff64a-f49c-4ca8-8c50-f49e2886a445\") " Nov 25 07:57:43 crc kubenswrapper[5043]: I1125 07:57:43.650723 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkpbk\" (UniqueName: \"kubernetes.io/projected/067ff64a-f49c-4ca8-8c50-f49e2886a445-kube-api-access-wkpbk\") pod \"067ff64a-f49c-4ca8-8c50-f49e2886a445\" (UID: \"067ff64a-f49c-4ca8-8c50-f49e2886a445\") " Nov 25 07:57:43 crc kubenswrapper[5043]: I1125 07:57:43.650809 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/067ff64a-f49c-4ca8-8c50-f49e2886a445-ssh-key\") pod \"067ff64a-f49c-4ca8-8c50-f49e2886a445\" (UID: \"067ff64a-f49c-4ca8-8c50-f49e2886a445\") " Nov 25 07:57:43 crc kubenswrapper[5043]: I1125 07:57:43.650881 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/067ff64a-f49c-4ca8-8c50-f49e2886a445-ceph\") pod \"067ff64a-f49c-4ca8-8c50-f49e2886a445\" (UID: \"067ff64a-f49c-4ca8-8c50-f49e2886a445\") " Nov 25 07:57:43 crc kubenswrapper[5043]: I1125 07:57:43.657098 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/067ff64a-f49c-4ca8-8c50-f49e2886a445-kube-api-access-wkpbk" (OuterVolumeSpecName: 
"kube-api-access-wkpbk") pod "067ff64a-f49c-4ca8-8c50-f49e2886a445" (UID: "067ff64a-f49c-4ca8-8c50-f49e2886a445"). InnerVolumeSpecName "kube-api-access-wkpbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:57:43 crc kubenswrapper[5043]: I1125 07:57:43.664128 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/067ff64a-f49c-4ca8-8c50-f49e2886a445-ceph" (OuterVolumeSpecName: "ceph") pod "067ff64a-f49c-4ca8-8c50-f49e2886a445" (UID: "067ff64a-f49c-4ca8-8c50-f49e2886a445"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:57:43 crc kubenswrapper[5043]: I1125 07:57:43.684847 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/067ff64a-f49c-4ca8-8c50-f49e2886a445-inventory" (OuterVolumeSpecName: "inventory") pod "067ff64a-f49c-4ca8-8c50-f49e2886a445" (UID: "067ff64a-f49c-4ca8-8c50-f49e2886a445"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:57:43 crc kubenswrapper[5043]: I1125 07:57:43.689478 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/067ff64a-f49c-4ca8-8c50-f49e2886a445-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "067ff64a-f49c-4ca8-8c50-f49e2886a445" (UID: "067ff64a-f49c-4ca8-8c50-f49e2886a445"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:57:43 crc kubenswrapper[5043]: I1125 07:57:43.752906 5043 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/067ff64a-f49c-4ca8-8c50-f49e2886a445-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:57:43 crc kubenswrapper[5043]: I1125 07:57:43.752951 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkpbk\" (UniqueName: \"kubernetes.io/projected/067ff64a-f49c-4ca8-8c50-f49e2886a445-kube-api-access-wkpbk\") on node \"crc\" DevicePath \"\"" Nov 25 07:57:43 crc kubenswrapper[5043]: I1125 07:57:43.752964 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/067ff64a-f49c-4ca8-8c50-f49e2886a445-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:57:43 crc kubenswrapper[5043]: I1125 07:57:43.752973 5043 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/067ff64a-f49c-4ca8-8c50-f49e2886a445-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.176115 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v49d9" event={"ID":"067ff64a-f49c-4ca8-8c50-f49e2886a445","Type":"ContainerDied","Data":"ced9ea06cb04766017c89026ed8f31dc68d90906080769d0397ef474997a4e74"} Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.176165 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ced9ea06cb04766017c89026ed8f31dc68d90906080769d0397ef474997a4e74" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.176184 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-v49d9" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.278633 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-khlgq"] Nov 25 07:57:44 crc kubenswrapper[5043]: E1125 07:57:44.279147 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067ff64a-f49c-4ca8-8c50-f49e2886a445" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.279172 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="067ff64a-f49c-4ca8-8c50-f49e2886a445" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:57:44 crc kubenswrapper[5043]: E1125 07:57:44.279199 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1481916-5889-4d23-905e-8a28d309f58e" containerName="extract-utilities" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.279209 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1481916-5889-4d23-905e-8a28d309f58e" containerName="extract-utilities" Nov 25 07:57:44 crc kubenswrapper[5043]: E1125 07:57:44.279231 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1481916-5889-4d23-905e-8a28d309f58e" containerName="extract-content" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.279240 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1481916-5889-4d23-905e-8a28d309f58e" containerName="extract-content" Nov 25 07:57:44 crc kubenswrapper[5043]: E1125 07:57:44.279262 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1481916-5889-4d23-905e-8a28d309f58e" containerName="registry-server" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.279271 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1481916-5889-4d23-905e-8a28d309f58e" containerName="registry-server" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.279488 5043 
memory_manager.go:354] "RemoveStaleState removing state" podUID="067ff64a-f49c-4ca8-8c50-f49e2886a445" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.279532 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1481916-5889-4d23-905e-8a28d309f58e" containerName="registry-server" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.280360 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-khlgq" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.282427 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.283067 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.283267 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.283291 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.283490 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.285361 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-khlgq"] Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.463525 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/63717215-03b7-4e3c-9224-004f5e3b8cfe-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-khlgq\" (UID: \"63717215-03b7-4e3c-9224-004f5e3b8cfe\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-khlgq" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.464095 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9wjk\" (UniqueName: \"kubernetes.io/projected/63717215-03b7-4e3c-9224-004f5e3b8cfe-kube-api-access-j9wjk\") pod \"ssh-known-hosts-edpm-deployment-khlgq\" (UID: \"63717215-03b7-4e3c-9224-004f5e3b8cfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-khlgq" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.464149 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63717215-03b7-4e3c-9224-004f5e3b8cfe-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-khlgq\" (UID: \"63717215-03b7-4e3c-9224-004f5e3b8cfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-khlgq" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.464184 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/63717215-03b7-4e3c-9224-004f5e3b8cfe-ceph\") pod \"ssh-known-hosts-edpm-deployment-khlgq\" (UID: \"63717215-03b7-4e3c-9224-004f5e3b8cfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-khlgq" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.565869 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9wjk\" (UniqueName: \"kubernetes.io/projected/63717215-03b7-4e3c-9224-004f5e3b8cfe-kube-api-access-j9wjk\") pod \"ssh-known-hosts-edpm-deployment-khlgq\" (UID: \"63717215-03b7-4e3c-9224-004f5e3b8cfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-khlgq" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.565951 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/63717215-03b7-4e3c-9224-004f5e3b8cfe-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-khlgq\" (UID: \"63717215-03b7-4e3c-9224-004f5e3b8cfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-khlgq" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.565987 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/63717215-03b7-4e3c-9224-004f5e3b8cfe-ceph\") pod \"ssh-known-hosts-edpm-deployment-khlgq\" (UID: \"63717215-03b7-4e3c-9224-004f5e3b8cfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-khlgq" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.566141 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/63717215-03b7-4e3c-9224-004f5e3b8cfe-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-khlgq\" (UID: \"63717215-03b7-4e3c-9224-004f5e3b8cfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-khlgq" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.570519 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63717215-03b7-4e3c-9224-004f5e3b8cfe-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-khlgq\" (UID: \"63717215-03b7-4e3c-9224-004f5e3b8cfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-khlgq" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.571042 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/63717215-03b7-4e3c-9224-004f5e3b8cfe-ceph\") pod \"ssh-known-hosts-edpm-deployment-khlgq\" (UID: \"63717215-03b7-4e3c-9224-004f5e3b8cfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-khlgq" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.572038 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: 
\"kubernetes.io/secret/63717215-03b7-4e3c-9224-004f5e3b8cfe-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-khlgq\" (UID: \"63717215-03b7-4e3c-9224-004f5e3b8cfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-khlgq" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.589401 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9wjk\" (UniqueName: \"kubernetes.io/projected/63717215-03b7-4e3c-9224-004f5e3b8cfe-kube-api-access-j9wjk\") pod \"ssh-known-hosts-edpm-deployment-khlgq\" (UID: \"63717215-03b7-4e3c-9224-004f5e3b8cfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-khlgq" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.599231 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-khlgq" Nov 25 07:57:44 crc kubenswrapper[5043]: I1125 07:57:44.963365 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590" Nov 25 07:57:44 crc kubenswrapper[5043]: E1125 07:57:44.964091 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 07:57:45 crc kubenswrapper[5043]: I1125 07:57:45.137941 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-khlgq"] Nov 25 07:57:45 crc kubenswrapper[5043]: I1125 07:57:45.186496 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-khlgq" event={"ID":"63717215-03b7-4e3c-9224-004f5e3b8cfe","Type":"ContainerStarted","Data":"3d892cb149f48455fa89e1daf3e4eb40d718464b43e9ffcb1336d1f9c9f725f5"} 
Nov 25 07:57:46 crc kubenswrapper[5043]: I1125 07:57:46.197597 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-khlgq" event={"ID":"63717215-03b7-4e3c-9224-004f5e3b8cfe","Type":"ContainerStarted","Data":"0f1ea41afc2d000c62a75010b820a042d2486b5f3e79eb41e2880636914b5892"} Nov 25 07:57:46 crc kubenswrapper[5043]: I1125 07:57:46.219570 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-khlgq" podStartSLOduration=1.8044367019999998 podStartE2EDuration="2.219551633s" podCreationTimestamp="2025-11-25 07:57:44 +0000 UTC" firstStartedPulling="2025-11-25 07:57:45.145235145 +0000 UTC m=+2529.313430866" lastFinishedPulling="2025-11-25 07:57:45.560350046 +0000 UTC m=+2529.728545797" observedRunningTime="2025-11-25 07:57:46.212650087 +0000 UTC m=+2530.380845818" watchObservedRunningTime="2025-11-25 07:57:46.219551633 +0000 UTC m=+2530.387747354" Nov 25 07:57:51 crc kubenswrapper[5043]: I1125 07:57:51.968078 5043 scope.go:117] "RemoveContainer" containerID="d0d35a4b890ecd9adfa055f93228d5ac9703a7501f3dccb86c387cfa4f1502ea" Nov 25 07:57:51 crc kubenswrapper[5043]: I1125 07:57:51.989393 5043 scope.go:117] "RemoveContainer" containerID="232c0259197a20f199d96f7fbf68935ef1d195319dd6e48f5374ee85ab5608e6" Nov 25 07:57:52 crc kubenswrapper[5043]: I1125 07:57:52.010240 5043 scope.go:117] "RemoveContainer" containerID="308cac89d402ad957e40e10ef3d21435db0941b923cbba5094e48409e923871e" Nov 25 07:57:56 crc kubenswrapper[5043]: I1125 07:57:56.306017 5043 generic.go:334] "Generic (PLEG): container finished" podID="63717215-03b7-4e3c-9224-004f5e3b8cfe" containerID="0f1ea41afc2d000c62a75010b820a042d2486b5f3e79eb41e2880636914b5892" exitCode=0 Nov 25 07:57:56 crc kubenswrapper[5043]: I1125 07:57:56.306107 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-khlgq" 
event={"ID":"63717215-03b7-4e3c-9224-004f5e3b8cfe","Type":"ContainerDied","Data":"0f1ea41afc2d000c62a75010b820a042d2486b5f3e79eb41e2880636914b5892"} Nov 25 07:57:57 crc kubenswrapper[5043]: I1125 07:57:57.679252 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-khlgq" Nov 25 07:57:57 crc kubenswrapper[5043]: I1125 07:57:57.831623 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/63717215-03b7-4e3c-9224-004f5e3b8cfe-inventory-0\") pod \"63717215-03b7-4e3c-9224-004f5e3b8cfe\" (UID: \"63717215-03b7-4e3c-9224-004f5e3b8cfe\") " Nov 25 07:57:57 crc kubenswrapper[5043]: I1125 07:57:57.831890 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/63717215-03b7-4e3c-9224-004f5e3b8cfe-ceph\") pod \"63717215-03b7-4e3c-9224-004f5e3b8cfe\" (UID: \"63717215-03b7-4e3c-9224-004f5e3b8cfe\") " Nov 25 07:57:57 crc kubenswrapper[5043]: I1125 07:57:57.832113 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63717215-03b7-4e3c-9224-004f5e3b8cfe-ssh-key-openstack-edpm-ipam\") pod \"63717215-03b7-4e3c-9224-004f5e3b8cfe\" (UID: \"63717215-03b7-4e3c-9224-004f5e3b8cfe\") " Nov 25 07:57:57 crc kubenswrapper[5043]: I1125 07:57:57.832251 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9wjk\" (UniqueName: \"kubernetes.io/projected/63717215-03b7-4e3c-9224-004f5e3b8cfe-kube-api-access-j9wjk\") pod \"63717215-03b7-4e3c-9224-004f5e3b8cfe\" (UID: \"63717215-03b7-4e3c-9224-004f5e3b8cfe\") " Nov 25 07:57:57 crc kubenswrapper[5043]: I1125 07:57:57.844264 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63717215-03b7-4e3c-9224-004f5e3b8cfe-ceph" (OuterVolumeSpecName: 
"ceph") pod "63717215-03b7-4e3c-9224-004f5e3b8cfe" (UID: "63717215-03b7-4e3c-9224-004f5e3b8cfe"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:57:57 crc kubenswrapper[5043]: I1125 07:57:57.844507 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63717215-03b7-4e3c-9224-004f5e3b8cfe-kube-api-access-j9wjk" (OuterVolumeSpecName: "kube-api-access-j9wjk") pod "63717215-03b7-4e3c-9224-004f5e3b8cfe" (UID: "63717215-03b7-4e3c-9224-004f5e3b8cfe"). InnerVolumeSpecName "kube-api-access-j9wjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:57:57 crc kubenswrapper[5043]: I1125 07:57:57.855824 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63717215-03b7-4e3c-9224-004f5e3b8cfe-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "63717215-03b7-4e3c-9224-004f5e3b8cfe" (UID: "63717215-03b7-4e3c-9224-004f5e3b8cfe"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:57:57 crc kubenswrapper[5043]: I1125 07:57:57.875996 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63717215-03b7-4e3c-9224-004f5e3b8cfe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "63717215-03b7-4e3c-9224-004f5e3b8cfe" (UID: "63717215-03b7-4e3c-9224-004f5e3b8cfe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:57:57 crc kubenswrapper[5043]: I1125 07:57:57.936831 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63717215-03b7-4e3c-9224-004f5e3b8cfe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 25 07:57:57 crc kubenswrapper[5043]: I1125 07:57:57.936876 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9wjk\" (UniqueName: \"kubernetes.io/projected/63717215-03b7-4e3c-9224-004f5e3b8cfe-kube-api-access-j9wjk\") on node \"crc\" DevicePath \"\"" Nov 25 07:57:57 crc kubenswrapper[5043]: I1125 07:57:57.936912 5043 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/63717215-03b7-4e3c-9224-004f5e3b8cfe-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 25 07:57:57 crc kubenswrapper[5043]: I1125 07:57:57.936929 5043 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/63717215-03b7-4e3c-9224-004f5e3b8cfe-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 07:57:57 crc kubenswrapper[5043]: I1125 07:57:57.962348 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590" Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.329272 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-khlgq" event={"ID":"63717215-03b7-4e3c-9224-004f5e3b8cfe","Type":"ContainerDied","Data":"3d892cb149f48455fa89e1daf3e4eb40d718464b43e9ffcb1336d1f9c9f725f5"} Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.329547 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d892cb149f48455fa89e1daf3e4eb40d718464b43e9ffcb1336d1f9c9f725f5" Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.329359 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-khlgq" Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.334432 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"f3b371f6f213dc1b8fdb5b0f0313c26428ee4292c66cb96a1f2daaf1fae57efc"} Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.429768 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd7gq"] Nov 25 07:57:58 crc kubenswrapper[5043]: E1125 07:57:58.430173 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63717215-03b7-4e3c-9224-004f5e3b8cfe" containerName="ssh-known-hosts-edpm-deployment" Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.430188 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="63717215-03b7-4e3c-9224-004f5e3b8cfe" containerName="ssh-known-hosts-edpm-deployment" Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.430354 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="63717215-03b7-4e3c-9224-004f5e3b8cfe" containerName="ssh-known-hosts-edpm-deployment" Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.431999 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd7gq" Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.436729 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.439270 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.439423 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.439531 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.440129 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2" Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.452905 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd7gq"] Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.549054 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5552c355-eb5a-4242-b79d-f9e1962c31f1-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kd7gq\" (UID: \"5552c355-eb5a-4242-b79d-f9e1962c31f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd7gq" Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.549429 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5552c355-eb5a-4242-b79d-f9e1962c31f1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kd7gq\" (UID: \"5552c355-eb5a-4242-b79d-f9e1962c31f1\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd7gq" Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.549506 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f64n8\" (UniqueName: \"kubernetes.io/projected/5552c355-eb5a-4242-b79d-f9e1962c31f1-kube-api-access-f64n8\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kd7gq\" (UID: \"5552c355-eb5a-4242-b79d-f9e1962c31f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd7gq" Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.549579 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5552c355-eb5a-4242-b79d-f9e1962c31f1-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kd7gq\" (UID: \"5552c355-eb5a-4242-b79d-f9e1962c31f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd7gq" Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.650968 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f64n8\" (UniqueName: \"kubernetes.io/projected/5552c355-eb5a-4242-b79d-f9e1962c31f1-kube-api-access-f64n8\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kd7gq\" (UID: \"5552c355-eb5a-4242-b79d-f9e1962c31f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd7gq" Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.651048 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5552c355-eb5a-4242-b79d-f9e1962c31f1-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kd7gq\" (UID: \"5552c355-eb5a-4242-b79d-f9e1962c31f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd7gq" Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.652707 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/5552c355-eb5a-4242-b79d-f9e1962c31f1-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kd7gq\" (UID: \"5552c355-eb5a-4242-b79d-f9e1962c31f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd7gq" Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.652755 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5552c355-eb5a-4242-b79d-f9e1962c31f1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kd7gq\" (UID: \"5552c355-eb5a-4242-b79d-f9e1962c31f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd7gq" Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.659503 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5552c355-eb5a-4242-b79d-f9e1962c31f1-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kd7gq\" (UID: \"5552c355-eb5a-4242-b79d-f9e1962c31f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd7gq" Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.666520 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5552c355-eb5a-4242-b79d-f9e1962c31f1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kd7gq\" (UID: \"5552c355-eb5a-4242-b79d-f9e1962c31f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd7gq" Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.676189 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5552c355-eb5a-4242-b79d-f9e1962c31f1-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kd7gq\" (UID: \"5552c355-eb5a-4242-b79d-f9e1962c31f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd7gq" Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.677140 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-f64n8\" (UniqueName: \"kubernetes.io/projected/5552c355-eb5a-4242-b79d-f9e1962c31f1-kube-api-access-f64n8\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kd7gq\" (UID: \"5552c355-eb5a-4242-b79d-f9e1962c31f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd7gq" Nov 25 07:57:58 crc kubenswrapper[5043]: I1125 07:57:58.766016 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd7gq" Nov 25 07:57:59 crc kubenswrapper[5043]: I1125 07:57:59.271696 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd7gq"] Nov 25 07:57:59 crc kubenswrapper[5043]: W1125 07:57:59.275800 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5552c355_eb5a_4242_b79d_f9e1962c31f1.slice/crio-c5590af2a3b0ac86c06190169c453aa1e6da3ac21c31dbd5e93bdfe2f578f3cb WatchSource:0}: Error finding container c5590af2a3b0ac86c06190169c453aa1e6da3ac21c31dbd5e93bdfe2f578f3cb: Status 404 returned error can't find the container with id c5590af2a3b0ac86c06190169c453aa1e6da3ac21c31dbd5e93bdfe2f578f3cb Nov 25 07:57:59 crc kubenswrapper[5043]: I1125 07:57:59.348140 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd7gq" event={"ID":"5552c355-eb5a-4242-b79d-f9e1962c31f1","Type":"ContainerStarted","Data":"c5590af2a3b0ac86c06190169c453aa1e6da3ac21c31dbd5e93bdfe2f578f3cb"} Nov 25 07:58:00 crc kubenswrapper[5043]: I1125 07:58:00.359670 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd7gq" event={"ID":"5552c355-eb5a-4242-b79d-f9e1962c31f1","Type":"ContainerStarted","Data":"dea7456d10a644c9548e2cfaa9cb8a915cea9c266ba58a703adac2f49e11fa94"} Nov 25 07:58:00 crc kubenswrapper[5043]: I1125 07:58:00.391065 5043 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd7gq" podStartSLOduration=1.9329931409999999 podStartE2EDuration="2.391037405s" podCreationTimestamp="2025-11-25 07:57:58 +0000 UTC" firstStartedPulling="2025-11-25 07:57:59.278167982 +0000 UTC m=+2543.446363703" lastFinishedPulling="2025-11-25 07:57:59.736212236 +0000 UTC m=+2543.904407967" observedRunningTime="2025-11-25 07:58:00.383777721 +0000 UTC m=+2544.551973442" watchObservedRunningTime="2025-11-25 07:58:00.391037405 +0000 UTC m=+2544.559233166" Nov 25 07:58:08 crc kubenswrapper[5043]: I1125 07:58:08.459722 5043 generic.go:334] "Generic (PLEG): container finished" podID="5552c355-eb5a-4242-b79d-f9e1962c31f1" containerID="dea7456d10a644c9548e2cfaa9cb8a915cea9c266ba58a703adac2f49e11fa94" exitCode=0 Nov 25 07:58:08 crc kubenswrapper[5043]: I1125 07:58:08.459825 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd7gq" event={"ID":"5552c355-eb5a-4242-b79d-f9e1962c31f1","Type":"ContainerDied","Data":"dea7456d10a644c9548e2cfaa9cb8a915cea9c266ba58a703adac2f49e11fa94"} Nov 25 07:58:09 crc kubenswrapper[5043]: I1125 07:58:09.925490 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd7gq" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.093127 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f64n8\" (UniqueName: \"kubernetes.io/projected/5552c355-eb5a-4242-b79d-f9e1962c31f1-kube-api-access-f64n8\") pod \"5552c355-eb5a-4242-b79d-f9e1962c31f1\" (UID: \"5552c355-eb5a-4242-b79d-f9e1962c31f1\") " Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.093569 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5552c355-eb5a-4242-b79d-f9e1962c31f1-ceph\") pod \"5552c355-eb5a-4242-b79d-f9e1962c31f1\" (UID: \"5552c355-eb5a-4242-b79d-f9e1962c31f1\") " Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.094651 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5552c355-eb5a-4242-b79d-f9e1962c31f1-inventory\") pod \"5552c355-eb5a-4242-b79d-f9e1962c31f1\" (UID: \"5552c355-eb5a-4242-b79d-f9e1962c31f1\") " Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.094843 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5552c355-eb5a-4242-b79d-f9e1962c31f1-ssh-key\") pod \"5552c355-eb5a-4242-b79d-f9e1962c31f1\" (UID: \"5552c355-eb5a-4242-b79d-f9e1962c31f1\") " Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.099955 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5552c355-eb5a-4242-b79d-f9e1962c31f1-kube-api-access-f64n8" (OuterVolumeSpecName: "kube-api-access-f64n8") pod "5552c355-eb5a-4242-b79d-f9e1962c31f1" (UID: "5552c355-eb5a-4242-b79d-f9e1962c31f1"). InnerVolumeSpecName "kube-api-access-f64n8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.100796 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5552c355-eb5a-4242-b79d-f9e1962c31f1-ceph" (OuterVolumeSpecName: "ceph") pod "5552c355-eb5a-4242-b79d-f9e1962c31f1" (UID: "5552c355-eb5a-4242-b79d-f9e1962c31f1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.127332 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5552c355-eb5a-4242-b79d-f9e1962c31f1-inventory" (OuterVolumeSpecName: "inventory") pod "5552c355-eb5a-4242-b79d-f9e1962c31f1" (UID: "5552c355-eb5a-4242-b79d-f9e1962c31f1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.130474 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5552c355-eb5a-4242-b79d-f9e1962c31f1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5552c355-eb5a-4242-b79d-f9e1962c31f1" (UID: "5552c355-eb5a-4242-b79d-f9e1962c31f1"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.197679 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f64n8\" (UniqueName: \"kubernetes.io/projected/5552c355-eb5a-4242-b79d-f9e1962c31f1-kube-api-access-f64n8\") on node \"crc\" DevicePath \"\"" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.197722 5043 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5552c355-eb5a-4242-b79d-f9e1962c31f1-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.197742 5043 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5552c355-eb5a-4242-b79d-f9e1962c31f1-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.197763 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5552c355-eb5a-4242-b79d-f9e1962c31f1-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.481946 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd7gq" event={"ID":"5552c355-eb5a-4242-b79d-f9e1962c31f1","Type":"ContainerDied","Data":"c5590af2a3b0ac86c06190169c453aa1e6da3ac21c31dbd5e93bdfe2f578f3cb"} Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.482002 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5590af2a3b0ac86c06190169c453aa1e6da3ac21c31dbd5e93bdfe2f578f3cb" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.482012 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd7gq" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.580698 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq"] Nov 25 07:58:10 crc kubenswrapper[5043]: E1125 07:58:10.581368 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5552c355-eb5a-4242-b79d-f9e1962c31f1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.581393 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="5552c355-eb5a-4242-b79d-f9e1962c31f1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.581717 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="5552c355-eb5a-4242-b79d-f9e1962c31f1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.583889 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.587042 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.587312 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.587531 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.587755 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.587999 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.604680 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq"] Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.706912 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dd890a18-8d10-41bb-bd31-5e10dc9c3752-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq\" (UID: \"dd890a18-8d10-41bb-bd31-5e10dc9c3752\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.707222 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dd890a18-8d10-41bb-bd31-5e10dc9c3752-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq\" (UID: \"dd890a18-8d10-41bb-bd31-5e10dc9c3752\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.707372 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wlgg\" (UniqueName: \"kubernetes.io/projected/dd890a18-8d10-41bb-bd31-5e10dc9c3752-kube-api-access-2wlgg\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq\" (UID: \"dd890a18-8d10-41bb-bd31-5e10dc9c3752\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.707409 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd890a18-8d10-41bb-bd31-5e10dc9c3752-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq\" (UID: \"dd890a18-8d10-41bb-bd31-5e10dc9c3752\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.808670 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wlgg\" (UniqueName: \"kubernetes.io/projected/dd890a18-8d10-41bb-bd31-5e10dc9c3752-kube-api-access-2wlgg\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq\" (UID: \"dd890a18-8d10-41bb-bd31-5e10dc9c3752\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.809084 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd890a18-8d10-41bb-bd31-5e10dc9c3752-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq\" (UID: \"dd890a18-8d10-41bb-bd31-5e10dc9c3752\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.809210 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ceph\" (UniqueName: \"kubernetes.io/secret/dd890a18-8d10-41bb-bd31-5e10dc9c3752-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq\" (UID: \"dd890a18-8d10-41bb-bd31-5e10dc9c3752\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.809336 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dd890a18-8d10-41bb-bd31-5e10dc9c3752-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq\" (UID: \"dd890a18-8d10-41bb-bd31-5e10dc9c3752\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.815660 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dd890a18-8d10-41bb-bd31-5e10dc9c3752-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq\" (UID: \"dd890a18-8d10-41bb-bd31-5e10dc9c3752\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.816486 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd890a18-8d10-41bb-bd31-5e10dc9c3752-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq\" (UID: \"dd890a18-8d10-41bb-bd31-5e10dc9c3752\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.817718 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dd890a18-8d10-41bb-bd31-5e10dc9c3752-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq\" (UID: \"dd890a18-8d10-41bb-bd31-5e10dc9c3752\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.832552 5043 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wlgg\" (UniqueName: \"kubernetes.io/projected/dd890a18-8d10-41bb-bd31-5e10dc9c3752-kube-api-access-2wlgg\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq\" (UID: \"dd890a18-8d10-41bb-bd31-5e10dc9c3752\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq" Nov 25 07:58:10 crc kubenswrapper[5043]: I1125 07:58:10.911426 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq" Nov 25 07:58:11 crc kubenswrapper[5043]: I1125 07:58:11.509167 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq"] Nov 25 07:58:14 crc kubenswrapper[5043]: I1125 07:58:12.511294 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq" event={"ID":"dd890a18-8d10-41bb-bd31-5e10dc9c3752","Type":"ContainerStarted","Data":"4fa3ff137c0fe9a7f5de57768ddcb9611c0c53ce7f46088efdcdc2b254dbb06a"} Nov 25 07:58:14 crc kubenswrapper[5043]: I1125 07:58:12.511639 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq" event={"ID":"dd890a18-8d10-41bb-bd31-5e10dc9c3752","Type":"ContainerStarted","Data":"a0af42aeaa85a0c3a08fc87e48a0a30c02dd069f8ea5edf7cb8fcf6f6e1c73ff"} Nov 25 07:58:14 crc kubenswrapper[5043]: I1125 07:58:12.531264 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq" podStartSLOduration=2.088077029 podStartE2EDuration="2.531246544s" podCreationTimestamp="2025-11-25 07:58:10 +0000 UTC" firstStartedPulling="2025-11-25 07:58:11.51814226 +0000 UTC m=+2555.686337991" lastFinishedPulling="2025-11-25 07:58:11.961311795 +0000 UTC m=+2556.129507506" observedRunningTime="2025-11-25 07:58:12.528479989 +0000 UTC 
m=+2556.696675770" watchObservedRunningTime="2025-11-25 07:58:12.531246544 +0000 UTC m=+2556.699442265" Nov 25 07:58:24 crc kubenswrapper[5043]: I1125 07:58:24.624182 5043 generic.go:334] "Generic (PLEG): container finished" podID="dd890a18-8d10-41bb-bd31-5e10dc9c3752" containerID="4fa3ff137c0fe9a7f5de57768ddcb9611c0c53ce7f46088efdcdc2b254dbb06a" exitCode=0 Nov 25 07:58:24 crc kubenswrapper[5043]: I1125 07:58:24.624364 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq" event={"ID":"dd890a18-8d10-41bb-bd31-5e10dc9c3752","Type":"ContainerDied","Data":"4fa3ff137c0fe9a7f5de57768ddcb9611c0c53ce7f46088efdcdc2b254dbb06a"} Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.073668 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.094690 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dd890a18-8d10-41bb-bd31-5e10dc9c3752-ssh-key\") pod \"dd890a18-8d10-41bb-bd31-5e10dc9c3752\" (UID: \"dd890a18-8d10-41bb-bd31-5e10dc9c3752\") " Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.094792 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd890a18-8d10-41bb-bd31-5e10dc9c3752-inventory\") pod \"dd890a18-8d10-41bb-bd31-5e10dc9c3752\" (UID: \"dd890a18-8d10-41bb-bd31-5e10dc9c3752\") " Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.094851 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wlgg\" (UniqueName: \"kubernetes.io/projected/dd890a18-8d10-41bb-bd31-5e10dc9c3752-kube-api-access-2wlgg\") pod \"dd890a18-8d10-41bb-bd31-5e10dc9c3752\" (UID: \"dd890a18-8d10-41bb-bd31-5e10dc9c3752\") " Nov 25 07:58:26 crc kubenswrapper[5043]: 
I1125 07:58:26.094933 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dd890a18-8d10-41bb-bd31-5e10dc9c3752-ceph\") pod \"dd890a18-8d10-41bb-bd31-5e10dc9c3752\" (UID: \"dd890a18-8d10-41bb-bd31-5e10dc9c3752\") " Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.105957 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd890a18-8d10-41bb-bd31-5e10dc9c3752-kube-api-access-2wlgg" (OuterVolumeSpecName: "kube-api-access-2wlgg") pod "dd890a18-8d10-41bb-bd31-5e10dc9c3752" (UID: "dd890a18-8d10-41bb-bd31-5e10dc9c3752"). InnerVolumeSpecName "kube-api-access-2wlgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.107575 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd890a18-8d10-41bb-bd31-5e10dc9c3752-ceph" (OuterVolumeSpecName: "ceph") pod "dd890a18-8d10-41bb-bd31-5e10dc9c3752" (UID: "dd890a18-8d10-41bb-bd31-5e10dc9c3752"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.126509 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd890a18-8d10-41bb-bd31-5e10dc9c3752-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "dd890a18-8d10-41bb-bd31-5e10dc9c3752" (UID: "dd890a18-8d10-41bb-bd31-5e10dc9c3752"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.127133 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd890a18-8d10-41bb-bd31-5e10dc9c3752-inventory" (OuterVolumeSpecName: "inventory") pod "dd890a18-8d10-41bb-bd31-5e10dc9c3752" (UID: "dd890a18-8d10-41bb-bd31-5e10dc9c3752"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.196809 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dd890a18-8d10-41bb-bd31-5e10dc9c3752-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.196833 5043 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd890a18-8d10-41bb-bd31-5e10dc9c3752-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.196843 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wlgg\" (UniqueName: \"kubernetes.io/projected/dd890a18-8d10-41bb-bd31-5e10dc9c3752-kube-api-access-2wlgg\") on node \"crc\" DevicePath \"\"" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.196853 5043 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dd890a18-8d10-41bb-bd31-5e10dc9c3752-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.644236 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq" event={"ID":"dd890a18-8d10-41bb-bd31-5e10dc9c3752","Type":"ContainerDied","Data":"a0af42aeaa85a0c3a08fc87e48a0a30c02dd069f8ea5edf7cb8fcf6f6e1c73ff"} Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.644281 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0af42aeaa85a0c3a08fc87e48a0a30c02dd069f8ea5edf7cb8fcf6f6e1c73ff" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.644341 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.781845 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t"] Nov 25 07:58:26 crc kubenswrapper[5043]: E1125 07:58:26.782680 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd890a18-8d10-41bb-bd31-5e10dc9c3752" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.782707 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd890a18-8d10-41bb-bd31-5e10dc9c3752" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.782919 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd890a18-8d10-41bb-bd31-5e10dc9c3752" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.783723 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.785883 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.786052 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.787849 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.788026 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.788221 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.788409 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.788985 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.789219 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.791851 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t"] Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.807698 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-ssh-key\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.807813 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.807844 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.807872 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.807896 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-nova-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.807934 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlpx6\" (UniqueName: \"kubernetes.io/projected/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-kube-api-access-vlpx6\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.808002 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.808042 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.808082 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.808121 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.808149 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.808175 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.808264 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: 
\"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.909954 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.910007 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.910032 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.910050 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.910077 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlpx6\" (UniqueName: \"kubernetes.io/projected/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-kube-api-access-vlpx6\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.910127 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.910160 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.910199 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.910239 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.910269 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.910300 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.910326 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.910373 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.914946 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.915019 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.915704 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.915945 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.916311 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.916373 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.916888 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.916960 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc 
kubenswrapper[5043]: I1125 07:58:26.917629 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.919500 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.919531 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.919687 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:26 crc kubenswrapper[5043]: I1125 07:58:26.931183 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlpx6\" (UniqueName: 
\"kubernetes.io/projected/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-kube-api-access-vlpx6\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:27 crc kubenswrapper[5043]: I1125 07:58:27.105025 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:58:27 crc kubenswrapper[5043]: I1125 07:58:27.677057 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t"] Nov 25 07:58:28 crc kubenswrapper[5043]: I1125 07:58:28.682482 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" event={"ID":"fc8c648b-1e0d-4b4b-b2f2-96e64441de99","Type":"ContainerStarted","Data":"47aa5fafc9e8a9268943ee5b66279db40e80781fc63bf10fc5e54b94462702c6"} Nov 25 07:58:28 crc kubenswrapper[5043]: I1125 07:58:28.683154 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" event={"ID":"fc8c648b-1e0d-4b4b-b2f2-96e64441de99","Type":"ContainerStarted","Data":"d83e6744cb7a6ee7d7ea141072557210600870fd3fd6600813b48614768fd7d0"} Nov 25 07:58:28 crc kubenswrapper[5043]: I1125 07:58:28.724914 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" podStartSLOduration=2.085084617 podStartE2EDuration="2.724899343s" podCreationTimestamp="2025-11-25 07:58:26 +0000 UTC" firstStartedPulling="2025-11-25 07:58:27.684582499 +0000 UTC m=+2571.852778240" lastFinishedPulling="2025-11-25 07:58:28.324397245 +0000 UTC m=+2572.492592966" observedRunningTime="2025-11-25 07:58:28.712081747 +0000 UTC m=+2572.880277488" watchObservedRunningTime="2025-11-25 07:58:28.724899343 +0000 UTC 
m=+2572.893095054" Nov 25 07:59:03 crc kubenswrapper[5043]: I1125 07:59:03.009420 5043 generic.go:334] "Generic (PLEG): container finished" podID="fc8c648b-1e0d-4b4b-b2f2-96e64441de99" containerID="47aa5fafc9e8a9268943ee5b66279db40e80781fc63bf10fc5e54b94462702c6" exitCode=0 Nov 25 07:59:03 crc kubenswrapper[5043]: I1125 07:59:03.009521 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" event={"ID":"fc8c648b-1e0d-4b4b-b2f2-96e64441de99","Type":"ContainerDied","Data":"47aa5fafc9e8a9268943ee5b66279db40e80781fc63bf10fc5e54b94462702c6"} Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.442827 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.612170 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-neutron-metadata-combined-ca-bundle\") pod \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.612229 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.612253 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-libvirt-combined-ca-bundle\") pod \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\" (UID: 
\"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.612270 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-ssh-key\") pod \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.612290 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlpx6\" (UniqueName: \"kubernetes.io/projected/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-kube-api-access-vlpx6\") pod \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.612332 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-inventory\") pod \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.612362 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-ovn-combined-ca-bundle\") pod \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.612411 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-ceph\") pod \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.612436 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.612463 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-bootstrap-combined-ca-bundle\") pod \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.612515 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-repo-setup-combined-ca-bundle\") pod \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.612576 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-nova-combined-ca-bundle\") pod \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.612710 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-openstack-edpm-ipam-ovn-default-certs-0\") pod \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\" (UID: \"fc8c648b-1e0d-4b4b-b2f2-96e64441de99\") " Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.620176 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-ovn-combined-ca-bundle" (OuterVolumeSpecName: 
"ovn-combined-ca-bundle") pod "fc8c648b-1e0d-4b4b-b2f2-96e64441de99" (UID: "fc8c648b-1e0d-4b4b-b2f2-96e64441de99"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.620911 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "fc8c648b-1e0d-4b4b-b2f2-96e64441de99" (UID: "fc8c648b-1e0d-4b4b-b2f2-96e64441de99"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.620979 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "fc8c648b-1e0d-4b4b-b2f2-96e64441de99" (UID: "fc8c648b-1e0d-4b4b-b2f2-96e64441de99"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.621002 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "fc8c648b-1e0d-4b4b-b2f2-96e64441de99" (UID: "fc8c648b-1e0d-4b4b-b2f2-96e64441de99"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.621161 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-kube-api-access-vlpx6" (OuterVolumeSpecName: "kube-api-access-vlpx6") pod "fc8c648b-1e0d-4b4b-b2f2-96e64441de99" (UID: "fc8c648b-1e0d-4b4b-b2f2-96e64441de99"). InnerVolumeSpecName "kube-api-access-vlpx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.621354 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "fc8c648b-1e0d-4b4b-b2f2-96e64441de99" (UID: "fc8c648b-1e0d-4b4b-b2f2-96e64441de99"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.621619 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "fc8c648b-1e0d-4b4b-b2f2-96e64441de99" (UID: "fc8c648b-1e0d-4b4b-b2f2-96e64441de99"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.622485 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "fc8c648b-1e0d-4b4b-b2f2-96e64441de99" (UID: "fc8c648b-1e0d-4b4b-b2f2-96e64441de99"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.623643 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "fc8c648b-1e0d-4b4b-b2f2-96e64441de99" (UID: "fc8c648b-1e0d-4b4b-b2f2-96e64441de99"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.623822 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "fc8c648b-1e0d-4b4b-b2f2-96e64441de99" (UID: "fc8c648b-1e0d-4b4b-b2f2-96e64441de99"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.624666 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-ceph" (OuterVolumeSpecName: "ceph") pod "fc8c648b-1e0d-4b4b-b2f2-96e64441de99" (UID: "fc8c648b-1e0d-4b4b-b2f2-96e64441de99"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.642544 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fc8c648b-1e0d-4b4b-b2f2-96e64441de99" (UID: "fc8c648b-1e0d-4b4b-b2f2-96e64441de99"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.652865 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-inventory" (OuterVolumeSpecName: "inventory") pod "fc8c648b-1e0d-4b4b-b2f2-96e64441de99" (UID: "fc8c648b-1e0d-4b4b-b2f2-96e64441de99"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.714964 5043 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.715003 5043 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.715014 5043 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.715023 5043 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.715032 5043 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.715040 5043 
reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.715055 5043 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.715074 5043 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.715085 5043 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.715096 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.715104 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlpx6\" (UniqueName: \"kubernetes.io/projected/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-kube-api-access-vlpx6\") on node \"crc\" DevicePath \"\"" Nov 25 07:59:04 crc kubenswrapper[5043]: I1125 07:59:04.715112 5043 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:59:04 crc 
kubenswrapper[5043]: I1125 07:59:04.715120 5043 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc8c648b-1e0d-4b4b-b2f2-96e64441de99-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.030419 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" event={"ID":"fc8c648b-1e0d-4b4b-b2f2-96e64441de99","Type":"ContainerDied","Data":"d83e6744cb7a6ee7d7ea141072557210600870fd3fd6600813b48614768fd7d0"} Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.030913 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d83e6744cb7a6ee7d7ea141072557210600870fd3fd6600813b48614768fd7d0" Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.030522 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t" Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.148377 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8"] Nov 25 07:59:05 crc kubenswrapper[5043]: E1125 07:59:05.148766 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc8c648b-1e0d-4b4b-b2f2-96e64441de99" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.148792 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc8c648b-1e0d-4b4b-b2f2-96e64441de99" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.149027 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc8c648b-1e0d-4b4b-b2f2-96e64441de99" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.149782 5043 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8" Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.154728 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.155198 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2" Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.155417 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.155709 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.159776 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.167949 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8"] Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.326857 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99201979-af70-4d71-8e55-23a89ab8c5ab-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8\" (UID: \"99201979-af70-4d71-8e55-23a89ab8c5ab\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8" Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.326919 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99201979-af70-4d71-8e55-23a89ab8c5ab-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8\" (UID: 
\"99201979-af70-4d71-8e55-23a89ab8c5ab\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8" Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.326966 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brd4c\" (UniqueName: \"kubernetes.io/projected/99201979-af70-4d71-8e55-23a89ab8c5ab-kube-api-access-brd4c\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8\" (UID: \"99201979-af70-4d71-8e55-23a89ab8c5ab\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8" Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.327312 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/99201979-af70-4d71-8e55-23a89ab8c5ab-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8\" (UID: \"99201979-af70-4d71-8e55-23a89ab8c5ab\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8" Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.429916 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/99201979-af70-4d71-8e55-23a89ab8c5ab-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8\" (UID: \"99201979-af70-4d71-8e55-23a89ab8c5ab\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8" Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.430071 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99201979-af70-4d71-8e55-23a89ab8c5ab-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8\" (UID: \"99201979-af70-4d71-8e55-23a89ab8c5ab\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8" Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.430134 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99201979-af70-4d71-8e55-23a89ab8c5ab-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8\" (UID: \"99201979-af70-4d71-8e55-23a89ab8c5ab\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8" Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.430188 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brd4c\" (UniqueName: \"kubernetes.io/projected/99201979-af70-4d71-8e55-23a89ab8c5ab-kube-api-access-brd4c\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8\" (UID: \"99201979-af70-4d71-8e55-23a89ab8c5ab\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8" Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.436482 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99201979-af70-4d71-8e55-23a89ab8c5ab-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8\" (UID: \"99201979-af70-4d71-8e55-23a89ab8c5ab\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8" Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.436599 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/99201979-af70-4d71-8e55-23a89ab8c5ab-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8\" (UID: \"99201979-af70-4d71-8e55-23a89ab8c5ab\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8" Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.437919 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99201979-af70-4d71-8e55-23a89ab8c5ab-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8\" (UID: \"99201979-af70-4d71-8e55-23a89ab8c5ab\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8" Nov 25 07:59:05 
crc kubenswrapper[5043]: I1125 07:59:05.452047 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brd4c\" (UniqueName: \"kubernetes.io/projected/99201979-af70-4d71-8e55-23a89ab8c5ab-kube-api-access-brd4c\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8\" (UID: \"99201979-af70-4d71-8e55-23a89ab8c5ab\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8" Nov 25 07:59:05 crc kubenswrapper[5043]: I1125 07:59:05.471291 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8" Nov 25 07:59:06 crc kubenswrapper[5043]: I1125 07:59:06.063734 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8"] Nov 25 07:59:06 crc kubenswrapper[5043]: W1125 07:59:06.067876 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99201979_af70_4d71_8e55_23a89ab8c5ab.slice/crio-30401245a085c28a7c4f2572e446fe8708443d1a939ae8cae92be16c7c6247d8 WatchSource:0}: Error finding container 30401245a085c28a7c4f2572e446fe8708443d1a939ae8cae92be16c7c6247d8: Status 404 returned error can't find the container with id 30401245a085c28a7c4f2572e446fe8708443d1a939ae8cae92be16c7c6247d8 Nov 25 07:59:07 crc kubenswrapper[5043]: I1125 07:59:07.051529 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8" event={"ID":"99201979-af70-4d71-8e55-23a89ab8c5ab","Type":"ContainerStarted","Data":"f31f54787907c4e4e00d9d4626e6f6157fb519c155d9286b908554189fcd4bc9"} Nov 25 07:59:07 crc kubenswrapper[5043]: I1125 07:59:07.051990 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8" 
event={"ID":"99201979-af70-4d71-8e55-23a89ab8c5ab","Type":"ContainerStarted","Data":"30401245a085c28a7c4f2572e446fe8708443d1a939ae8cae92be16c7c6247d8"} Nov 25 07:59:13 crc kubenswrapper[5043]: I1125 07:59:13.112496 5043 generic.go:334] "Generic (PLEG): container finished" podID="99201979-af70-4d71-8e55-23a89ab8c5ab" containerID="f31f54787907c4e4e00d9d4626e6f6157fb519c155d9286b908554189fcd4bc9" exitCode=0 Nov 25 07:59:13 crc kubenswrapper[5043]: I1125 07:59:13.112699 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8" event={"ID":"99201979-af70-4d71-8e55-23a89ab8c5ab","Type":"ContainerDied","Data":"f31f54787907c4e4e00d9d4626e6f6157fb519c155d9286b908554189fcd4bc9"} Nov 25 07:59:14 crc kubenswrapper[5043]: I1125 07:59:14.585238 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8" Nov 25 07:59:14 crc kubenswrapper[5043]: I1125 07:59:14.687757 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99201979-af70-4d71-8e55-23a89ab8c5ab-inventory\") pod \"99201979-af70-4d71-8e55-23a89ab8c5ab\" (UID: \"99201979-af70-4d71-8e55-23a89ab8c5ab\") " Nov 25 07:59:14 crc kubenswrapper[5043]: I1125 07:59:14.687821 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brd4c\" (UniqueName: \"kubernetes.io/projected/99201979-af70-4d71-8e55-23a89ab8c5ab-kube-api-access-brd4c\") pod \"99201979-af70-4d71-8e55-23a89ab8c5ab\" (UID: \"99201979-af70-4d71-8e55-23a89ab8c5ab\") " Nov 25 07:59:14 crc kubenswrapper[5043]: I1125 07:59:14.687880 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/99201979-af70-4d71-8e55-23a89ab8c5ab-ceph\") pod \"99201979-af70-4d71-8e55-23a89ab8c5ab\" (UID: \"99201979-af70-4d71-8e55-23a89ab8c5ab\") " 
Nov 25 07:59:14 crc kubenswrapper[5043]: I1125 07:59:14.688004 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99201979-af70-4d71-8e55-23a89ab8c5ab-ssh-key\") pod \"99201979-af70-4d71-8e55-23a89ab8c5ab\" (UID: \"99201979-af70-4d71-8e55-23a89ab8c5ab\") " Nov 25 07:59:14 crc kubenswrapper[5043]: I1125 07:59:14.692880 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99201979-af70-4d71-8e55-23a89ab8c5ab-ceph" (OuterVolumeSpecName: "ceph") pod "99201979-af70-4d71-8e55-23a89ab8c5ab" (UID: "99201979-af70-4d71-8e55-23a89ab8c5ab"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:59:14 crc kubenswrapper[5043]: I1125 07:59:14.694811 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99201979-af70-4d71-8e55-23a89ab8c5ab-kube-api-access-brd4c" (OuterVolumeSpecName: "kube-api-access-brd4c") pod "99201979-af70-4d71-8e55-23a89ab8c5ab" (UID: "99201979-af70-4d71-8e55-23a89ab8c5ab"). InnerVolumeSpecName "kube-api-access-brd4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 07:59:14 crc kubenswrapper[5043]: I1125 07:59:14.715833 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99201979-af70-4d71-8e55-23a89ab8c5ab-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "99201979-af70-4d71-8e55-23a89ab8c5ab" (UID: "99201979-af70-4d71-8e55-23a89ab8c5ab"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:59:14 crc kubenswrapper[5043]: I1125 07:59:14.717139 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99201979-af70-4d71-8e55-23a89ab8c5ab-inventory" (OuterVolumeSpecName: "inventory") pod "99201979-af70-4d71-8e55-23a89ab8c5ab" (UID: "99201979-af70-4d71-8e55-23a89ab8c5ab"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 07:59:14 crc kubenswrapper[5043]: I1125 07:59:14.790232 5043 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99201979-af70-4d71-8e55-23a89ab8c5ab-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 07:59:14 crc kubenswrapper[5043]: I1125 07:59:14.790267 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brd4c\" (UniqueName: \"kubernetes.io/projected/99201979-af70-4d71-8e55-23a89ab8c5ab-kube-api-access-brd4c\") on node \"crc\" DevicePath \"\"" Nov 25 07:59:14 crc kubenswrapper[5043]: I1125 07:59:14.790278 5043 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/99201979-af70-4d71-8e55-23a89ab8c5ab-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 07:59:14 crc kubenswrapper[5043]: I1125 07:59:14.790286 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99201979-af70-4d71-8e55-23a89ab8c5ab-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.132126 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8" event={"ID":"99201979-af70-4d71-8e55-23a89ab8c5ab","Type":"ContainerDied","Data":"30401245a085c28a7c4f2572e446fe8708443d1a939ae8cae92be16c7c6247d8"} Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.132172 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30401245a085c28a7c4f2572e446fe8708443d1a939ae8cae92be16c7c6247d8" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.132226 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.211756 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447"] Nov 25 07:59:15 crc kubenswrapper[5043]: E1125 07:59:15.212211 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99201979-af70-4d71-8e55-23a89ab8c5ab" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.212242 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="99201979-af70-4d71-8e55-23a89ab8c5ab" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.212541 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="99201979-af70-4d71-8e55-23a89ab8c5ab" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.213454 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.216559 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.216694 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.216831 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.216851 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.216887 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.217011 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.226412 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447"] Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.298699 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ds447\" (UID: \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.299068 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-ssh-key\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-ds447\" (UID: \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.299193 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ds447\" (UID: \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.299282 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d55dr\" (UniqueName: \"kubernetes.io/projected/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-kube-api-access-d55dr\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ds447\" (UID: \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.299330 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ds447\" (UID: \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.299379 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ds447\" (UID: \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" Nov 25 07:59:15 crc 
kubenswrapper[5043]: I1125 07:59:15.401353 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ds447\" (UID: \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.401426 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ds447\" (UID: \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.401505 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ds447\" (UID: \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.401589 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d55dr\" (UniqueName: \"kubernetes.io/projected/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-kube-api-access-d55dr\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ds447\" (UID: \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.401669 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ds447\" (UID: 
\"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.401725 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ds447\" (UID: \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.402828 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ds447\" (UID: \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.409082 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ds447\" (UID: \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.411532 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ds447\" (UID: \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.412175 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ds447\" (UID: \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.413730 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ds447\" (UID: \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.419262 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d55dr\" (UniqueName: \"kubernetes.io/projected/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-kube-api-access-d55dr\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ds447\" (UID: \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" Nov 25 07:59:15 crc kubenswrapper[5043]: I1125 07:59:15.528796 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" Nov 25 07:59:16 crc kubenswrapper[5043]: I1125 07:59:16.096453 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447"] Nov 25 07:59:16 crc kubenswrapper[5043]: I1125 07:59:16.143752 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" event={"ID":"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e","Type":"ContainerStarted","Data":"4be46041c83d048b7284ff7d4f18a3c9add1a523c1f6312511db9f94bc098beb"} Nov 25 07:59:18 crc kubenswrapper[5043]: I1125 07:59:18.161632 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" event={"ID":"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e","Type":"ContainerStarted","Data":"5f8070413367bb2d7090cdebd22633038512abb2b06ff1912741c13b24edcdbc"} Nov 25 07:59:18 crc kubenswrapper[5043]: I1125 07:59:18.187216 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" podStartSLOduration=1.754520863 podStartE2EDuration="3.187196595s" podCreationTimestamp="2025-11-25 07:59:15 +0000 UTC" firstStartedPulling="2025-11-25 07:59:16.100555146 +0000 UTC m=+2620.268750867" lastFinishedPulling="2025-11-25 07:59:17.533230878 +0000 UTC m=+2621.701426599" observedRunningTime="2025-11-25 07:59:18.18142924 +0000 UTC m=+2622.349624971" watchObservedRunningTime="2025-11-25 07:59:18.187196595 +0000 UTC m=+2622.355392316" Nov 25 08:00:00 crc kubenswrapper[5043]: I1125 08:00:00.150496 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400960-lhszw"] Nov 25 08:00:00 crc kubenswrapper[5043]: I1125 08:00:00.151969 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-lhszw" Nov 25 08:00:00 crc kubenswrapper[5043]: I1125 08:00:00.154336 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 08:00:00 crc kubenswrapper[5043]: I1125 08:00:00.154519 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 08:00:00 crc kubenswrapper[5043]: I1125 08:00:00.169472 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400960-lhszw"] Nov 25 08:00:00 crc kubenswrapper[5043]: I1125 08:00:00.227305 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n58dq\" (UniqueName: \"kubernetes.io/projected/ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a-kube-api-access-n58dq\") pod \"collect-profiles-29400960-lhszw\" (UID: \"ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-lhszw" Nov 25 08:00:00 crc kubenswrapper[5043]: I1125 08:00:00.227406 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a-config-volume\") pod \"collect-profiles-29400960-lhszw\" (UID: \"ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-lhszw" Nov 25 08:00:00 crc kubenswrapper[5043]: I1125 08:00:00.227453 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a-secret-volume\") pod \"collect-profiles-29400960-lhszw\" (UID: \"ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-lhszw" Nov 25 08:00:00 crc kubenswrapper[5043]: I1125 08:00:00.328625 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a-config-volume\") pod \"collect-profiles-29400960-lhszw\" (UID: \"ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-lhszw" Nov 25 08:00:00 crc kubenswrapper[5043]: I1125 08:00:00.328716 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a-secret-volume\") pod \"collect-profiles-29400960-lhszw\" (UID: \"ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-lhszw" Nov 25 08:00:00 crc kubenswrapper[5043]: I1125 08:00:00.328788 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n58dq\" (UniqueName: \"kubernetes.io/projected/ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a-kube-api-access-n58dq\") pod \"collect-profiles-29400960-lhszw\" (UID: \"ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-lhszw" Nov 25 08:00:00 crc kubenswrapper[5043]: I1125 08:00:00.330321 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a-config-volume\") pod \"collect-profiles-29400960-lhszw\" (UID: \"ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-lhszw" Nov 25 08:00:00 crc kubenswrapper[5043]: I1125 08:00:00.351491 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a-secret-volume\") pod \"collect-profiles-29400960-lhszw\" (UID: \"ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-lhszw" Nov 25 08:00:00 crc kubenswrapper[5043]: I1125 08:00:00.355761 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n58dq\" (UniqueName: \"kubernetes.io/projected/ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a-kube-api-access-n58dq\") pod \"collect-profiles-29400960-lhszw\" (UID: \"ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-lhszw" Nov 25 08:00:00 crc kubenswrapper[5043]: I1125 08:00:00.477622 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-lhszw" Nov 25 08:00:00 crc kubenswrapper[5043]: I1125 08:00:00.947520 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400960-lhszw"] Nov 25 08:00:01 crc kubenswrapper[5043]: I1125 08:00:01.548205 5043 generic.go:334] "Generic (PLEG): container finished" podID="ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a" containerID="414d33a6246e703eeeb2cdcb402b010cc39a8bd06ee3586116c3ddcc34b6e5b1" exitCode=0 Nov 25 08:00:01 crc kubenswrapper[5043]: I1125 08:00:01.548544 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-lhszw" event={"ID":"ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a","Type":"ContainerDied","Data":"414d33a6246e703eeeb2cdcb402b010cc39a8bd06ee3586116c3ddcc34b6e5b1"} Nov 25 08:00:01 crc kubenswrapper[5043]: I1125 08:00:01.548595 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-lhszw" 
event={"ID":"ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a","Type":"ContainerStarted","Data":"ed021efd8031d7f5239dcc72d8f732ed67458676dae8748a68aeda4341af4a54"} Nov 25 08:00:02 crc kubenswrapper[5043]: I1125 08:00:02.874457 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-lhszw" Nov 25 08:00:02 crc kubenswrapper[5043]: I1125 08:00:02.976168 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n58dq\" (UniqueName: \"kubernetes.io/projected/ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a-kube-api-access-n58dq\") pod \"ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a\" (UID: \"ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a\") " Nov 25 08:00:02 crc kubenswrapper[5043]: I1125 08:00:02.976656 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a-config-volume\") pod \"ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a\" (UID: \"ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a\") " Nov 25 08:00:02 crc kubenswrapper[5043]: I1125 08:00:02.976752 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a-secret-volume\") pod \"ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a\" (UID: \"ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a\") " Nov 25 08:00:02 crc kubenswrapper[5043]: I1125 08:00:02.977392 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a-config-volume" (OuterVolumeSpecName: "config-volume") pod "ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a" (UID: "ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:00:02 crc kubenswrapper[5043]: I1125 08:00:02.977513 5043 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 08:00:02 crc kubenswrapper[5043]: I1125 08:00:02.982755 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a" (UID: "ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:00:02 crc kubenswrapper[5043]: I1125 08:00:02.982828 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a-kube-api-access-n58dq" (OuterVolumeSpecName: "kube-api-access-n58dq") pod "ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a" (UID: "ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a"). InnerVolumeSpecName "kube-api-access-n58dq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:00:03 crc kubenswrapper[5043]: I1125 08:00:03.079415 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n58dq\" (UniqueName: \"kubernetes.io/projected/ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a-kube-api-access-n58dq\") on node \"crc\" DevicePath \"\"" Nov 25 08:00:03 crc kubenswrapper[5043]: I1125 08:00:03.079691 5043 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 08:00:03 crc kubenswrapper[5043]: I1125 08:00:03.567129 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-lhszw" event={"ID":"ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a","Type":"ContainerDied","Data":"ed021efd8031d7f5239dcc72d8f732ed67458676dae8748a68aeda4341af4a54"} Nov 25 08:00:03 crc kubenswrapper[5043]: I1125 08:00:03.567176 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed021efd8031d7f5239dcc72d8f732ed67458676dae8748a68aeda4341af4a54" Nov 25 08:00:03 crc kubenswrapper[5043]: I1125 08:00:03.567586 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400960-lhszw" Nov 25 08:00:03 crc kubenswrapper[5043]: I1125 08:00:03.949852 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400915-zd9vl"] Nov 25 08:00:03 crc kubenswrapper[5043]: I1125 08:00:03.956871 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400915-zd9vl"] Nov 25 08:00:04 crc kubenswrapper[5043]: I1125 08:00:04.975512 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="576eeef9-fcf9-4db0-a0cc-4083e03277f6" path="/var/lib/kubelet/pods/576eeef9-fcf9-4db0-a0cc-4083e03277f6/volumes" Nov 25 08:00:17 crc kubenswrapper[5043]: I1125 08:00:17.276820 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:00:17 crc kubenswrapper[5043]: I1125 08:00:17.277423 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:00:38 crc kubenswrapper[5043]: I1125 08:00:38.917116 5043 generic.go:334] "Generic (PLEG): container finished" podID="cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e" containerID="5f8070413367bb2d7090cdebd22633038512abb2b06ff1912741c13b24edcdbc" exitCode=0 Nov 25 08:00:38 crc kubenswrapper[5043]: I1125 08:00:38.917199 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" 
event={"ID":"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e","Type":"ContainerDied","Data":"5f8070413367bb2d7090cdebd22633038512abb2b06ff1912741c13b24edcdbc"} Nov 25 08:00:40 crc kubenswrapper[5043]: I1125 08:00:40.336152 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" Nov 25 08:00:40 crc kubenswrapper[5043]: I1125 08:00:40.363246 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-ssh-key\") pod \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\" (UID: \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\") " Nov 25 08:00:40 crc kubenswrapper[5043]: I1125 08:00:40.363381 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-inventory\") pod \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\" (UID: \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\") " Nov 25 08:00:40 crc kubenswrapper[5043]: I1125 08:00:40.363493 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-ceph\") pod \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\" (UID: \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\") " Nov 25 08:00:40 crc kubenswrapper[5043]: I1125 08:00:40.363535 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-ovncontroller-config-0\") pod \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\" (UID: \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\") " Nov 25 08:00:40 crc kubenswrapper[5043]: I1125 08:00:40.363597 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-ovn-combined-ca-bundle\") pod \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\" (UID: \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\") " Nov 25 08:00:40 crc kubenswrapper[5043]: I1125 08:00:40.363663 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d55dr\" (UniqueName: \"kubernetes.io/projected/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-kube-api-access-d55dr\") pod \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\" (UID: \"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e\") " Nov 25 08:00:40 crc kubenswrapper[5043]: I1125 08:00:40.377290 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e" (UID: "cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:00:40 crc kubenswrapper[5043]: I1125 08:00:40.377687 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-ceph" (OuterVolumeSpecName: "ceph") pod "cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e" (UID: "cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:00:40 crc kubenswrapper[5043]: I1125 08:00:40.383699 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-kube-api-access-d55dr" (OuterVolumeSpecName: "kube-api-access-d55dr") pod "cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e" (UID: "cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e"). InnerVolumeSpecName "kube-api-access-d55dr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:00:40 crc kubenswrapper[5043]: I1125 08:00:40.397571 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e" (UID: "cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:00:40 crc kubenswrapper[5043]: I1125 08:00:40.401640 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e" (UID: "cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:00:40 crc kubenswrapper[5043]: I1125 08:00:40.413337 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-inventory" (OuterVolumeSpecName: "inventory") pod "cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e" (UID: "cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:00:40 crc kubenswrapper[5043]: I1125 08:00:40.465422 5043 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:00:40 crc kubenswrapper[5043]: I1125 08:00:40.465459 5043 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 08:00:40 crc kubenswrapper[5043]: I1125 08:00:40.465472 5043 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 08:00:40 crc kubenswrapper[5043]: I1125 08:00:40.465482 5043 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:00:40 crc kubenswrapper[5043]: I1125 08:00:40.465491 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d55dr\" (UniqueName: \"kubernetes.io/projected/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-kube-api-access-d55dr\") on node \"crc\" DevicePath \"\"" Nov 25 08:00:40 crc kubenswrapper[5043]: I1125 08:00:40.465502 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:00:40 crc kubenswrapper[5043]: I1125 08:00:40.939983 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" Nov 25 08:00:40 crc kubenswrapper[5043]: I1125 08:00:40.939979 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ds447" event={"ID":"cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e","Type":"ContainerDied","Data":"4be46041c83d048b7284ff7d4f18a3c9add1a523c1f6312511db9f94bc098beb"} Nov 25 08:00:40 crc kubenswrapper[5043]: I1125 08:00:40.940154 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4be46041c83d048b7284ff7d4f18a3c9add1a523c1f6312511db9f94bc098beb" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.034112 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz"] Nov 25 08:00:41 crc kubenswrapper[5043]: E1125 08:00:41.034480 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.034505 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 08:00:41 crc kubenswrapper[5043]: E1125 08:00:41.034528 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a" containerName="collect-profiles" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.034537 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a" containerName="collect-profiles" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.034769 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a" containerName="collect-profiles" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.034791 5043 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.035519 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.037416 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.037416 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.037683 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.038406 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.039960 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.040027 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.049068 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.049409 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz"] Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.100694 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.100900 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.101126 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wgtc\" (UniqueName: \"kubernetes.io/projected/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-kube-api-access-7wgtc\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.101236 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.101567 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.101719 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.101788 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.203737 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wgtc\" (UniqueName: \"kubernetes.io/projected/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-kube-api-access-7wgtc\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.203792 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.203835 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.203873 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.203894 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.203950 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-ssh-key\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.203993 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.207467 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.208120 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.208361 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.208672 
5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.209790 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.210128 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.223710 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wgtc\" (UniqueName: \"kubernetes.io/projected/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-kube-api-access-7wgtc\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.413576 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.958517 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz"] Nov 25 08:00:41 crc kubenswrapper[5043]: I1125 08:00:41.970183 5043 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:00:42 crc kubenswrapper[5043]: I1125 08:00:42.781841 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x5nvb"] Nov 25 08:00:42 crc kubenswrapper[5043]: I1125 08:00:42.784700 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x5nvb" Nov 25 08:00:42 crc kubenswrapper[5043]: I1125 08:00:42.789758 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x5nvb"] Nov 25 08:00:42 crc kubenswrapper[5043]: I1125 08:00:42.836183 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d61d3ae-ec39-4a3d-a0af-cc932ce8e042-utilities\") pod \"community-operators-x5nvb\" (UID: \"8d61d3ae-ec39-4a3d-a0af-cc932ce8e042\") " pod="openshift-marketplace/community-operators-x5nvb" Nov 25 08:00:42 crc kubenswrapper[5043]: I1125 08:00:42.836267 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d61d3ae-ec39-4a3d-a0af-cc932ce8e042-catalog-content\") pod \"community-operators-x5nvb\" (UID: \"8d61d3ae-ec39-4a3d-a0af-cc932ce8e042\") " pod="openshift-marketplace/community-operators-x5nvb" Nov 25 08:00:42 crc kubenswrapper[5043]: I1125 08:00:42.836344 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-g5jw4\" (UniqueName: \"kubernetes.io/projected/8d61d3ae-ec39-4a3d-a0af-cc932ce8e042-kube-api-access-g5jw4\") pod \"community-operators-x5nvb\" (UID: \"8d61d3ae-ec39-4a3d-a0af-cc932ce8e042\") " pod="openshift-marketplace/community-operators-x5nvb" Nov 25 08:00:42 crc kubenswrapper[5043]: I1125 08:00:42.937245 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d61d3ae-ec39-4a3d-a0af-cc932ce8e042-utilities\") pod \"community-operators-x5nvb\" (UID: \"8d61d3ae-ec39-4a3d-a0af-cc932ce8e042\") " pod="openshift-marketplace/community-operators-x5nvb" Nov 25 08:00:42 crc kubenswrapper[5043]: I1125 08:00:42.937325 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d61d3ae-ec39-4a3d-a0af-cc932ce8e042-catalog-content\") pod \"community-operators-x5nvb\" (UID: \"8d61d3ae-ec39-4a3d-a0af-cc932ce8e042\") " pod="openshift-marketplace/community-operators-x5nvb" Nov 25 08:00:42 crc kubenswrapper[5043]: I1125 08:00:42.937345 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5jw4\" (UniqueName: \"kubernetes.io/projected/8d61d3ae-ec39-4a3d-a0af-cc932ce8e042-kube-api-access-g5jw4\") pod \"community-operators-x5nvb\" (UID: \"8d61d3ae-ec39-4a3d-a0af-cc932ce8e042\") " pod="openshift-marketplace/community-operators-x5nvb" Nov 25 08:00:42 crc kubenswrapper[5043]: I1125 08:00:42.939261 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d61d3ae-ec39-4a3d-a0af-cc932ce8e042-utilities\") pod \"community-operators-x5nvb\" (UID: \"8d61d3ae-ec39-4a3d-a0af-cc932ce8e042\") " pod="openshift-marketplace/community-operators-x5nvb" Nov 25 08:00:42 crc kubenswrapper[5043]: I1125 08:00:42.939291 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/8d61d3ae-ec39-4a3d-a0af-cc932ce8e042-catalog-content\") pod \"community-operators-x5nvb\" (UID: \"8d61d3ae-ec39-4a3d-a0af-cc932ce8e042\") " pod="openshift-marketplace/community-operators-x5nvb" Nov 25 08:00:42 crc kubenswrapper[5043]: I1125 08:00:42.954664 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5jw4\" (UniqueName: \"kubernetes.io/projected/8d61d3ae-ec39-4a3d-a0af-cc932ce8e042-kube-api-access-g5jw4\") pod \"community-operators-x5nvb\" (UID: \"8d61d3ae-ec39-4a3d-a0af-cc932ce8e042\") " pod="openshift-marketplace/community-operators-x5nvb" Nov 25 08:00:42 crc kubenswrapper[5043]: I1125 08:00:42.979920 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" event={"ID":"3071ef74-1c72-4b4c-90e7-fee9dc8332e5","Type":"ContainerStarted","Data":"1a7aed6b7c180db6c05be24c5ac3543af05dd464b12c86cf6473dc06fcd16343"} Nov 25 08:00:42 crc kubenswrapper[5043]: I1125 08:00:42.979964 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" event={"ID":"3071ef74-1c72-4b4c-90e7-fee9dc8332e5","Type":"ContainerStarted","Data":"5c7cdad471b8ac9819091da610def16df0afa560a2bef083b531147f97ac4f27"} Nov 25 08:00:42 crc kubenswrapper[5043]: I1125 08:00:42.983253 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" podStartSLOduration=1.5073511389999998 podStartE2EDuration="1.983235462s" podCreationTimestamp="2025-11-25 08:00:41 +0000 UTC" firstStartedPulling="2025-11-25 08:00:41.969738799 +0000 UTC m=+2706.137934520" lastFinishedPulling="2025-11-25 08:00:42.445623112 +0000 UTC m=+2706.613818843" observedRunningTime="2025-11-25 08:00:42.979730898 +0000 UTC m=+2707.147926629" watchObservedRunningTime="2025-11-25 08:00:42.983235462 +0000 UTC m=+2707.151431183" Nov 25 
08:00:43 crc kubenswrapper[5043]: I1125 08:00:43.115899 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x5nvb" Nov 25 08:00:43 crc kubenswrapper[5043]: I1125 08:00:43.679897 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x5nvb"] Nov 25 08:00:43 crc kubenswrapper[5043]: I1125 08:00:43.975404 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5nvb" event={"ID":"8d61d3ae-ec39-4a3d-a0af-cc932ce8e042","Type":"ContainerStarted","Data":"a89ac784db4d8a9e71c19314e3db8524c88d8129ee39f0af65b5bdc18bcddfdf"} Nov 25 08:00:43 crc kubenswrapper[5043]: I1125 08:00:43.976045 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5nvb" event={"ID":"8d61d3ae-ec39-4a3d-a0af-cc932ce8e042","Type":"ContainerStarted","Data":"dd3a1d13978c34ba42d310cda6f1d91938518d68cba75d97c94041ebbc1684e9"} Nov 25 08:00:44 crc kubenswrapper[5043]: I1125 08:00:44.982453 5043 generic.go:334] "Generic (PLEG): container finished" podID="8d61d3ae-ec39-4a3d-a0af-cc932ce8e042" containerID="a89ac784db4d8a9e71c19314e3db8524c88d8129ee39f0af65b5bdc18bcddfdf" exitCode=0 Nov 25 08:00:44 crc kubenswrapper[5043]: I1125 08:00:44.982524 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5nvb" event={"ID":"8d61d3ae-ec39-4a3d-a0af-cc932ce8e042","Type":"ContainerDied","Data":"a89ac784db4d8a9e71c19314e3db8524c88d8129ee39f0af65b5bdc18bcddfdf"} Nov 25 08:00:47 crc kubenswrapper[5043]: I1125 08:00:46.999908 5043 generic.go:334] "Generic (PLEG): container finished" podID="8d61d3ae-ec39-4a3d-a0af-cc932ce8e042" containerID="3e3be9fda9c5d04cf6915cfaa75b3c42f6173505c394bfe9c54b702c528a34e2" exitCode=0 Nov 25 08:00:47 crc kubenswrapper[5043]: I1125 08:00:46.999993 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-x5nvb" event={"ID":"8d61d3ae-ec39-4a3d-a0af-cc932ce8e042","Type":"ContainerDied","Data":"3e3be9fda9c5d04cf6915cfaa75b3c42f6173505c394bfe9c54b702c528a34e2"} Nov 25 08:00:47 crc kubenswrapper[5043]: I1125 08:00:47.276196 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:00:47 crc kubenswrapper[5043]: I1125 08:00:47.276721 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:00:49 crc kubenswrapper[5043]: I1125 08:00:49.033530 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5nvb" event={"ID":"8d61d3ae-ec39-4a3d-a0af-cc932ce8e042","Type":"ContainerStarted","Data":"e23e23070a0ae98f82c1fe78b7852ef7514908ab76447f04426047b2a4d5f496"} Nov 25 08:00:49 crc kubenswrapper[5043]: I1125 08:00:49.067950 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x5nvb" podStartSLOduration=3.915272851 podStartE2EDuration="7.067924144s" podCreationTimestamp="2025-11-25 08:00:42 +0000 UTC" firstStartedPulling="2025-11-25 08:00:44.984093138 +0000 UTC m=+2709.152288859" lastFinishedPulling="2025-11-25 08:00:48.136744431 +0000 UTC m=+2712.304940152" observedRunningTime="2025-11-25 08:00:49.062076807 +0000 UTC m=+2713.230272538" watchObservedRunningTime="2025-11-25 08:00:49.067924144 +0000 UTC m=+2713.236119865" Nov 25 08:00:52 crc kubenswrapper[5043]: I1125 08:00:52.135682 5043 scope.go:117] 
"RemoveContainer" containerID="e51fddbfd0a9b8cb2d63a12d137aceda2471ab2b859c1b5387bf24f9a2713be1" Nov 25 08:00:53 crc kubenswrapper[5043]: I1125 08:00:53.116548 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x5nvb" Nov 25 08:00:53 crc kubenswrapper[5043]: I1125 08:00:53.116640 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x5nvb" Nov 25 08:00:53 crc kubenswrapper[5043]: I1125 08:00:53.164884 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x5nvb" Nov 25 08:00:54 crc kubenswrapper[5043]: I1125 08:00:54.137805 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x5nvb" Nov 25 08:00:54 crc kubenswrapper[5043]: I1125 08:00:54.210685 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x5nvb"] Nov 25 08:00:56 crc kubenswrapper[5043]: I1125 08:00:56.096529 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x5nvb" podUID="8d61d3ae-ec39-4a3d-a0af-cc932ce8e042" containerName="registry-server" containerID="cri-o://e23e23070a0ae98f82c1fe78b7852ef7514908ab76447f04426047b2a4d5f496" gracePeriod=2 Nov 25 08:00:56 crc kubenswrapper[5043]: I1125 08:00:56.572453 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x5nvb" Nov 25 08:00:56 crc kubenswrapper[5043]: I1125 08:00:56.713061 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5jw4\" (UniqueName: \"kubernetes.io/projected/8d61d3ae-ec39-4a3d-a0af-cc932ce8e042-kube-api-access-g5jw4\") pod \"8d61d3ae-ec39-4a3d-a0af-cc932ce8e042\" (UID: \"8d61d3ae-ec39-4a3d-a0af-cc932ce8e042\") " Nov 25 08:00:56 crc kubenswrapper[5043]: I1125 08:00:56.713210 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d61d3ae-ec39-4a3d-a0af-cc932ce8e042-catalog-content\") pod \"8d61d3ae-ec39-4a3d-a0af-cc932ce8e042\" (UID: \"8d61d3ae-ec39-4a3d-a0af-cc932ce8e042\") " Nov 25 08:00:56 crc kubenswrapper[5043]: I1125 08:00:56.713356 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d61d3ae-ec39-4a3d-a0af-cc932ce8e042-utilities\") pod \"8d61d3ae-ec39-4a3d-a0af-cc932ce8e042\" (UID: \"8d61d3ae-ec39-4a3d-a0af-cc932ce8e042\") " Nov 25 08:00:56 crc kubenswrapper[5043]: I1125 08:00:56.714362 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d61d3ae-ec39-4a3d-a0af-cc932ce8e042-utilities" (OuterVolumeSpecName: "utilities") pod "8d61d3ae-ec39-4a3d-a0af-cc932ce8e042" (UID: "8d61d3ae-ec39-4a3d-a0af-cc932ce8e042"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:00:56 crc kubenswrapper[5043]: I1125 08:00:56.722724 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d61d3ae-ec39-4a3d-a0af-cc932ce8e042-kube-api-access-g5jw4" (OuterVolumeSpecName: "kube-api-access-g5jw4") pod "8d61d3ae-ec39-4a3d-a0af-cc932ce8e042" (UID: "8d61d3ae-ec39-4a3d-a0af-cc932ce8e042"). InnerVolumeSpecName "kube-api-access-g5jw4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:00:56 crc kubenswrapper[5043]: I1125 08:00:56.767466 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d61d3ae-ec39-4a3d-a0af-cc932ce8e042-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d61d3ae-ec39-4a3d-a0af-cc932ce8e042" (UID: "8d61d3ae-ec39-4a3d-a0af-cc932ce8e042"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:00:56 crc kubenswrapper[5043]: I1125 08:00:56.815006 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d61d3ae-ec39-4a3d-a0af-cc932ce8e042-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:00:56 crc kubenswrapper[5043]: I1125 08:00:56.815040 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5jw4\" (UniqueName: \"kubernetes.io/projected/8d61d3ae-ec39-4a3d-a0af-cc932ce8e042-kube-api-access-g5jw4\") on node \"crc\" DevicePath \"\"" Nov 25 08:00:56 crc kubenswrapper[5043]: I1125 08:00:56.815051 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d61d3ae-ec39-4a3d-a0af-cc932ce8e042-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:00:57 crc kubenswrapper[5043]: I1125 08:00:57.109375 5043 generic.go:334] "Generic (PLEG): container finished" podID="8d61d3ae-ec39-4a3d-a0af-cc932ce8e042" containerID="e23e23070a0ae98f82c1fe78b7852ef7514908ab76447f04426047b2a4d5f496" exitCode=0 Nov 25 08:00:57 crc kubenswrapper[5043]: I1125 08:00:57.109421 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5nvb" event={"ID":"8d61d3ae-ec39-4a3d-a0af-cc932ce8e042","Type":"ContainerDied","Data":"e23e23070a0ae98f82c1fe78b7852ef7514908ab76447f04426047b2a4d5f496"} Nov 25 08:00:57 crc kubenswrapper[5043]: I1125 08:00:57.109454 5043 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-x5nvb" Nov 25 08:00:57 crc kubenswrapper[5043]: I1125 08:00:57.109483 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x5nvb" event={"ID":"8d61d3ae-ec39-4a3d-a0af-cc932ce8e042","Type":"ContainerDied","Data":"dd3a1d13978c34ba42d310cda6f1d91938518d68cba75d97c94041ebbc1684e9"} Nov 25 08:00:57 crc kubenswrapper[5043]: I1125 08:00:57.109512 5043 scope.go:117] "RemoveContainer" containerID="e23e23070a0ae98f82c1fe78b7852ef7514908ab76447f04426047b2a4d5f496" Nov 25 08:00:57 crc kubenswrapper[5043]: I1125 08:00:57.136692 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x5nvb"] Nov 25 08:00:57 crc kubenswrapper[5043]: I1125 08:00:57.144275 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x5nvb"] Nov 25 08:00:57 crc kubenswrapper[5043]: I1125 08:00:57.145321 5043 scope.go:117] "RemoveContainer" containerID="3e3be9fda9c5d04cf6915cfaa75b3c42f6173505c394bfe9c54b702c528a34e2" Nov 25 08:00:57 crc kubenswrapper[5043]: I1125 08:00:57.165325 5043 scope.go:117] "RemoveContainer" containerID="a89ac784db4d8a9e71c19314e3db8524c88d8129ee39f0af65b5bdc18bcddfdf" Nov 25 08:00:57 crc kubenswrapper[5043]: I1125 08:00:57.212542 5043 scope.go:117] "RemoveContainer" containerID="e23e23070a0ae98f82c1fe78b7852ef7514908ab76447f04426047b2a4d5f496" Nov 25 08:00:57 crc kubenswrapper[5043]: E1125 08:00:57.213001 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e23e23070a0ae98f82c1fe78b7852ef7514908ab76447f04426047b2a4d5f496\": container with ID starting with e23e23070a0ae98f82c1fe78b7852ef7514908ab76447f04426047b2a4d5f496 not found: ID does not exist" containerID="e23e23070a0ae98f82c1fe78b7852ef7514908ab76447f04426047b2a4d5f496" Nov 25 08:00:57 crc kubenswrapper[5043]: I1125 08:00:57.213130 
5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e23e23070a0ae98f82c1fe78b7852ef7514908ab76447f04426047b2a4d5f496"} err="failed to get container status \"e23e23070a0ae98f82c1fe78b7852ef7514908ab76447f04426047b2a4d5f496\": rpc error: code = NotFound desc = could not find container \"e23e23070a0ae98f82c1fe78b7852ef7514908ab76447f04426047b2a4d5f496\": container with ID starting with e23e23070a0ae98f82c1fe78b7852ef7514908ab76447f04426047b2a4d5f496 not found: ID does not exist" Nov 25 08:00:57 crc kubenswrapper[5043]: I1125 08:00:57.213213 5043 scope.go:117] "RemoveContainer" containerID="3e3be9fda9c5d04cf6915cfaa75b3c42f6173505c394bfe9c54b702c528a34e2" Nov 25 08:00:57 crc kubenswrapper[5043]: E1125 08:00:57.213512 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e3be9fda9c5d04cf6915cfaa75b3c42f6173505c394bfe9c54b702c528a34e2\": container with ID starting with 3e3be9fda9c5d04cf6915cfaa75b3c42f6173505c394bfe9c54b702c528a34e2 not found: ID does not exist" containerID="3e3be9fda9c5d04cf6915cfaa75b3c42f6173505c394bfe9c54b702c528a34e2" Nov 25 08:00:57 crc kubenswrapper[5043]: I1125 08:00:57.213554 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e3be9fda9c5d04cf6915cfaa75b3c42f6173505c394bfe9c54b702c528a34e2"} err="failed to get container status \"3e3be9fda9c5d04cf6915cfaa75b3c42f6173505c394bfe9c54b702c528a34e2\": rpc error: code = NotFound desc = could not find container \"3e3be9fda9c5d04cf6915cfaa75b3c42f6173505c394bfe9c54b702c528a34e2\": container with ID starting with 3e3be9fda9c5d04cf6915cfaa75b3c42f6173505c394bfe9c54b702c528a34e2 not found: ID does not exist" Nov 25 08:00:57 crc kubenswrapper[5043]: I1125 08:00:57.213586 5043 scope.go:117] "RemoveContainer" containerID="a89ac784db4d8a9e71c19314e3db8524c88d8129ee39f0af65b5bdc18bcddfdf" Nov 25 08:00:57 crc kubenswrapper[5043]: E1125 
08:00:57.213974 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a89ac784db4d8a9e71c19314e3db8524c88d8129ee39f0af65b5bdc18bcddfdf\": container with ID starting with a89ac784db4d8a9e71c19314e3db8524c88d8129ee39f0af65b5bdc18bcddfdf not found: ID does not exist" containerID="a89ac784db4d8a9e71c19314e3db8524c88d8129ee39f0af65b5bdc18bcddfdf" Nov 25 08:00:57 crc kubenswrapper[5043]: I1125 08:00:57.214052 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a89ac784db4d8a9e71c19314e3db8524c88d8129ee39f0af65b5bdc18bcddfdf"} err="failed to get container status \"a89ac784db4d8a9e71c19314e3db8524c88d8129ee39f0af65b5bdc18bcddfdf\": rpc error: code = NotFound desc = could not find container \"a89ac784db4d8a9e71c19314e3db8524c88d8129ee39f0af65b5bdc18bcddfdf\": container with ID starting with a89ac784db4d8a9e71c19314e3db8524c88d8129ee39f0af65b5bdc18bcddfdf not found: ID does not exist" Nov 25 08:00:58 crc kubenswrapper[5043]: I1125 08:00:58.979402 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d61d3ae-ec39-4a3d-a0af-cc932ce8e042" path="/var/lib/kubelet/pods/8d61d3ae-ec39-4a3d-a0af-cc932ce8e042/volumes" Nov 25 08:01:00 crc kubenswrapper[5043]: I1125 08:01:00.149621 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29400961-6qx6s"] Nov 25 08:01:00 crc kubenswrapper[5043]: E1125 08:01:00.150070 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d61d3ae-ec39-4a3d-a0af-cc932ce8e042" containerName="extract-utilities" Nov 25 08:01:00 crc kubenswrapper[5043]: I1125 08:01:00.150087 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d61d3ae-ec39-4a3d-a0af-cc932ce8e042" containerName="extract-utilities" Nov 25 08:01:00 crc kubenswrapper[5043]: E1125 08:01:00.150100 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d61d3ae-ec39-4a3d-a0af-cc932ce8e042" 
containerName="registry-server" Nov 25 08:01:00 crc kubenswrapper[5043]: I1125 08:01:00.150110 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d61d3ae-ec39-4a3d-a0af-cc932ce8e042" containerName="registry-server" Nov 25 08:01:00 crc kubenswrapper[5043]: E1125 08:01:00.150136 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d61d3ae-ec39-4a3d-a0af-cc932ce8e042" containerName="extract-content" Nov 25 08:01:00 crc kubenswrapper[5043]: I1125 08:01:00.150144 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d61d3ae-ec39-4a3d-a0af-cc932ce8e042" containerName="extract-content" Nov 25 08:01:00 crc kubenswrapper[5043]: I1125 08:01:00.150380 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d61d3ae-ec39-4a3d-a0af-cc932ce8e042" containerName="registry-server" Nov 25 08:01:00 crc kubenswrapper[5043]: I1125 08:01:00.151122 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400961-6qx6s" Nov 25 08:01:00 crc kubenswrapper[5043]: I1125 08:01:00.161162 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29400961-6qx6s"] Nov 25 08:01:00 crc kubenswrapper[5043]: I1125 08:01:00.275703 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4-combined-ca-bundle\") pod \"keystone-cron-29400961-6qx6s\" (UID: \"e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4\") " pod="openstack/keystone-cron-29400961-6qx6s" Nov 25 08:01:00 crc kubenswrapper[5043]: I1125 08:01:00.275782 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4-config-data\") pod \"keystone-cron-29400961-6qx6s\" (UID: \"e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4\") " pod="openstack/keystone-cron-29400961-6qx6s" Nov 25 08:01:00 
crc kubenswrapper[5043]: I1125 08:01:00.275833 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stkxx\" (UniqueName: \"kubernetes.io/projected/e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4-kube-api-access-stkxx\") pod \"keystone-cron-29400961-6qx6s\" (UID: \"e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4\") " pod="openstack/keystone-cron-29400961-6qx6s" Nov 25 08:01:00 crc kubenswrapper[5043]: I1125 08:01:00.275907 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4-fernet-keys\") pod \"keystone-cron-29400961-6qx6s\" (UID: \"e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4\") " pod="openstack/keystone-cron-29400961-6qx6s" Nov 25 08:01:00 crc kubenswrapper[5043]: I1125 08:01:00.377979 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stkxx\" (UniqueName: \"kubernetes.io/projected/e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4-kube-api-access-stkxx\") pod \"keystone-cron-29400961-6qx6s\" (UID: \"e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4\") " pod="openstack/keystone-cron-29400961-6qx6s" Nov 25 08:01:00 crc kubenswrapper[5043]: I1125 08:01:00.378045 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4-fernet-keys\") pod \"keystone-cron-29400961-6qx6s\" (UID: \"e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4\") " pod="openstack/keystone-cron-29400961-6qx6s" Nov 25 08:01:00 crc kubenswrapper[5043]: I1125 08:01:00.378163 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4-combined-ca-bundle\") pod \"keystone-cron-29400961-6qx6s\" (UID: \"e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4\") " pod="openstack/keystone-cron-29400961-6qx6s" Nov 25 08:01:00 
crc kubenswrapper[5043]: I1125 08:01:00.378196 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4-config-data\") pod \"keystone-cron-29400961-6qx6s\" (UID: \"e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4\") " pod="openstack/keystone-cron-29400961-6qx6s" Nov 25 08:01:00 crc kubenswrapper[5043]: I1125 08:01:00.386076 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4-combined-ca-bundle\") pod \"keystone-cron-29400961-6qx6s\" (UID: \"e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4\") " pod="openstack/keystone-cron-29400961-6qx6s" Nov 25 08:01:00 crc kubenswrapper[5043]: I1125 08:01:00.386140 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4-fernet-keys\") pod \"keystone-cron-29400961-6qx6s\" (UID: \"e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4\") " pod="openstack/keystone-cron-29400961-6qx6s" Nov 25 08:01:00 crc kubenswrapper[5043]: I1125 08:01:00.389032 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4-config-data\") pod \"keystone-cron-29400961-6qx6s\" (UID: \"e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4\") " pod="openstack/keystone-cron-29400961-6qx6s" Nov 25 08:01:00 crc kubenswrapper[5043]: I1125 08:01:00.396538 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stkxx\" (UniqueName: \"kubernetes.io/projected/e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4-kube-api-access-stkxx\") pod \"keystone-cron-29400961-6qx6s\" (UID: \"e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4\") " pod="openstack/keystone-cron-29400961-6qx6s" Nov 25 08:01:00 crc kubenswrapper[5043]: I1125 08:01:00.468671 5043 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/keystone-cron-29400961-6qx6s" Nov 25 08:01:00 crc kubenswrapper[5043]: I1125 08:01:00.883066 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29400961-6qx6s"] Nov 25 08:01:00 crc kubenswrapper[5043]: W1125 08:01:00.884468 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8f9ccd0_dda3_4d6b_8d31_7ecf76c5dfb4.slice/crio-cd9856e2e610bdbaf01149b30ea5af7a0dd3a9507c9969fa7357bd83be3fabf2 WatchSource:0}: Error finding container cd9856e2e610bdbaf01149b30ea5af7a0dd3a9507c9969fa7357bd83be3fabf2: Status 404 returned error can't find the container with id cd9856e2e610bdbaf01149b30ea5af7a0dd3a9507c9969fa7357bd83be3fabf2 Nov 25 08:01:01 crc kubenswrapper[5043]: I1125 08:01:01.148988 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400961-6qx6s" event={"ID":"e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4","Type":"ContainerStarted","Data":"175cfb2d939c730379a4cf1261793dbf80fe3fdb94d210e4753d812e31e98384"} Nov 25 08:01:01 crc kubenswrapper[5043]: I1125 08:01:01.149046 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400961-6qx6s" event={"ID":"e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4","Type":"ContainerStarted","Data":"cd9856e2e610bdbaf01149b30ea5af7a0dd3a9507c9969fa7357bd83be3fabf2"} Nov 25 08:01:01 crc kubenswrapper[5043]: I1125 08:01:01.166235 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29400961-6qx6s" podStartSLOduration=1.166213607 podStartE2EDuration="1.166213607s" podCreationTimestamp="2025-11-25 08:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:01:01.1644566 +0000 UTC m=+2725.332652321" watchObservedRunningTime="2025-11-25 08:01:01.166213607 +0000 UTC m=+2725.334409318" Nov 25 08:01:02 crc 
kubenswrapper[5043]: I1125 08:01:02.527191 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jff2w"] Nov 25 08:01:02 crc kubenswrapper[5043]: I1125 08:01:02.531167 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jff2w" Nov 25 08:01:02 crc kubenswrapper[5043]: I1125 08:01:02.544133 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jff2w"] Nov 25 08:01:02 crc kubenswrapper[5043]: I1125 08:01:02.619959 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lhrx\" (UniqueName: \"kubernetes.io/projected/9642c79a-0f4e-46e8-bdc4-e9b136d27d2f-kube-api-access-6lhrx\") pod \"redhat-operators-jff2w\" (UID: \"9642c79a-0f4e-46e8-bdc4-e9b136d27d2f\") " pod="openshift-marketplace/redhat-operators-jff2w" Nov 25 08:01:02 crc kubenswrapper[5043]: I1125 08:01:02.620090 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9642c79a-0f4e-46e8-bdc4-e9b136d27d2f-utilities\") pod \"redhat-operators-jff2w\" (UID: \"9642c79a-0f4e-46e8-bdc4-e9b136d27d2f\") " pod="openshift-marketplace/redhat-operators-jff2w" Nov 25 08:01:02 crc kubenswrapper[5043]: I1125 08:01:02.620180 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9642c79a-0f4e-46e8-bdc4-e9b136d27d2f-catalog-content\") pod \"redhat-operators-jff2w\" (UID: \"9642c79a-0f4e-46e8-bdc4-e9b136d27d2f\") " pod="openshift-marketplace/redhat-operators-jff2w" Nov 25 08:01:02 crc kubenswrapper[5043]: I1125 08:01:02.721380 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lhrx\" (UniqueName: 
\"kubernetes.io/projected/9642c79a-0f4e-46e8-bdc4-e9b136d27d2f-kube-api-access-6lhrx\") pod \"redhat-operators-jff2w\" (UID: \"9642c79a-0f4e-46e8-bdc4-e9b136d27d2f\") " pod="openshift-marketplace/redhat-operators-jff2w" Nov 25 08:01:02 crc kubenswrapper[5043]: I1125 08:01:02.721497 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9642c79a-0f4e-46e8-bdc4-e9b136d27d2f-utilities\") pod \"redhat-operators-jff2w\" (UID: \"9642c79a-0f4e-46e8-bdc4-e9b136d27d2f\") " pod="openshift-marketplace/redhat-operators-jff2w" Nov 25 08:01:02 crc kubenswrapper[5043]: I1125 08:01:02.721572 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9642c79a-0f4e-46e8-bdc4-e9b136d27d2f-catalog-content\") pod \"redhat-operators-jff2w\" (UID: \"9642c79a-0f4e-46e8-bdc4-e9b136d27d2f\") " pod="openshift-marketplace/redhat-operators-jff2w" Nov 25 08:01:02 crc kubenswrapper[5043]: I1125 08:01:02.721961 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9642c79a-0f4e-46e8-bdc4-e9b136d27d2f-utilities\") pod \"redhat-operators-jff2w\" (UID: \"9642c79a-0f4e-46e8-bdc4-e9b136d27d2f\") " pod="openshift-marketplace/redhat-operators-jff2w" Nov 25 08:01:02 crc kubenswrapper[5043]: I1125 08:01:02.721977 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9642c79a-0f4e-46e8-bdc4-e9b136d27d2f-catalog-content\") pod \"redhat-operators-jff2w\" (UID: \"9642c79a-0f4e-46e8-bdc4-e9b136d27d2f\") " pod="openshift-marketplace/redhat-operators-jff2w" Nov 25 08:01:02 crc kubenswrapper[5043]: I1125 08:01:02.744644 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lhrx\" (UniqueName: 
\"kubernetes.io/projected/9642c79a-0f4e-46e8-bdc4-e9b136d27d2f-kube-api-access-6lhrx\") pod \"redhat-operators-jff2w\" (UID: \"9642c79a-0f4e-46e8-bdc4-e9b136d27d2f\") " pod="openshift-marketplace/redhat-operators-jff2w" Nov 25 08:01:02 crc kubenswrapper[5043]: I1125 08:01:02.860518 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jff2w" Nov 25 08:01:03 crc kubenswrapper[5043]: W1125 08:01:03.315354 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9642c79a_0f4e_46e8_bdc4_e9b136d27d2f.slice/crio-d2934e7490f9dcfa67511e8cd8a0bdc08dfbea7f2f50a3831ee5b720aaebac9c WatchSource:0}: Error finding container d2934e7490f9dcfa67511e8cd8a0bdc08dfbea7f2f50a3831ee5b720aaebac9c: Status 404 returned error can't find the container with id d2934e7490f9dcfa67511e8cd8a0bdc08dfbea7f2f50a3831ee5b720aaebac9c Nov 25 08:01:03 crc kubenswrapper[5043]: I1125 08:01:03.322667 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jff2w"] Nov 25 08:01:04 crc kubenswrapper[5043]: I1125 08:01:04.172780 5043 generic.go:334] "Generic (PLEG): container finished" podID="9642c79a-0f4e-46e8-bdc4-e9b136d27d2f" containerID="ef9d491453b76c7462b7f41f0104c24112cb62452e186f76f824d5a6c67646bf" exitCode=0 Nov 25 08:01:04 crc kubenswrapper[5043]: I1125 08:01:04.172914 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jff2w" event={"ID":"9642c79a-0f4e-46e8-bdc4-e9b136d27d2f","Type":"ContainerDied","Data":"ef9d491453b76c7462b7f41f0104c24112cb62452e186f76f824d5a6c67646bf"} Nov 25 08:01:04 crc kubenswrapper[5043]: I1125 08:01:04.173512 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jff2w" 
event={"ID":"9642c79a-0f4e-46e8-bdc4-e9b136d27d2f","Type":"ContainerStarted","Data":"d2934e7490f9dcfa67511e8cd8a0bdc08dfbea7f2f50a3831ee5b720aaebac9c"} Nov 25 08:01:04 crc kubenswrapper[5043]: I1125 08:01:04.175373 5043 generic.go:334] "Generic (PLEG): container finished" podID="e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4" containerID="175cfb2d939c730379a4cf1261793dbf80fe3fdb94d210e4753d812e31e98384" exitCode=0 Nov 25 08:01:04 crc kubenswrapper[5043]: I1125 08:01:04.175415 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400961-6qx6s" event={"ID":"e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4","Type":"ContainerDied","Data":"175cfb2d939c730379a4cf1261793dbf80fe3fdb94d210e4753d812e31e98384"} Nov 25 08:01:05 crc kubenswrapper[5043]: I1125 08:01:05.186095 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jff2w" event={"ID":"9642c79a-0f4e-46e8-bdc4-e9b136d27d2f","Type":"ContainerStarted","Data":"8ee3ee5430b7e875617a4da60a3235cd0b0041ef70c6f13c6a8faed5c4a1ec3e"} Nov 25 08:01:05 crc kubenswrapper[5043]: I1125 08:01:05.521715 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29400961-6qx6s" Nov 25 08:01:05 crc kubenswrapper[5043]: I1125 08:01:05.674469 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stkxx\" (UniqueName: \"kubernetes.io/projected/e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4-kube-api-access-stkxx\") pod \"e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4\" (UID: \"e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4\") " Nov 25 08:01:05 crc kubenswrapper[5043]: I1125 08:01:05.674528 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4-fernet-keys\") pod \"e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4\" (UID: \"e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4\") " Nov 25 08:01:05 crc kubenswrapper[5043]: I1125 08:01:05.674685 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4-config-data\") pod \"e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4\" (UID: \"e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4\") " Nov 25 08:01:05 crc kubenswrapper[5043]: I1125 08:01:05.674708 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4-combined-ca-bundle\") pod \"e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4\" (UID: \"e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4\") " Nov 25 08:01:05 crc kubenswrapper[5043]: I1125 08:01:05.680994 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4" (UID: "e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:01:05 crc kubenswrapper[5043]: I1125 08:01:05.683001 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4-kube-api-access-stkxx" (OuterVolumeSpecName: "kube-api-access-stkxx") pod "e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4" (UID: "e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4"). InnerVolumeSpecName "kube-api-access-stkxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:01:05 crc kubenswrapper[5043]: I1125 08:01:05.706533 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4" (UID: "e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:01:05 crc kubenswrapper[5043]: I1125 08:01:05.724119 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4-config-data" (OuterVolumeSpecName: "config-data") pod "e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4" (UID: "e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:01:05 crc kubenswrapper[5043]: I1125 08:01:05.775864 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:01:05 crc kubenswrapper[5043]: I1125 08:01:05.776170 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:01:05 crc kubenswrapper[5043]: I1125 08:01:05.776187 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stkxx\" (UniqueName: \"kubernetes.io/projected/e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4-kube-api-access-stkxx\") on node \"crc\" DevicePath \"\"" Nov 25 08:01:05 crc kubenswrapper[5043]: I1125 08:01:05.776199 5043 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 08:01:06 crc kubenswrapper[5043]: I1125 08:01:06.196504 5043 generic.go:334] "Generic (PLEG): container finished" podID="9642c79a-0f4e-46e8-bdc4-e9b136d27d2f" containerID="8ee3ee5430b7e875617a4da60a3235cd0b0041ef70c6f13c6a8faed5c4a1ec3e" exitCode=0 Nov 25 08:01:06 crc kubenswrapper[5043]: I1125 08:01:06.196644 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jff2w" event={"ID":"9642c79a-0f4e-46e8-bdc4-e9b136d27d2f","Type":"ContainerDied","Data":"8ee3ee5430b7e875617a4da60a3235cd0b0041ef70c6f13c6a8faed5c4a1ec3e"} Nov 25 08:01:06 crc kubenswrapper[5043]: I1125 08:01:06.199924 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400961-6qx6s" 
event={"ID":"e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4","Type":"ContainerDied","Data":"cd9856e2e610bdbaf01149b30ea5af7a0dd3a9507c9969fa7357bd83be3fabf2"} Nov 25 08:01:06 crc kubenswrapper[5043]: I1125 08:01:06.199963 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd9856e2e610bdbaf01149b30ea5af7a0dd3a9507c9969fa7357bd83be3fabf2" Nov 25 08:01:06 crc kubenswrapper[5043]: I1125 08:01:06.200005 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400961-6qx6s" Nov 25 08:01:07 crc kubenswrapper[5043]: I1125 08:01:07.209326 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jff2w" event={"ID":"9642c79a-0f4e-46e8-bdc4-e9b136d27d2f","Type":"ContainerStarted","Data":"8b8d57bfd30e3818679b64cb3bdd0fb102907ebc9f3c5f550e59aca5481b1343"} Nov 25 08:01:07 crc kubenswrapper[5043]: I1125 08:01:07.226649 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jff2w" podStartSLOduration=2.8172982490000003 podStartE2EDuration="5.226635436s" podCreationTimestamp="2025-11-25 08:01:02 +0000 UTC" firstStartedPulling="2025-11-25 08:01:04.17440522 +0000 UTC m=+2728.342600941" lastFinishedPulling="2025-11-25 08:01:06.583742407 +0000 UTC m=+2730.751938128" observedRunningTime="2025-11-25 08:01:07.222837964 +0000 UTC m=+2731.391033685" watchObservedRunningTime="2025-11-25 08:01:07.226635436 +0000 UTC m=+2731.394831147" Nov 25 08:01:12 crc kubenswrapper[5043]: I1125 08:01:12.862119 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jff2w" Nov 25 08:01:12 crc kubenswrapper[5043]: I1125 08:01:12.862516 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jff2w" Nov 25 08:01:12 crc kubenswrapper[5043]: I1125 08:01:12.908822 5043 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jff2w" Nov 25 08:01:13 crc kubenswrapper[5043]: I1125 08:01:13.305218 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jff2w" Nov 25 08:01:13 crc kubenswrapper[5043]: I1125 08:01:13.989232 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jff2w"] Nov 25 08:01:15 crc kubenswrapper[5043]: I1125 08:01:15.281803 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jff2w" podUID="9642c79a-0f4e-46e8-bdc4-e9b136d27d2f" containerName="registry-server" containerID="cri-o://8b8d57bfd30e3818679b64cb3bdd0fb102907ebc9f3c5f550e59aca5481b1343" gracePeriod=2 Nov 25 08:01:15 crc kubenswrapper[5043]: I1125 08:01:15.778110 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jff2w" Nov 25 08:01:15 crc kubenswrapper[5043]: I1125 08:01:15.787192 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lhrx\" (UniqueName: \"kubernetes.io/projected/9642c79a-0f4e-46e8-bdc4-e9b136d27d2f-kube-api-access-6lhrx\") pod \"9642c79a-0f4e-46e8-bdc4-e9b136d27d2f\" (UID: \"9642c79a-0f4e-46e8-bdc4-e9b136d27d2f\") " Nov 25 08:01:15 crc kubenswrapper[5043]: I1125 08:01:15.787387 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9642c79a-0f4e-46e8-bdc4-e9b136d27d2f-utilities\") pod \"9642c79a-0f4e-46e8-bdc4-e9b136d27d2f\" (UID: \"9642c79a-0f4e-46e8-bdc4-e9b136d27d2f\") " Nov 25 08:01:15 crc kubenswrapper[5043]: I1125 08:01:15.787435 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9642c79a-0f4e-46e8-bdc4-e9b136d27d2f-catalog-content\") pod 
\"9642c79a-0f4e-46e8-bdc4-e9b136d27d2f\" (UID: \"9642c79a-0f4e-46e8-bdc4-e9b136d27d2f\") " Nov 25 08:01:15 crc kubenswrapper[5043]: I1125 08:01:15.788674 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9642c79a-0f4e-46e8-bdc4-e9b136d27d2f-utilities" (OuterVolumeSpecName: "utilities") pod "9642c79a-0f4e-46e8-bdc4-e9b136d27d2f" (UID: "9642c79a-0f4e-46e8-bdc4-e9b136d27d2f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:01:15 crc kubenswrapper[5043]: I1125 08:01:15.792762 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9642c79a-0f4e-46e8-bdc4-e9b136d27d2f-kube-api-access-6lhrx" (OuterVolumeSpecName: "kube-api-access-6lhrx") pod "9642c79a-0f4e-46e8-bdc4-e9b136d27d2f" (UID: "9642c79a-0f4e-46e8-bdc4-e9b136d27d2f"). InnerVolumeSpecName "kube-api-access-6lhrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:01:15 crc kubenswrapper[5043]: I1125 08:01:15.889179 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9642c79a-0f4e-46e8-bdc4-e9b136d27d2f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:01:15 crc kubenswrapper[5043]: I1125 08:01:15.889488 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lhrx\" (UniqueName: \"kubernetes.io/projected/9642c79a-0f4e-46e8-bdc4-e9b136d27d2f-kube-api-access-6lhrx\") on node \"crc\" DevicePath \"\"" Nov 25 08:01:15 crc kubenswrapper[5043]: I1125 08:01:15.913039 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9642c79a-0f4e-46e8-bdc4-e9b136d27d2f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9642c79a-0f4e-46e8-bdc4-e9b136d27d2f" (UID: "9642c79a-0f4e-46e8-bdc4-e9b136d27d2f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:01:15 crc kubenswrapper[5043]: I1125 08:01:15.990149 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9642c79a-0f4e-46e8-bdc4-e9b136d27d2f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:01:16 crc kubenswrapper[5043]: I1125 08:01:16.297742 5043 generic.go:334] "Generic (PLEG): container finished" podID="9642c79a-0f4e-46e8-bdc4-e9b136d27d2f" containerID="8b8d57bfd30e3818679b64cb3bdd0fb102907ebc9f3c5f550e59aca5481b1343" exitCode=0 Nov 25 08:01:16 crc kubenswrapper[5043]: I1125 08:01:16.297801 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jff2w" event={"ID":"9642c79a-0f4e-46e8-bdc4-e9b136d27d2f","Type":"ContainerDied","Data":"8b8d57bfd30e3818679b64cb3bdd0fb102907ebc9f3c5f550e59aca5481b1343"} Nov 25 08:01:16 crc kubenswrapper[5043]: I1125 08:01:16.297839 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jff2w" event={"ID":"9642c79a-0f4e-46e8-bdc4-e9b136d27d2f","Type":"ContainerDied","Data":"d2934e7490f9dcfa67511e8cd8a0bdc08dfbea7f2f50a3831ee5b720aaebac9c"} Nov 25 08:01:16 crc kubenswrapper[5043]: I1125 08:01:16.297866 5043 scope.go:117] "RemoveContainer" containerID="8b8d57bfd30e3818679b64cb3bdd0fb102907ebc9f3c5f550e59aca5481b1343" Nov 25 08:01:16 crc kubenswrapper[5043]: I1125 08:01:16.298047 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jff2w" Nov 25 08:01:16 crc kubenswrapper[5043]: I1125 08:01:16.328856 5043 scope.go:117] "RemoveContainer" containerID="8ee3ee5430b7e875617a4da60a3235cd0b0041ef70c6f13c6a8faed5c4a1ec3e" Nov 25 08:01:16 crc kubenswrapper[5043]: I1125 08:01:16.347526 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jff2w"] Nov 25 08:01:16 crc kubenswrapper[5043]: I1125 08:01:16.358666 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jff2w"] Nov 25 08:01:16 crc kubenswrapper[5043]: I1125 08:01:16.371131 5043 scope.go:117] "RemoveContainer" containerID="ef9d491453b76c7462b7f41f0104c24112cb62452e186f76f824d5a6c67646bf" Nov 25 08:01:16 crc kubenswrapper[5043]: I1125 08:01:16.402502 5043 scope.go:117] "RemoveContainer" containerID="8b8d57bfd30e3818679b64cb3bdd0fb102907ebc9f3c5f550e59aca5481b1343" Nov 25 08:01:16 crc kubenswrapper[5043]: E1125 08:01:16.403014 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b8d57bfd30e3818679b64cb3bdd0fb102907ebc9f3c5f550e59aca5481b1343\": container with ID starting with 8b8d57bfd30e3818679b64cb3bdd0fb102907ebc9f3c5f550e59aca5481b1343 not found: ID does not exist" containerID="8b8d57bfd30e3818679b64cb3bdd0fb102907ebc9f3c5f550e59aca5481b1343" Nov 25 08:01:16 crc kubenswrapper[5043]: I1125 08:01:16.403052 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b8d57bfd30e3818679b64cb3bdd0fb102907ebc9f3c5f550e59aca5481b1343"} err="failed to get container status \"8b8d57bfd30e3818679b64cb3bdd0fb102907ebc9f3c5f550e59aca5481b1343\": rpc error: code = NotFound desc = could not find container \"8b8d57bfd30e3818679b64cb3bdd0fb102907ebc9f3c5f550e59aca5481b1343\": container with ID starting with 8b8d57bfd30e3818679b64cb3bdd0fb102907ebc9f3c5f550e59aca5481b1343 not found: ID does 
not exist" Nov 25 08:01:16 crc kubenswrapper[5043]: I1125 08:01:16.403080 5043 scope.go:117] "RemoveContainer" containerID="8ee3ee5430b7e875617a4da60a3235cd0b0041ef70c6f13c6a8faed5c4a1ec3e" Nov 25 08:01:16 crc kubenswrapper[5043]: E1125 08:01:16.403437 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ee3ee5430b7e875617a4da60a3235cd0b0041ef70c6f13c6a8faed5c4a1ec3e\": container with ID starting with 8ee3ee5430b7e875617a4da60a3235cd0b0041ef70c6f13c6a8faed5c4a1ec3e not found: ID does not exist" containerID="8ee3ee5430b7e875617a4da60a3235cd0b0041ef70c6f13c6a8faed5c4a1ec3e" Nov 25 08:01:16 crc kubenswrapper[5043]: I1125 08:01:16.403464 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ee3ee5430b7e875617a4da60a3235cd0b0041ef70c6f13c6a8faed5c4a1ec3e"} err="failed to get container status \"8ee3ee5430b7e875617a4da60a3235cd0b0041ef70c6f13c6a8faed5c4a1ec3e\": rpc error: code = NotFound desc = could not find container \"8ee3ee5430b7e875617a4da60a3235cd0b0041ef70c6f13c6a8faed5c4a1ec3e\": container with ID starting with 8ee3ee5430b7e875617a4da60a3235cd0b0041ef70c6f13c6a8faed5c4a1ec3e not found: ID does not exist" Nov 25 08:01:16 crc kubenswrapper[5043]: I1125 08:01:16.403481 5043 scope.go:117] "RemoveContainer" containerID="ef9d491453b76c7462b7f41f0104c24112cb62452e186f76f824d5a6c67646bf" Nov 25 08:01:16 crc kubenswrapper[5043]: E1125 08:01:16.403902 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef9d491453b76c7462b7f41f0104c24112cb62452e186f76f824d5a6c67646bf\": container with ID starting with ef9d491453b76c7462b7f41f0104c24112cb62452e186f76f824d5a6c67646bf not found: ID does not exist" containerID="ef9d491453b76c7462b7f41f0104c24112cb62452e186f76f824d5a6c67646bf" Nov 25 08:01:16 crc kubenswrapper[5043]: I1125 08:01:16.403947 5043 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef9d491453b76c7462b7f41f0104c24112cb62452e186f76f824d5a6c67646bf"} err="failed to get container status \"ef9d491453b76c7462b7f41f0104c24112cb62452e186f76f824d5a6c67646bf\": rpc error: code = NotFound desc = could not find container \"ef9d491453b76c7462b7f41f0104c24112cb62452e186f76f824d5a6c67646bf\": container with ID starting with ef9d491453b76c7462b7f41f0104c24112cb62452e186f76f824d5a6c67646bf not found: ID does not exist" Nov 25 08:01:16 crc kubenswrapper[5043]: I1125 08:01:16.975809 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9642c79a-0f4e-46e8-bdc4-e9b136d27d2f" path="/var/lib/kubelet/pods/9642c79a-0f4e-46e8-bdc4-e9b136d27d2f/volumes" Nov 25 08:01:17 crc kubenswrapper[5043]: I1125 08:01:17.278043 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:01:17 crc kubenswrapper[5043]: I1125 08:01:17.278093 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:01:17 crc kubenswrapper[5043]: I1125 08:01:17.278134 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 08:01:17 crc kubenswrapper[5043]: I1125 08:01:17.279036 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f3b371f6f213dc1b8fdb5b0f0313c26428ee4292c66cb96a1f2daaf1fae57efc"} 
pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:01:17 crc kubenswrapper[5043]: I1125 08:01:17.279105 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://f3b371f6f213dc1b8fdb5b0f0313c26428ee4292c66cb96a1f2daaf1fae57efc" gracePeriod=600 Nov 25 08:01:18 crc kubenswrapper[5043]: I1125 08:01:18.324277 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="f3b371f6f213dc1b8fdb5b0f0313c26428ee4292c66cb96a1f2daaf1fae57efc" exitCode=0 Nov 25 08:01:18 crc kubenswrapper[5043]: I1125 08:01:18.324396 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"f3b371f6f213dc1b8fdb5b0f0313c26428ee4292c66cb96a1f2daaf1fae57efc"} Nov 25 08:01:18 crc kubenswrapper[5043]: I1125 08:01:18.324890 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee"} Nov 25 08:01:18 crc kubenswrapper[5043]: I1125 08:01:18.324916 5043 scope.go:117] "RemoveContainer" containerID="c3a4691393b9d6624b6e1af793451bba2de8b361efa406c958d1fedc8107d590" Nov 25 08:01:40 crc kubenswrapper[5043]: I1125 08:01:40.524694 5043 generic.go:334] "Generic (PLEG): container finished" podID="3071ef74-1c72-4b4c-90e7-fee9dc8332e5" containerID="1a7aed6b7c180db6c05be24c5ac3543af05dd464b12c86cf6473dc06fcd16343" exitCode=0 Nov 25 08:01:40 crc kubenswrapper[5043]: I1125 08:01:40.524796 5043 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" event={"ID":"3071ef74-1c72-4b4c-90e7-fee9dc8332e5","Type":"ContainerDied","Data":"1a7aed6b7c180db6c05be24c5ac3543af05dd464b12c86cf6473dc06fcd16343"} Nov 25 08:01:41 crc kubenswrapper[5043]: I1125 08:01:41.941282 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.088968 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-ceph\") pod \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.089115 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-inventory\") pod \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.089224 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-neutron-metadata-combined-ca-bundle\") pod \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.089295 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-nova-metadata-neutron-config-0\") pod \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 
08:01:42.089397 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-ssh-key\") pod \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.089438 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-neutron-ovn-metadata-agent-neutron-config-0\") pod \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.089480 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wgtc\" (UniqueName: \"kubernetes.io/projected/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-kube-api-access-7wgtc\") pod \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\" (UID: \"3071ef74-1c72-4b4c-90e7-fee9dc8332e5\") " Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.096811 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "3071ef74-1c72-4b4c-90e7-fee9dc8332e5" (UID: "3071ef74-1c72-4b4c-90e7-fee9dc8332e5"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.096933 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-ceph" (OuterVolumeSpecName: "ceph") pod "3071ef74-1c72-4b4c-90e7-fee9dc8332e5" (UID: "3071ef74-1c72-4b4c-90e7-fee9dc8332e5"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.097193 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-kube-api-access-7wgtc" (OuterVolumeSpecName: "kube-api-access-7wgtc") pod "3071ef74-1c72-4b4c-90e7-fee9dc8332e5" (UID: "3071ef74-1c72-4b4c-90e7-fee9dc8332e5"). InnerVolumeSpecName "kube-api-access-7wgtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.121182 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "3071ef74-1c72-4b4c-90e7-fee9dc8332e5" (UID: "3071ef74-1c72-4b4c-90e7-fee9dc8332e5"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.123137 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "3071ef74-1c72-4b4c-90e7-fee9dc8332e5" (UID: "3071ef74-1c72-4b4c-90e7-fee9dc8332e5"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.123131 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-inventory" (OuterVolumeSpecName: "inventory") pod "3071ef74-1c72-4b4c-90e7-fee9dc8332e5" (UID: "3071ef74-1c72-4b4c-90e7-fee9dc8332e5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.128799 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3071ef74-1c72-4b4c-90e7-fee9dc8332e5" (UID: "3071ef74-1c72-4b4c-90e7-fee9dc8332e5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.192162 5043 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.192198 5043 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.192214 5043 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.192226 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.192238 5043 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.192250 5043 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-7wgtc\" (UniqueName: \"kubernetes.io/projected/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-kube-api-access-7wgtc\") on node \"crc\" DevicePath \"\"" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.192262 5043 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3071ef74-1c72-4b4c-90e7-fee9dc8332e5-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.542980 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" event={"ID":"3071ef74-1c72-4b4c-90e7-fee9dc8332e5","Type":"ContainerDied","Data":"5c7cdad471b8ac9819091da610def16df0afa560a2bef083b531147f97ac4f27"} Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.543024 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c7cdad471b8ac9819091da610def16df0afa560a2bef083b531147f97ac4f27" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.543070 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.645544 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl"] Nov 25 08:01:42 crc kubenswrapper[5043]: E1125 08:01:42.646142 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9642c79a-0f4e-46e8-bdc4-e9b136d27d2f" containerName="extract-utilities" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.646164 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="9642c79a-0f4e-46e8-bdc4-e9b136d27d2f" containerName="extract-utilities" Nov 25 08:01:42 crc kubenswrapper[5043]: E1125 08:01:42.646181 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9642c79a-0f4e-46e8-bdc4-e9b136d27d2f" containerName="extract-content" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.646191 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="9642c79a-0f4e-46e8-bdc4-e9b136d27d2f" containerName="extract-content" Nov 25 08:01:42 crc kubenswrapper[5043]: E1125 08:01:42.646205 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4" containerName="keystone-cron" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.646214 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4" containerName="keystone-cron" Nov 25 08:01:42 crc kubenswrapper[5043]: E1125 08:01:42.646225 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3071ef74-1c72-4b4c-90e7-fee9dc8332e5" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.646234 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="3071ef74-1c72-4b4c-90e7-fee9dc8332e5" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 08:01:42 crc kubenswrapper[5043]: E1125 
08:01:42.646248 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9642c79a-0f4e-46e8-bdc4-e9b136d27d2f" containerName="registry-server" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.646256 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="9642c79a-0f4e-46e8-bdc4-e9b136d27d2f" containerName="registry-server" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.646457 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="3071ef74-1c72-4b4c-90e7-fee9dc8332e5" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.646485 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="9642c79a-0f4e-46e8-bdc4-e9b136d27d2f" containerName="registry-server" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.646504 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4" containerName="keystone-cron" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.647290 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.650665 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.650972 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.650990 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.650991 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.651533 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.651659 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.656670 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl"] Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.700723 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl\" (UID: \"e7416361-3a03-4892-9a17-36934133905d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.700980 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl\" (UID: \"e7416361-3a03-4892-9a17-36934133905d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.701083 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl\" (UID: \"e7416361-3a03-4892-9a17-36934133905d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.701217 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl\" (UID: \"e7416361-3a03-4892-9a17-36934133905d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.701268 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl\" (UID: \"e7416361-3a03-4892-9a17-36934133905d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.701299 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp2hr\" (UniqueName: \"kubernetes.io/projected/e7416361-3a03-4892-9a17-36934133905d-kube-api-access-dp2hr\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl\" (UID: \"e7416361-3a03-4892-9a17-36934133905d\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.817503 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl\" (UID: \"e7416361-3a03-4892-9a17-36934133905d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.817593 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl\" (UID: \"e7416361-3a03-4892-9a17-36934133905d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.817677 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl\" (UID: \"e7416361-3a03-4892-9a17-36934133905d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.817703 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl\" (UID: \"e7416361-3a03-4892-9a17-36934133905d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.817737 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp2hr\" (UniqueName: 
\"kubernetes.io/projected/e7416361-3a03-4892-9a17-36934133905d-kube-api-access-dp2hr\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl\" (UID: \"e7416361-3a03-4892-9a17-36934133905d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.817890 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl\" (UID: \"e7416361-3a03-4892-9a17-36934133905d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.826363 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl\" (UID: \"e7416361-3a03-4892-9a17-36934133905d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.826363 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl\" (UID: \"e7416361-3a03-4892-9a17-36934133905d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.826548 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl\" (UID: \"e7416361-3a03-4892-9a17-36934133905d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.831233 5043 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl\" (UID: \"e7416361-3a03-4892-9a17-36934133905d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.834243 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl\" (UID: \"e7416361-3a03-4892-9a17-36934133905d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.840315 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp2hr\" (UniqueName: \"kubernetes.io/projected/e7416361-3a03-4892-9a17-36934133905d-kube-api-access-dp2hr\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl\" (UID: \"e7416361-3a03-4892-9a17-36934133905d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" Nov 25 08:01:42 crc kubenswrapper[5043]: I1125 08:01:42.968588 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" Nov 25 08:01:43 crc kubenswrapper[5043]: I1125 08:01:43.493747 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl"] Nov 25 08:01:43 crc kubenswrapper[5043]: I1125 08:01:43.553770 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" event={"ID":"e7416361-3a03-4892-9a17-36934133905d","Type":"ContainerStarted","Data":"daf3c75a753412df22566718d924e22a2cbb3d8199cb5f2ef6db90c9f0719437"} Nov 25 08:01:44 crc kubenswrapper[5043]: I1125 08:01:44.562848 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" event={"ID":"e7416361-3a03-4892-9a17-36934133905d","Type":"ContainerStarted","Data":"00bcd26ab79e2785b20882141fe8475ec55a24ad889f77246c2836e6602cb32e"} Nov 25 08:01:44 crc kubenswrapper[5043]: I1125 08:01:44.578376 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" podStartSLOduration=2.176162329 podStartE2EDuration="2.578359203s" podCreationTimestamp="2025-11-25 08:01:42 +0000 UTC" firstStartedPulling="2025-11-25 08:01:43.498386633 +0000 UTC m=+2767.666582364" lastFinishedPulling="2025-11-25 08:01:43.900583487 +0000 UTC m=+2768.068779238" observedRunningTime="2025-11-25 08:01:44.577471339 +0000 UTC m=+2768.745667060" watchObservedRunningTime="2025-11-25 08:01:44.578359203 +0000 UTC m=+2768.746554924" Nov 25 08:01:49 crc kubenswrapper[5043]: I1125 08:01:49.774083 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s27gv"] Nov 25 08:01:49 crc kubenswrapper[5043]: I1125 08:01:49.776592 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s27gv" Nov 25 08:01:49 crc kubenswrapper[5043]: I1125 08:01:49.781223 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2-catalog-content\") pod \"redhat-marketplace-s27gv\" (UID: \"c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2\") " pod="openshift-marketplace/redhat-marketplace-s27gv" Nov 25 08:01:49 crc kubenswrapper[5043]: I1125 08:01:49.781322 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gmqw\" (UniqueName: \"kubernetes.io/projected/c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2-kube-api-access-7gmqw\") pod \"redhat-marketplace-s27gv\" (UID: \"c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2\") " pod="openshift-marketplace/redhat-marketplace-s27gv" Nov 25 08:01:49 crc kubenswrapper[5043]: I1125 08:01:49.781375 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2-utilities\") pod \"redhat-marketplace-s27gv\" (UID: \"c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2\") " pod="openshift-marketplace/redhat-marketplace-s27gv" Nov 25 08:01:49 crc kubenswrapper[5043]: I1125 08:01:49.781900 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s27gv"] Nov 25 08:01:49 crc kubenswrapper[5043]: I1125 08:01:49.883483 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2-catalog-content\") pod \"redhat-marketplace-s27gv\" (UID: \"c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2\") " pod="openshift-marketplace/redhat-marketplace-s27gv" Nov 25 08:01:49 crc kubenswrapper[5043]: I1125 08:01:49.883823 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7gmqw\" (UniqueName: \"kubernetes.io/projected/c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2-kube-api-access-7gmqw\") pod \"redhat-marketplace-s27gv\" (UID: \"c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2\") " pod="openshift-marketplace/redhat-marketplace-s27gv" Nov 25 08:01:49 crc kubenswrapper[5043]: I1125 08:01:49.883935 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2-utilities\") pod \"redhat-marketplace-s27gv\" (UID: \"c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2\") " pod="openshift-marketplace/redhat-marketplace-s27gv" Nov 25 08:01:49 crc kubenswrapper[5043]: I1125 08:01:49.883974 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2-catalog-content\") pod \"redhat-marketplace-s27gv\" (UID: \"c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2\") " pod="openshift-marketplace/redhat-marketplace-s27gv" Nov 25 08:01:49 crc kubenswrapper[5043]: I1125 08:01:49.884279 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2-utilities\") pod \"redhat-marketplace-s27gv\" (UID: \"c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2\") " pod="openshift-marketplace/redhat-marketplace-s27gv" Nov 25 08:01:49 crc kubenswrapper[5043]: I1125 08:01:49.902059 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gmqw\" (UniqueName: \"kubernetes.io/projected/c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2-kube-api-access-7gmqw\") pod \"redhat-marketplace-s27gv\" (UID: \"c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2\") " pod="openshift-marketplace/redhat-marketplace-s27gv" Nov 25 08:01:50 crc kubenswrapper[5043]: I1125 08:01:50.102152 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s27gv" Nov 25 08:01:50 crc kubenswrapper[5043]: I1125 08:01:50.554041 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s27gv"] Nov 25 08:01:50 crc kubenswrapper[5043]: W1125 08:01:50.560481 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1ab48ca_f2e8_4b83_83be_9f777f8ff5f2.slice/crio-5ea64641b6bfc13b388a97f223c63b962497e9c1659c8645a7143721e32c66a6 WatchSource:0}: Error finding container 5ea64641b6bfc13b388a97f223c63b962497e9c1659c8645a7143721e32c66a6: Status 404 returned error can't find the container with id 5ea64641b6bfc13b388a97f223c63b962497e9c1659c8645a7143721e32c66a6 Nov 25 08:01:50 crc kubenswrapper[5043]: I1125 08:01:50.610386 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s27gv" event={"ID":"c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2","Type":"ContainerStarted","Data":"5ea64641b6bfc13b388a97f223c63b962497e9c1659c8645a7143721e32c66a6"} Nov 25 08:01:51 crc kubenswrapper[5043]: I1125 08:01:51.627029 5043 generic.go:334] "Generic (PLEG): container finished" podID="c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2" containerID="f9ca9fbe7e052ad0b74676d638d93ae2ea2ad9cf4cba13e3ccac0327caedeb65" exitCode=0 Nov 25 08:01:51 crc kubenswrapper[5043]: I1125 08:01:51.627084 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s27gv" event={"ID":"c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2","Type":"ContainerDied","Data":"f9ca9fbe7e052ad0b74676d638d93ae2ea2ad9cf4cba13e3ccac0327caedeb65"} Nov 25 08:01:52 crc kubenswrapper[5043]: I1125 08:01:52.637832 5043 generic.go:334] "Generic (PLEG): container finished" podID="c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2" containerID="498dfefc0b19071189065b8625a728356a49b745ce0e3f60750c7b9bfa98e578" exitCode=0 Nov 25 08:01:52 crc kubenswrapper[5043]: I1125 
08:01:52.638305 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s27gv" event={"ID":"c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2","Type":"ContainerDied","Data":"498dfefc0b19071189065b8625a728356a49b745ce0e3f60750c7b9bfa98e578"} Nov 25 08:01:53 crc kubenswrapper[5043]: I1125 08:01:53.651772 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s27gv" event={"ID":"c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2","Type":"ContainerStarted","Data":"32414dccb550fac2180cd884691e405d4e97c044155f8e54316a66c4d158b8b2"} Nov 25 08:02:00 crc kubenswrapper[5043]: I1125 08:02:00.102736 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s27gv" Nov 25 08:02:00 crc kubenswrapper[5043]: I1125 08:02:00.104375 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s27gv" Nov 25 08:02:00 crc kubenswrapper[5043]: I1125 08:02:00.156700 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s27gv" Nov 25 08:02:00 crc kubenswrapper[5043]: I1125 08:02:00.179905 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s27gv" podStartSLOduration=9.77458575 podStartE2EDuration="11.179885878s" podCreationTimestamp="2025-11-25 08:01:49 +0000 UTC" firstStartedPulling="2025-11-25 08:01:51.629504615 +0000 UTC m=+2775.797700346" lastFinishedPulling="2025-11-25 08:01:53.034804743 +0000 UTC m=+2777.203000474" observedRunningTime="2025-11-25 08:01:53.682688816 +0000 UTC m=+2777.850884567" watchObservedRunningTime="2025-11-25 08:02:00.179885878 +0000 UTC m=+2784.348081599" Nov 25 08:02:00 crc kubenswrapper[5043]: I1125 08:02:00.803768 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s27gv" Nov 25 
08:02:00 crc kubenswrapper[5043]: I1125 08:02:00.849580 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s27gv"] Nov 25 08:02:02 crc kubenswrapper[5043]: I1125 08:02:02.740853 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s27gv" podUID="c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2" containerName="registry-server" containerID="cri-o://32414dccb550fac2180cd884691e405d4e97c044155f8e54316a66c4d158b8b2" gracePeriod=2 Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.189421 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s27gv" Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.338919 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2-utilities\") pod \"c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2\" (UID: \"c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2\") " Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.339294 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2-catalog-content\") pod \"c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2\" (UID: \"c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2\") " Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.339493 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gmqw\" (UniqueName: \"kubernetes.io/projected/c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2-kube-api-access-7gmqw\") pod \"c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2\" (UID: \"c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2\") " Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.339820 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2-utilities" (OuterVolumeSpecName: "utilities") pod "c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2" (UID: "c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.345331 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2-kube-api-access-7gmqw" (OuterVolumeSpecName: "kube-api-access-7gmqw") pod "c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2" (UID: "c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2"). InnerVolumeSpecName "kube-api-access-7gmqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.356788 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2" (UID: "c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.441931 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gmqw\" (UniqueName: \"kubernetes.io/projected/c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2-kube-api-access-7gmqw\") on node \"crc\" DevicePath \"\"" Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.441968 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.441979 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.751648 5043 generic.go:334] "Generic (PLEG): container finished" podID="c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2" containerID="32414dccb550fac2180cd884691e405d4e97c044155f8e54316a66c4d158b8b2" exitCode=0 Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.751700 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s27gv" event={"ID":"c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2","Type":"ContainerDied","Data":"32414dccb550fac2180cd884691e405d4e97c044155f8e54316a66c4d158b8b2"} Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.751707 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s27gv" Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.751737 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s27gv" event={"ID":"c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2","Type":"ContainerDied","Data":"5ea64641b6bfc13b388a97f223c63b962497e9c1659c8645a7143721e32c66a6"} Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.751760 5043 scope.go:117] "RemoveContainer" containerID="32414dccb550fac2180cd884691e405d4e97c044155f8e54316a66c4d158b8b2" Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.772993 5043 scope.go:117] "RemoveContainer" containerID="498dfefc0b19071189065b8625a728356a49b745ce0e3f60750c7b9bfa98e578" Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.797388 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s27gv"] Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.807055 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s27gv"] Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.812825 5043 scope.go:117] "RemoveContainer" containerID="f9ca9fbe7e052ad0b74676d638d93ae2ea2ad9cf4cba13e3ccac0327caedeb65" Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.844251 5043 scope.go:117] "RemoveContainer" containerID="32414dccb550fac2180cd884691e405d4e97c044155f8e54316a66c4d158b8b2" Nov 25 08:02:03 crc kubenswrapper[5043]: E1125 08:02:03.844802 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32414dccb550fac2180cd884691e405d4e97c044155f8e54316a66c4d158b8b2\": container with ID starting with 32414dccb550fac2180cd884691e405d4e97c044155f8e54316a66c4d158b8b2 not found: ID does not exist" containerID="32414dccb550fac2180cd884691e405d4e97c044155f8e54316a66c4d158b8b2" Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.844837 5043 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32414dccb550fac2180cd884691e405d4e97c044155f8e54316a66c4d158b8b2"} err="failed to get container status \"32414dccb550fac2180cd884691e405d4e97c044155f8e54316a66c4d158b8b2\": rpc error: code = NotFound desc = could not find container \"32414dccb550fac2180cd884691e405d4e97c044155f8e54316a66c4d158b8b2\": container with ID starting with 32414dccb550fac2180cd884691e405d4e97c044155f8e54316a66c4d158b8b2 not found: ID does not exist" Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.844861 5043 scope.go:117] "RemoveContainer" containerID="498dfefc0b19071189065b8625a728356a49b745ce0e3f60750c7b9bfa98e578" Nov 25 08:02:03 crc kubenswrapper[5043]: E1125 08:02:03.845180 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"498dfefc0b19071189065b8625a728356a49b745ce0e3f60750c7b9bfa98e578\": container with ID starting with 498dfefc0b19071189065b8625a728356a49b745ce0e3f60750c7b9bfa98e578 not found: ID does not exist" containerID="498dfefc0b19071189065b8625a728356a49b745ce0e3f60750c7b9bfa98e578" Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.845204 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"498dfefc0b19071189065b8625a728356a49b745ce0e3f60750c7b9bfa98e578"} err="failed to get container status \"498dfefc0b19071189065b8625a728356a49b745ce0e3f60750c7b9bfa98e578\": rpc error: code = NotFound desc = could not find container \"498dfefc0b19071189065b8625a728356a49b745ce0e3f60750c7b9bfa98e578\": container with ID starting with 498dfefc0b19071189065b8625a728356a49b745ce0e3f60750c7b9bfa98e578 not found: ID does not exist" Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.845220 5043 scope.go:117] "RemoveContainer" containerID="f9ca9fbe7e052ad0b74676d638d93ae2ea2ad9cf4cba13e3ccac0327caedeb65" Nov 25 08:02:03 crc kubenswrapper[5043]: E1125 
08:02:03.845516 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9ca9fbe7e052ad0b74676d638d93ae2ea2ad9cf4cba13e3ccac0327caedeb65\": container with ID starting with f9ca9fbe7e052ad0b74676d638d93ae2ea2ad9cf4cba13e3ccac0327caedeb65 not found: ID does not exist" containerID="f9ca9fbe7e052ad0b74676d638d93ae2ea2ad9cf4cba13e3ccac0327caedeb65" Nov 25 08:02:03 crc kubenswrapper[5043]: I1125 08:02:03.845715 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9ca9fbe7e052ad0b74676d638d93ae2ea2ad9cf4cba13e3ccac0327caedeb65"} err="failed to get container status \"f9ca9fbe7e052ad0b74676d638d93ae2ea2ad9cf4cba13e3ccac0327caedeb65\": rpc error: code = NotFound desc = could not find container \"f9ca9fbe7e052ad0b74676d638d93ae2ea2ad9cf4cba13e3ccac0327caedeb65\": container with ID starting with f9ca9fbe7e052ad0b74676d638d93ae2ea2ad9cf4cba13e3ccac0327caedeb65 not found: ID does not exist" Nov 25 08:02:04 crc kubenswrapper[5043]: I1125 08:02:04.974444 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2" path="/var/lib/kubelet/pods/c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2/volumes" Nov 25 08:03:17 crc kubenswrapper[5043]: I1125 08:03:17.276157 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:03:17 crc kubenswrapper[5043]: I1125 08:03:17.276803 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 25 08:03:47 crc kubenswrapper[5043]: I1125 08:03:47.275852 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:03:47 crc kubenswrapper[5043]: I1125 08:03:47.276710 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:04:17 crc kubenswrapper[5043]: I1125 08:04:17.275813 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:04:17 crc kubenswrapper[5043]: I1125 08:04:17.277304 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:04:17 crc kubenswrapper[5043]: I1125 08:04:17.277413 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 08:04:17 crc kubenswrapper[5043]: I1125 08:04:17.278212 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee"} 
pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:04:17 crc kubenswrapper[5043]: I1125 08:04:17.278368 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" gracePeriod=600 Nov 25 08:04:17 crc kubenswrapper[5043]: E1125 08:04:17.412882 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:04:17 crc kubenswrapper[5043]: I1125 08:04:17.889784 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" exitCode=0 Nov 25 08:04:17 crc kubenswrapper[5043]: I1125 08:04:17.889815 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee"} Nov 25 08:04:17 crc kubenswrapper[5043]: I1125 08:04:17.889899 5043 scope.go:117] "RemoveContainer" containerID="f3b371f6f213dc1b8fdb5b0f0313c26428ee4292c66cb96a1f2daaf1fae57efc" Nov 25 08:04:17 crc kubenswrapper[5043]: I1125 08:04:17.890657 5043 scope.go:117] "RemoveContainer" containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" Nov 
25 08:04:17 crc kubenswrapper[5043]: E1125 08:04:17.891206 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:04:32 crc kubenswrapper[5043]: I1125 08:04:32.963843 5043 scope.go:117] "RemoveContainer" containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" Nov 25 08:04:32 crc kubenswrapper[5043]: E1125 08:04:32.965006 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:04:45 crc kubenswrapper[5043]: I1125 08:04:45.963247 5043 scope.go:117] "RemoveContainer" containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" Nov 25 08:04:45 crc kubenswrapper[5043]: E1125 08:04:45.964138 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:04:59 crc kubenswrapper[5043]: I1125 08:04:59.963151 5043 scope.go:117] "RemoveContainer" 
containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" Nov 25 08:04:59 crc kubenswrapper[5043]: E1125 08:04:59.965100 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:05:13 crc kubenswrapper[5043]: I1125 08:05:13.963331 5043 scope.go:117] "RemoveContainer" containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" Nov 25 08:05:13 crc kubenswrapper[5043]: E1125 08:05:13.965072 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:05:24 crc kubenswrapper[5043]: I1125 08:05:24.962513 5043 scope.go:117] "RemoveContainer" containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" Nov 25 08:05:24 crc kubenswrapper[5043]: E1125 08:05:24.963283 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:05:37 crc kubenswrapper[5043]: I1125 08:05:37.963634 5043 scope.go:117] 
"RemoveContainer" containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" Nov 25 08:05:37 crc kubenswrapper[5043]: E1125 08:05:37.964495 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:05:48 crc kubenswrapper[5043]: I1125 08:05:48.962493 5043 scope.go:117] "RemoveContainer" containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" Nov 25 08:05:48 crc kubenswrapper[5043]: E1125 08:05:48.964498 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:06:00 crc kubenswrapper[5043]: I1125 08:06:00.962894 5043 scope.go:117] "RemoveContainer" containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" Nov 25 08:06:00 crc kubenswrapper[5043]: E1125 08:06:00.963638 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:06:13 crc kubenswrapper[5043]: I1125 08:06:13.963990 
5043 scope.go:117] "RemoveContainer" containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" Nov 25 08:06:13 crc kubenswrapper[5043]: E1125 08:06:13.965497 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:06:26 crc kubenswrapper[5043]: I1125 08:06:26.970543 5043 scope.go:117] "RemoveContainer" containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" Nov 25 08:06:26 crc kubenswrapper[5043]: E1125 08:06:26.971459 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:06:41 crc kubenswrapper[5043]: I1125 08:06:41.962820 5043 scope.go:117] "RemoveContainer" containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" Nov 25 08:06:41 crc kubenswrapper[5043]: E1125 08:06:41.965059 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:06:42 crc kubenswrapper[5043]: I1125 
08:06:42.665548 5043 generic.go:334] "Generic (PLEG): container finished" podID="e7416361-3a03-4892-9a17-36934133905d" containerID="00bcd26ab79e2785b20882141fe8475ec55a24ad889f77246c2836e6602cb32e" exitCode=0 Nov 25 08:06:42 crc kubenswrapper[5043]: I1125 08:06:42.665685 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" event={"ID":"e7416361-3a03-4892-9a17-36934133905d","Type":"ContainerDied","Data":"00bcd26ab79e2785b20882141fe8475ec55a24ad889f77246c2836e6602cb32e"} Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.057410 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.076102 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-libvirt-combined-ca-bundle\") pod \"e7416361-3a03-4892-9a17-36934133905d\" (UID: \"e7416361-3a03-4892-9a17-36934133905d\") " Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.076394 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-libvirt-secret-0\") pod \"e7416361-3a03-4892-9a17-36934133905d\" (UID: \"e7416361-3a03-4892-9a17-36934133905d\") " Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.076519 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-ceph\") pod \"e7416361-3a03-4892-9a17-36934133905d\" (UID: \"e7416361-3a03-4892-9a17-36934133905d\") " Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.076686 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp2hr\" (UniqueName: 
\"kubernetes.io/projected/e7416361-3a03-4892-9a17-36934133905d-kube-api-access-dp2hr\") pod \"e7416361-3a03-4892-9a17-36934133905d\" (UID: \"e7416361-3a03-4892-9a17-36934133905d\") " Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.076760 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-inventory\") pod \"e7416361-3a03-4892-9a17-36934133905d\" (UID: \"e7416361-3a03-4892-9a17-36934133905d\") " Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.076777 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-ssh-key\") pod \"e7416361-3a03-4892-9a17-36934133905d\" (UID: \"e7416361-3a03-4892-9a17-36934133905d\") " Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.082837 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-ceph" (OuterVolumeSpecName: "ceph") pod "e7416361-3a03-4892-9a17-36934133905d" (UID: "e7416361-3a03-4892-9a17-36934133905d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.083557 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7416361-3a03-4892-9a17-36934133905d-kube-api-access-dp2hr" (OuterVolumeSpecName: "kube-api-access-dp2hr") pod "e7416361-3a03-4892-9a17-36934133905d" (UID: "e7416361-3a03-4892-9a17-36934133905d"). InnerVolumeSpecName "kube-api-access-dp2hr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.083566 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e7416361-3a03-4892-9a17-36934133905d" (UID: "e7416361-3a03-4892-9a17-36934133905d"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.106183 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-inventory" (OuterVolumeSpecName: "inventory") pod "e7416361-3a03-4892-9a17-36934133905d" (UID: "e7416361-3a03-4892-9a17-36934133905d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.108964 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e7416361-3a03-4892-9a17-36934133905d" (UID: "e7416361-3a03-4892-9a17-36934133905d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.113094 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "e7416361-3a03-4892-9a17-36934133905d" (UID: "e7416361-3a03-4892-9a17-36934133905d"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.179827 5043 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.179871 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp2hr\" (UniqueName: \"kubernetes.io/projected/e7416361-3a03-4892-9a17-36934133905d-kube-api-access-dp2hr\") on node \"crc\" DevicePath \"\"" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.179885 5043 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.179898 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.179910 5043 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.179922 5043 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e7416361-3a03-4892-9a17-36934133905d-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.686275 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" event={"ID":"e7416361-3a03-4892-9a17-36934133905d","Type":"ContainerDied","Data":"daf3c75a753412df22566718d924e22a2cbb3d8199cb5f2ef6db90c9f0719437"} Nov 25 08:06:44 crc 
kubenswrapper[5043]: I1125 08:06:44.686326 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daf3c75a753412df22566718d924e22a2cbb3d8199cb5f2ef6db90c9f0719437" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.686403 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.801483 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch"] Nov 25 08:06:44 crc kubenswrapper[5043]: E1125 08:06:44.801854 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2" containerName="registry-server" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.801871 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2" containerName="registry-server" Nov 25 08:06:44 crc kubenswrapper[5043]: E1125 08:06:44.801898 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7416361-3a03-4892-9a17-36934133905d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.801905 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7416361-3a03-4892-9a17-36934133905d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 25 08:06:44 crc kubenswrapper[5043]: E1125 08:06:44.801919 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2" containerName="extract-utilities" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.801926 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2" containerName="extract-utilities" Nov 25 08:06:44 crc kubenswrapper[5043]: E1125 08:06:44.801938 5043 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2" containerName="extract-content" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.801943 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2" containerName="extract-content" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.802106 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ab48ca-f2e8-4b83-83be-9f777f8ff5f2" containerName="registry-server" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.802127 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7416361-3a03-4892-9a17-36934133905d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.802818 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.804748 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.804794 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.804925 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ptmq2" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.805649 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.806328 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.806383 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.806395 5043 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.806395 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.806680 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.818199 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch"] Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.891229 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.891284 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.891313 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.891385 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.891462 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.891498 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.891566 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.891614 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g745f\" (UniqueName: \"kubernetes.io/projected/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-kube-api-access-g745f\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.891684 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.891712 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.891739 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.992713 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.992775 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.992797 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.992873 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.992906 5043 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g745f\" (UniqueName: \"kubernetes.io/projected/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-kube-api-access-g745f\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.992937 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.992957 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.992980 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.993015 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.993041 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.993061 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.994123 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.994125 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.996897 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.997027 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.997659 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.998073 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.998472 5043 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:44 crc kubenswrapper[5043]: I1125 08:06:44.998482 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:45 crc kubenswrapper[5043]: I1125 08:06:44.999572 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:45 crc kubenswrapper[5043]: I1125 08:06:45.006428 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:45 crc kubenswrapper[5043]: I1125 08:06:45.009502 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g745f\" (UniqueName: \"kubernetes.io/projected/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-kube-api-access-g745f\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:45 crc kubenswrapper[5043]: I1125 08:06:45.118582 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:06:45 crc kubenswrapper[5043]: I1125 08:06:45.649716 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch"] Nov 25 08:06:45 crc kubenswrapper[5043]: I1125 08:06:45.650078 5043 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:06:45 crc kubenswrapper[5043]: I1125 08:06:45.695038 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" event={"ID":"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada","Type":"ContainerStarted","Data":"0fa89d9053451463b1b3862551890f2ef0144a8a1379352c39b39fcde72dd3ce"} Nov 25 08:06:46 crc kubenswrapper[5043]: I1125 08:06:46.702443 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" event={"ID":"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada","Type":"ContainerStarted","Data":"f14c52d1b2ca297e52b1bfc179e23925377717efd6d5604fef138128a0dae37e"} Nov 25 08:06:46 crc kubenswrapper[5043]: I1125 08:06:46.724176 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" podStartSLOduration=2.168096929 podStartE2EDuration="2.724158853s" podCreationTimestamp="2025-11-25 08:06:44 +0000 UTC" firstStartedPulling="2025-11-25 08:06:45.649766287 +0000 UTC m=+3069.817962008" lastFinishedPulling="2025-11-25 08:06:46.205828191 +0000 UTC m=+3070.374023932" observedRunningTime="2025-11-25 08:06:46.71734685 +0000 
UTC m=+3070.885542571" watchObservedRunningTime="2025-11-25 08:06:46.724158853 +0000 UTC m=+3070.892354574" Nov 25 08:06:55 crc kubenswrapper[5043]: I1125 08:06:55.963175 5043 scope.go:117] "RemoveContainer" containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" Nov 25 08:06:55 crc kubenswrapper[5043]: E1125 08:06:55.963942 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:07:10 crc kubenswrapper[5043]: I1125 08:07:10.964008 5043 scope.go:117] "RemoveContainer" containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" Nov 25 08:07:10 crc kubenswrapper[5043]: E1125 08:07:10.966311 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:07:22 crc kubenswrapper[5043]: I1125 08:07:22.963387 5043 scope.go:117] "RemoveContainer" containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" Nov 25 08:07:22 crc kubenswrapper[5043]: E1125 08:07:22.964229 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:07:36 crc kubenswrapper[5043]: I1125 08:07:36.970418 5043 scope.go:117] "RemoveContainer" containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" Nov 25 08:07:36 crc kubenswrapper[5043]: E1125 08:07:36.971432 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:07:51 crc kubenswrapper[5043]: I1125 08:07:51.962898 5043 scope.go:117] "RemoveContainer" containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" Nov 25 08:07:51 crc kubenswrapper[5043]: E1125 08:07:51.963677 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:08:06 crc kubenswrapper[5043]: I1125 08:08:06.970892 5043 scope.go:117] "RemoveContainer" containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" Nov 25 08:08:06 crc kubenswrapper[5043]: E1125 08:08:06.971982 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:08:19 crc kubenswrapper[5043]: I1125 08:08:19.963082 5043 scope.go:117] "RemoveContainer" containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" Nov 25 08:08:19 crc kubenswrapper[5043]: E1125 08:08:19.963832 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:08:23 crc kubenswrapper[5043]: I1125 08:08:23.208954 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rsvhx"] Nov 25 08:08:23 crc kubenswrapper[5043]: I1125 08:08:23.218137 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rsvhx" Nov 25 08:08:23 crc kubenswrapper[5043]: I1125 08:08:23.263377 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rsvhx"] Nov 25 08:08:23 crc kubenswrapper[5043]: I1125 08:08:23.385073 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad19d6e-732d-42bd-97a5-b755b756aabb-catalog-content\") pod \"certified-operators-rsvhx\" (UID: \"dad19d6e-732d-42bd-97a5-b755b756aabb\") " pod="openshift-marketplace/certified-operators-rsvhx" Nov 25 08:08:23 crc kubenswrapper[5043]: I1125 08:08:23.385163 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjs6h\" (UniqueName: \"kubernetes.io/projected/dad19d6e-732d-42bd-97a5-b755b756aabb-kube-api-access-cjs6h\") pod \"certified-operators-rsvhx\" (UID: \"dad19d6e-732d-42bd-97a5-b755b756aabb\") " pod="openshift-marketplace/certified-operators-rsvhx" Nov 25 08:08:23 crc kubenswrapper[5043]: I1125 08:08:23.385241 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad19d6e-732d-42bd-97a5-b755b756aabb-utilities\") pod \"certified-operators-rsvhx\" (UID: \"dad19d6e-732d-42bd-97a5-b755b756aabb\") " pod="openshift-marketplace/certified-operators-rsvhx" Nov 25 08:08:23 crc kubenswrapper[5043]: I1125 08:08:23.487161 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad19d6e-732d-42bd-97a5-b755b756aabb-catalog-content\") pod \"certified-operators-rsvhx\" (UID: \"dad19d6e-732d-42bd-97a5-b755b756aabb\") " pod="openshift-marketplace/certified-operators-rsvhx" Nov 25 08:08:23 crc kubenswrapper[5043]: I1125 08:08:23.487290 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cjs6h\" (UniqueName: \"kubernetes.io/projected/dad19d6e-732d-42bd-97a5-b755b756aabb-kube-api-access-cjs6h\") pod \"certified-operators-rsvhx\" (UID: \"dad19d6e-732d-42bd-97a5-b755b756aabb\") " pod="openshift-marketplace/certified-operators-rsvhx" Nov 25 08:08:23 crc kubenswrapper[5043]: I1125 08:08:23.487333 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad19d6e-732d-42bd-97a5-b755b756aabb-utilities\") pod \"certified-operators-rsvhx\" (UID: \"dad19d6e-732d-42bd-97a5-b755b756aabb\") " pod="openshift-marketplace/certified-operators-rsvhx" Nov 25 08:08:23 crc kubenswrapper[5043]: I1125 08:08:23.487787 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad19d6e-732d-42bd-97a5-b755b756aabb-catalog-content\") pod \"certified-operators-rsvhx\" (UID: \"dad19d6e-732d-42bd-97a5-b755b756aabb\") " pod="openshift-marketplace/certified-operators-rsvhx" Nov 25 08:08:23 crc kubenswrapper[5043]: I1125 08:08:23.487817 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad19d6e-732d-42bd-97a5-b755b756aabb-utilities\") pod \"certified-operators-rsvhx\" (UID: \"dad19d6e-732d-42bd-97a5-b755b756aabb\") " pod="openshift-marketplace/certified-operators-rsvhx" Nov 25 08:08:23 crc kubenswrapper[5043]: I1125 08:08:23.511438 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjs6h\" (UniqueName: \"kubernetes.io/projected/dad19d6e-732d-42bd-97a5-b755b756aabb-kube-api-access-cjs6h\") pod \"certified-operators-rsvhx\" (UID: \"dad19d6e-732d-42bd-97a5-b755b756aabb\") " pod="openshift-marketplace/certified-operators-rsvhx" Nov 25 08:08:23 crc kubenswrapper[5043]: I1125 08:08:23.551003 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rsvhx" Nov 25 08:08:24 crc kubenswrapper[5043]: I1125 08:08:24.134191 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rsvhx"] Nov 25 08:08:24 crc kubenswrapper[5043]: I1125 08:08:24.629479 5043 generic.go:334] "Generic (PLEG): container finished" podID="dad19d6e-732d-42bd-97a5-b755b756aabb" containerID="a591098ff24270af67a1b62a4a3043e5db878fcaad853f75f7ba6be81f74a80b" exitCode=0 Nov 25 08:08:24 crc kubenswrapper[5043]: I1125 08:08:24.629630 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsvhx" event={"ID":"dad19d6e-732d-42bd-97a5-b755b756aabb","Type":"ContainerDied","Data":"a591098ff24270af67a1b62a4a3043e5db878fcaad853f75f7ba6be81f74a80b"} Nov 25 08:08:24 crc kubenswrapper[5043]: I1125 08:08:24.629877 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsvhx" event={"ID":"dad19d6e-732d-42bd-97a5-b755b756aabb","Type":"ContainerStarted","Data":"5d4a1e585119c65ded5a51da40887974ed6563f1fc9880fc7a2c83c7411c5ee7"} Nov 25 08:08:25 crc kubenswrapper[5043]: I1125 08:08:25.644026 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsvhx" event={"ID":"dad19d6e-732d-42bd-97a5-b755b756aabb","Type":"ContainerStarted","Data":"d376a965ed497ceec727d135861ae7978335b5fba2bc696981603325f6d14cf0"} Nov 25 08:08:26 crc kubenswrapper[5043]: I1125 08:08:26.656464 5043 generic.go:334] "Generic (PLEG): container finished" podID="dad19d6e-732d-42bd-97a5-b755b756aabb" containerID="d376a965ed497ceec727d135861ae7978335b5fba2bc696981603325f6d14cf0" exitCode=0 Nov 25 08:08:26 crc kubenswrapper[5043]: I1125 08:08:26.656506 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsvhx" 
event={"ID":"dad19d6e-732d-42bd-97a5-b755b756aabb","Type":"ContainerDied","Data":"d376a965ed497ceec727d135861ae7978335b5fba2bc696981603325f6d14cf0"} Nov 25 08:08:26 crc kubenswrapper[5043]: I1125 08:08:26.656531 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsvhx" event={"ID":"dad19d6e-732d-42bd-97a5-b755b756aabb","Type":"ContainerStarted","Data":"52979ca625fd58bceea13011c7e50c230a4f29a516e836f62cb144936eae814e"} Nov 25 08:08:26 crc kubenswrapper[5043]: I1125 08:08:26.678885 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rsvhx" podStartSLOduration=2.147139928 podStartE2EDuration="3.678869943s" podCreationTimestamp="2025-11-25 08:08:23 +0000 UTC" firstStartedPulling="2025-11-25 08:08:24.631337693 +0000 UTC m=+3168.799533424" lastFinishedPulling="2025-11-25 08:08:26.163067718 +0000 UTC m=+3170.331263439" observedRunningTime="2025-11-25 08:08:26.673638381 +0000 UTC m=+3170.841834102" watchObservedRunningTime="2025-11-25 08:08:26.678869943 +0000 UTC m=+3170.847065664" Nov 25 08:08:33 crc kubenswrapper[5043]: I1125 08:08:33.551809 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rsvhx" Nov 25 08:08:33 crc kubenswrapper[5043]: I1125 08:08:33.552366 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rsvhx" Nov 25 08:08:33 crc kubenswrapper[5043]: I1125 08:08:33.601799 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rsvhx" Nov 25 08:08:33 crc kubenswrapper[5043]: I1125 08:08:33.767219 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rsvhx" Nov 25 08:08:33 crc kubenswrapper[5043]: I1125 08:08:33.834596 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-rsvhx"] Nov 25 08:08:34 crc kubenswrapper[5043]: I1125 08:08:34.963625 5043 scope.go:117] "RemoveContainer" containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" Nov 25 08:08:34 crc kubenswrapper[5043]: E1125 08:08:34.964258 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:08:35 crc kubenswrapper[5043]: I1125 08:08:35.733482 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rsvhx" podUID="dad19d6e-732d-42bd-97a5-b755b756aabb" containerName="registry-server" containerID="cri-o://52979ca625fd58bceea13011c7e50c230a4f29a516e836f62cb144936eae814e" gracePeriod=2 Nov 25 08:08:37 crc kubenswrapper[5043]: I1125 08:08:37.653404 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rsvhx" Nov 25 08:08:37 crc kubenswrapper[5043]: I1125 08:08:37.749924 5043 generic.go:334] "Generic (PLEG): container finished" podID="dad19d6e-732d-42bd-97a5-b755b756aabb" containerID="52979ca625fd58bceea13011c7e50c230a4f29a516e836f62cb144936eae814e" exitCode=0 Nov 25 08:08:37 crc kubenswrapper[5043]: I1125 08:08:37.749968 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsvhx" event={"ID":"dad19d6e-732d-42bd-97a5-b755b756aabb","Type":"ContainerDied","Data":"52979ca625fd58bceea13011c7e50c230a4f29a516e836f62cb144936eae814e"} Nov 25 08:08:37 crc kubenswrapper[5043]: I1125 08:08:37.749992 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rsvhx" event={"ID":"dad19d6e-732d-42bd-97a5-b755b756aabb","Type":"ContainerDied","Data":"5d4a1e585119c65ded5a51da40887974ed6563f1fc9880fc7a2c83c7411c5ee7"} Nov 25 08:08:37 crc kubenswrapper[5043]: I1125 08:08:37.750010 5043 scope.go:117] "RemoveContainer" containerID="52979ca625fd58bceea13011c7e50c230a4f29a516e836f62cb144936eae814e" Nov 25 08:08:37 crc kubenswrapper[5043]: I1125 08:08:37.750189 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rsvhx" Nov 25 08:08:37 crc kubenswrapper[5043]: I1125 08:08:37.759489 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjs6h\" (UniqueName: \"kubernetes.io/projected/dad19d6e-732d-42bd-97a5-b755b756aabb-kube-api-access-cjs6h\") pod \"dad19d6e-732d-42bd-97a5-b755b756aabb\" (UID: \"dad19d6e-732d-42bd-97a5-b755b756aabb\") " Nov 25 08:08:37 crc kubenswrapper[5043]: I1125 08:08:37.759639 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad19d6e-732d-42bd-97a5-b755b756aabb-catalog-content\") pod \"dad19d6e-732d-42bd-97a5-b755b756aabb\" (UID: \"dad19d6e-732d-42bd-97a5-b755b756aabb\") " Nov 25 08:08:37 crc kubenswrapper[5043]: I1125 08:08:37.759785 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad19d6e-732d-42bd-97a5-b755b756aabb-utilities\") pod \"dad19d6e-732d-42bd-97a5-b755b756aabb\" (UID: \"dad19d6e-732d-42bd-97a5-b755b756aabb\") " Nov 25 08:08:37 crc kubenswrapper[5043]: I1125 08:08:37.760437 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dad19d6e-732d-42bd-97a5-b755b756aabb-utilities" (OuterVolumeSpecName: "utilities") pod "dad19d6e-732d-42bd-97a5-b755b756aabb" (UID: "dad19d6e-732d-42bd-97a5-b755b756aabb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:08:37 crc kubenswrapper[5043]: I1125 08:08:37.764441 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dad19d6e-732d-42bd-97a5-b755b756aabb-kube-api-access-cjs6h" (OuterVolumeSpecName: "kube-api-access-cjs6h") pod "dad19d6e-732d-42bd-97a5-b755b756aabb" (UID: "dad19d6e-732d-42bd-97a5-b755b756aabb"). InnerVolumeSpecName "kube-api-access-cjs6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:08:37 crc kubenswrapper[5043]: I1125 08:08:37.772403 5043 scope.go:117] "RemoveContainer" containerID="d376a965ed497ceec727d135861ae7978335b5fba2bc696981603325f6d14cf0" Nov 25 08:08:37 crc kubenswrapper[5043]: I1125 08:08:37.807155 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dad19d6e-732d-42bd-97a5-b755b756aabb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dad19d6e-732d-42bd-97a5-b755b756aabb" (UID: "dad19d6e-732d-42bd-97a5-b755b756aabb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:08:37 crc kubenswrapper[5043]: I1125 08:08:37.822385 5043 scope.go:117] "RemoveContainer" containerID="a591098ff24270af67a1b62a4a3043e5db878fcaad853f75f7ba6be81f74a80b" Nov 25 08:08:37 crc kubenswrapper[5043]: I1125 08:08:37.860439 5043 scope.go:117] "RemoveContainer" containerID="52979ca625fd58bceea13011c7e50c230a4f29a516e836f62cb144936eae814e" Nov 25 08:08:37 crc kubenswrapper[5043]: E1125 08:08:37.861263 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52979ca625fd58bceea13011c7e50c230a4f29a516e836f62cb144936eae814e\": container with ID starting with 52979ca625fd58bceea13011c7e50c230a4f29a516e836f62cb144936eae814e not found: ID does not exist" containerID="52979ca625fd58bceea13011c7e50c230a4f29a516e836f62cb144936eae814e" Nov 25 08:08:37 crc kubenswrapper[5043]: I1125 08:08:37.861339 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52979ca625fd58bceea13011c7e50c230a4f29a516e836f62cb144936eae814e"} err="failed to get container status \"52979ca625fd58bceea13011c7e50c230a4f29a516e836f62cb144936eae814e\": rpc error: code = NotFound desc = could not find container \"52979ca625fd58bceea13011c7e50c230a4f29a516e836f62cb144936eae814e\": container with ID starting 
with 52979ca625fd58bceea13011c7e50c230a4f29a516e836f62cb144936eae814e not found: ID does not exist" Nov 25 08:08:37 crc kubenswrapper[5043]: I1125 08:08:37.861381 5043 scope.go:117] "RemoveContainer" containerID="d376a965ed497ceec727d135861ae7978335b5fba2bc696981603325f6d14cf0" Nov 25 08:08:37 crc kubenswrapper[5043]: E1125 08:08:37.861773 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d376a965ed497ceec727d135861ae7978335b5fba2bc696981603325f6d14cf0\": container with ID starting with d376a965ed497ceec727d135861ae7978335b5fba2bc696981603325f6d14cf0 not found: ID does not exist" containerID="d376a965ed497ceec727d135861ae7978335b5fba2bc696981603325f6d14cf0" Nov 25 08:08:37 crc kubenswrapper[5043]: I1125 08:08:37.861809 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d376a965ed497ceec727d135861ae7978335b5fba2bc696981603325f6d14cf0"} err="failed to get container status \"d376a965ed497ceec727d135861ae7978335b5fba2bc696981603325f6d14cf0\": rpc error: code = NotFound desc = could not find container \"d376a965ed497ceec727d135861ae7978335b5fba2bc696981603325f6d14cf0\": container with ID starting with d376a965ed497ceec727d135861ae7978335b5fba2bc696981603325f6d14cf0 not found: ID does not exist" Nov 25 08:08:37 crc kubenswrapper[5043]: I1125 08:08:37.861839 5043 scope.go:117] "RemoveContainer" containerID="a591098ff24270af67a1b62a4a3043e5db878fcaad853f75f7ba6be81f74a80b" Nov 25 08:08:37 crc kubenswrapper[5043]: I1125 08:08:37.862143 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjs6h\" (UniqueName: \"kubernetes.io/projected/dad19d6e-732d-42bd-97a5-b755b756aabb-kube-api-access-cjs6h\") on node \"crc\" DevicePath \"\"" Nov 25 08:08:37 crc kubenswrapper[5043]: I1125 08:08:37.862263 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/dad19d6e-732d-42bd-97a5-b755b756aabb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:08:37 crc kubenswrapper[5043]: I1125 08:08:37.862351 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad19d6e-732d-42bd-97a5-b755b756aabb-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:08:37 crc kubenswrapper[5043]: E1125 08:08:37.862143 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a591098ff24270af67a1b62a4a3043e5db878fcaad853f75f7ba6be81f74a80b\": container with ID starting with a591098ff24270af67a1b62a4a3043e5db878fcaad853f75f7ba6be81f74a80b not found: ID does not exist" containerID="a591098ff24270af67a1b62a4a3043e5db878fcaad853f75f7ba6be81f74a80b" Nov 25 08:08:37 crc kubenswrapper[5043]: I1125 08:08:37.862466 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a591098ff24270af67a1b62a4a3043e5db878fcaad853f75f7ba6be81f74a80b"} err="failed to get container status \"a591098ff24270af67a1b62a4a3043e5db878fcaad853f75f7ba6be81f74a80b\": rpc error: code = NotFound desc = could not find container \"a591098ff24270af67a1b62a4a3043e5db878fcaad853f75f7ba6be81f74a80b\": container with ID starting with a591098ff24270af67a1b62a4a3043e5db878fcaad853f75f7ba6be81f74a80b not found: ID does not exist" Nov 25 08:08:38 crc kubenswrapper[5043]: I1125 08:08:38.084642 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rsvhx"] Nov 25 08:08:38 crc kubenswrapper[5043]: I1125 08:08:38.093304 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rsvhx"] Nov 25 08:08:38 crc kubenswrapper[5043]: I1125 08:08:38.976486 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dad19d6e-732d-42bd-97a5-b755b756aabb" 
path="/var/lib/kubelet/pods/dad19d6e-732d-42bd-97a5-b755b756aabb/volumes" Nov 25 08:08:47 crc kubenswrapper[5043]: I1125 08:08:47.964028 5043 scope.go:117] "RemoveContainer" containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" Nov 25 08:08:47 crc kubenswrapper[5043]: E1125 08:08:47.965468 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:09:02 crc kubenswrapper[5043]: I1125 08:09:02.962733 5043 scope.go:117] "RemoveContainer" containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" Nov 25 08:09:02 crc kubenswrapper[5043]: E1125 08:09:02.963502 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:09:17 crc kubenswrapper[5043]: I1125 08:09:17.962922 5043 scope.go:117] "RemoveContainer" containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" Nov 25 08:09:19 crc kubenswrapper[5043]: I1125 08:09:19.109484 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"18343daeda377d7ada554d8d6723fd48b9d69c9bb0127c1bc82ab8875fb2058e"} Nov 25 08:09:57 crc kubenswrapper[5043]: I1125 08:09:57.442173 
5043 generic.go:334] "Generic (PLEG): container finished" podID="ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada" containerID="f14c52d1b2ca297e52b1bfc179e23925377717efd6d5604fef138128a0dae37e" exitCode=0 Nov 25 08:09:57 crc kubenswrapper[5043]: I1125 08:09:57.442248 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" event={"ID":"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada","Type":"ContainerDied","Data":"f14c52d1b2ca297e52b1bfc179e23925377717efd6d5604fef138128a0dae37e"} Nov 25 08:09:58 crc kubenswrapper[5043]: I1125 08:09:58.912957 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.032034 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-extra-config-0\") pod \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.032122 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-inventory\") pod \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.032183 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-migration-ssh-key-1\") pod \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.032258 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-ssh-key\") pod \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.032293 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-migration-ssh-key-0\") pod \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.032375 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g745f\" (UniqueName: \"kubernetes.io/projected/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-kube-api-access-g745f\") pod \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.032407 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-ceph\") pod \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.032442 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-cell1-compute-config-0\") pod \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.032490 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-ceph-nova-0\") pod \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " Nov 25 08:09:59 crc 
kubenswrapper[5043]: I1125 08:09:59.032574 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-cell1-compute-config-1\") pod \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.032646 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-custom-ceph-combined-ca-bundle\") pod \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\" (UID: \"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada\") " Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.039810 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada" (UID: "ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.039911 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-kube-api-access-g745f" (OuterVolumeSpecName: "kube-api-access-g745f") pod "ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada" (UID: "ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada"). InnerVolumeSpecName "kube-api-access-g745f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.046040 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-ceph" (OuterVolumeSpecName: "ceph") pod "ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada" (UID: "ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.066687 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada" (UID: "ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.068926 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada" (UID: "ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.071112 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada" (UID: "ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada"). InnerVolumeSpecName "ceph-nova-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.072086 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada" (UID: "ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.074760 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada" (UID: "ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.075454 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada" (UID: "ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.077170 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada" (UID: "ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.078050 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-inventory" (OuterVolumeSpecName: "inventory") pod "ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada" (UID: "ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.135438 5043 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.135483 5043 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.135496 5043 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.135508 5043 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.135516 5043 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.135527 5043 reconciler_common.go:293] "Volume 
detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.135535 5043 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.135544 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g745f\" (UniqueName: \"kubernetes.io/projected/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-kube-api-access-g745f\") on node \"crc\" DevicePath \"\"" Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.135552 5043 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.135560 5043 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.135568 5043 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.461933 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" event={"ID":"ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada","Type":"ContainerDied","Data":"0fa89d9053451463b1b3862551890f2ef0144a8a1379352c39b39fcde72dd3ce"} Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.461971 5043 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="0fa89d9053451463b1b3862551890f2ef0144a8a1379352c39b39fcde72dd3ce" Nov 25 08:09:59 crc kubenswrapper[5043]: I1125 08:09:59.462000 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.362652 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 25 08:10:14 crc kubenswrapper[5043]: E1125 08:10:14.363587 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad19d6e-732d-42bd-97a5-b755b756aabb" containerName="registry-server" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.363615 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad19d6e-732d-42bd-97a5-b755b756aabb" containerName="registry-server" Nov 25 08:10:14 crc kubenswrapper[5043]: E1125 08:10:14.363647 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad19d6e-732d-42bd-97a5-b755b756aabb" containerName="extract-utilities" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.363653 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad19d6e-732d-42bd-97a5-b755b756aabb" containerName="extract-utilities" Nov 25 08:10:14 crc kubenswrapper[5043]: E1125 08:10:14.363665 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad19d6e-732d-42bd-97a5-b755b756aabb" containerName="extract-content" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.363671 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad19d6e-732d-42bd-97a5-b755b756aabb" containerName="extract-content" Nov 25 08:10:14 crc kubenswrapper[5043]: E1125 08:10:14.363684 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.363691 5043 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.363904 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.363926 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="dad19d6e-732d-42bd-97a5-b755b756aabb" containerName="registry-server" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.364992 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.375328 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.375855 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.409933 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.420700 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.420777 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " 
pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.420804 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.420828 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.420849 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.420865 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hpq8\" (UniqueName: \"kubernetes.io/projected/c15deb32-5994-4bf0-bd30-1a309d58f82c-kube-api-access-2hpq8\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.420883 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-run\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " 
pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.420899 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-dev\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.420929 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-sys\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.420964 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.420985 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.421005 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c15deb32-5994-4bf0-bd30-1a309d58f82c-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc 
kubenswrapper[5043]: I1125 08:10:14.421026 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c15deb32-5994-4bf0-bd30-1a309d58f82c-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.421057 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c15deb32-5994-4bf0-bd30-1a309d58f82c-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.421081 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c15deb32-5994-4bf0-bd30-1a309d58f82c-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.421123 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c15deb32-5994-4bf0-bd30-1a309d58f82c-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.467421 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.474186 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.480177 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.501969 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.524520 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.524628 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/850ff79f-0c56-4cc9-be55-a76979fc1ac8-scripts\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.524655 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.524682 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c15deb32-5994-4bf0-bd30-1a309d58f82c-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.524722 5043 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-lib-modules\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.524753 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c15deb32-5994-4bf0-bd30-1a309d58f82c-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.524801 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c15deb32-5994-4bf0-bd30-1a309d58f82c-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.524827 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-run\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.524854 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/850ff79f-0c56-4cc9-be55-a76979fc1ac8-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.524877 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c15deb32-5994-4bf0-bd30-1a309d58f82c-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.524946 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/850ff79f-0c56-4cc9-be55-a76979fc1ac8-config-data-custom\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.524971 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-etc-nvme\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.524993 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c15deb32-5994-4bf0-bd30-1a309d58f82c-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.525024 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.525060 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-etc-iscsi\") pod 
\"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.525091 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.525121 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.525170 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.525195 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.525224 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " 
pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.525245 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-run\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.525266 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hpq8\" (UniqueName: \"kubernetes.io/projected/c15deb32-5994-4bf0-bd30-1a309d58f82c-kube-api-access-2hpq8\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.525288 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-dev\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.525308 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.525332 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-sys\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.525369 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-sys\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.525392 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-dev\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.525414 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8tff\" (UniqueName: \"kubernetes.io/projected/850ff79f-0c56-4cc9-be55-a76979fc1ac8-kube-api-access-n8tff\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.525442 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/850ff79f-0c56-4cc9-be55-a76979fc1ac8-ceph\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.525462 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.525489 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.525511 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/850ff79f-0c56-4cc9-be55-a76979fc1ac8-config-data\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.525648 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.525832 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.531000 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.531840 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " 
pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.531918 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-run\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.540718 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-dev\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.541665 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.543062 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.543241 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.543877 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/c15deb32-5994-4bf0-bd30-1a309d58f82c-sys\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.547322 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c15deb32-5994-4bf0-bd30-1a309d58f82c-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.548262 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c15deb32-5994-4bf0-bd30-1a309d58f82c-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.550554 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c15deb32-5994-4bf0-bd30-1a309d58f82c-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.550809 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hpq8\" (UniqueName: \"kubernetes.io/projected/c15deb32-5994-4bf0-bd30-1a309d58f82c-kube-api-access-2hpq8\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.550896 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c15deb32-5994-4bf0-bd30-1a309d58f82c-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " 
pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.552189 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c15deb32-5994-4bf0-bd30-1a309d58f82c-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"c15deb32-5994-4bf0-bd30-1a309d58f82c\") " pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.626844 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/850ff79f-0c56-4cc9-be55-a76979fc1ac8-config-data-custom\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.627169 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-etc-nvme\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.627200 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.627253 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.627278 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.627298 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-sys\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.627326 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-dev\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.627342 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8tff\" (UniqueName: \"kubernetes.io/projected/850ff79f-0c56-4cc9-be55-a76979fc1ac8-kube-api-access-n8tff\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.627361 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/850ff79f-0c56-4cc9-be55-a76979fc1ac8-ceph\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.627377 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 
25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.627396 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.627415 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/850ff79f-0c56-4cc9-be55-a76979fc1ac8-config-data\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.627432 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/850ff79f-0c56-4cc9-be55-a76979fc1ac8-scripts\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.627453 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-lib-modules\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.627480 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-run\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.627494 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/850ff79f-0c56-4cc9-be55-a76979fc1ac8-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.628205 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.628392 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.628420 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-sys\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.628468 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-dev\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.628501 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-etc-nvme\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.628546 5043 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.628853 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-lib-modules\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.629013 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.629077 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-run\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.629102 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/850ff79f-0c56-4cc9-be55-a76979fc1ac8-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.631738 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/850ff79f-0c56-4cc9-be55-a76979fc1ac8-ceph\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " 
pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.631825 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/850ff79f-0c56-4cc9-be55-a76979fc1ac8-scripts\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.632330 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/850ff79f-0c56-4cc9-be55-a76979fc1ac8-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.632480 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/850ff79f-0c56-4cc9-be55-a76979fc1ac8-config-data\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.632880 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/850ff79f-0c56-4cc9-be55-a76979fc1ac8-config-data-custom\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.655854 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8tff\" (UniqueName: \"kubernetes.io/projected/850ff79f-0c56-4cc9-be55-a76979fc1ac8-kube-api-access-n8tff\") pod \"cinder-backup-0\" (UID: \"850ff79f-0c56-4cc9-be55-a76979fc1ac8\") " pod="openstack/cinder-backup-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.694990 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:14 crc kubenswrapper[5043]: I1125 08:10:14.799248 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.088778 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-lbw85"] Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.090651 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-lbw85" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.110420 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-lbw85"] Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.146479 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42pk4\" (UniqueName: \"kubernetes.io/projected/3b9196a7-33c6-4492-957a-ed5aa71eceb8-kube-api-access-42pk4\") pod \"manila-db-create-lbw85\" (UID: \"3b9196a7-33c6-4492-957a-ed5aa71eceb8\") " pod="openstack/manila-db-create-lbw85" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.146663 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b9196a7-33c6-4492-957a-ed5aa71eceb8-operator-scripts\") pod \"manila-db-create-lbw85\" (UID: \"3b9196a7-33c6-4492-957a-ed5aa71eceb8\") " pod="openstack/manila-db-create-lbw85" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.178673 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.180404 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.188775 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.189240 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.189395 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vvnvv" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.189541 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.204665 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.221615 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-3f80-account-create-rmltq"] Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.223192 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-3f80-account-create-rmltq" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.234161 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.237204 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-3f80-account-create-rmltq"] Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.248038 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skf5s\" (UniqueName: \"kubernetes.io/projected/e062dd66-19a1-44b9-834b-ff85c094ab5f-kube-api-access-skf5s\") pod \"manila-3f80-account-create-rmltq\" (UID: \"e062dd66-19a1-44b9-834b-ff85c094ab5f\") " pod="openstack/manila-3f80-account-create-rmltq" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.248320 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b9196a7-33c6-4492-957a-ed5aa71eceb8-operator-scripts\") pod \"manila-db-create-lbw85\" (UID: \"3b9196a7-33c6-4492-957a-ed5aa71eceb8\") " pod="openstack/manila-db-create-lbw85" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.248765 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e062dd66-19a1-44b9-834b-ff85c094ab5f-operator-scripts\") pod \"manila-3f80-account-create-rmltq\" (UID: \"e062dd66-19a1-44b9-834b-ff85c094ab5f\") " pod="openstack/manila-3f80-account-create-rmltq" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.248831 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42pk4\" (UniqueName: \"kubernetes.io/projected/3b9196a7-33c6-4492-957a-ed5aa71eceb8-kube-api-access-42pk4\") pod \"manila-db-create-lbw85\" (UID: \"3b9196a7-33c6-4492-957a-ed5aa71eceb8\") " 
pod="openstack/manila-db-create-lbw85" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.249444 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b9196a7-33c6-4492-957a-ed5aa71eceb8-operator-scripts\") pod \"manila-db-create-lbw85\" (UID: \"3b9196a7-33c6-4492-957a-ed5aa71eceb8\") " pod="openstack/manila-db-create-lbw85" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.274493 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42pk4\" (UniqueName: \"kubernetes.io/projected/3b9196a7-33c6-4492-957a-ed5aa71eceb8-kube-api-access-42pk4\") pod \"manila-db-create-lbw85\" (UID: \"3b9196a7-33c6-4492-957a-ed5aa71eceb8\") " pod="openstack/manila-db-create-lbw85" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.293533 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.296687 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.301803 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.302105 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.306596 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.350297 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e062dd66-19a1-44b9-834b-ff85c094ab5f-operator-scripts\") pod \"manila-3f80-account-create-rmltq\" (UID: \"e062dd66-19a1-44b9-834b-ff85c094ab5f\") " pod="openstack/manila-3f80-account-create-rmltq" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.350368 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.350399 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c-config-data\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.350428 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.350452 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c-logs\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.350493 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c-scripts\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.350521 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c-ceph\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.350564 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skf5s\" (UniqueName: \"kubernetes.io/projected/e062dd66-19a1-44b9-834b-ff85c094ab5f-kube-api-access-skf5s\") pod \"manila-3f80-account-create-rmltq\" (UID: \"e062dd66-19a1-44b9-834b-ff85c094ab5f\") " pod="openstack/manila-3f80-account-create-rmltq" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.350627 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zlr5\" (UniqueName: 
\"kubernetes.io/projected/b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c-kube-api-access-6zlr5\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.350699 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.350737 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.351216 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e062dd66-19a1-44b9-834b-ff85c094ab5f-operator-scripts\") pod \"manila-3f80-account-create-rmltq\" (UID: \"e062dd66-19a1-44b9-834b-ff85c094ab5f\") " pod="openstack/manila-3f80-account-create-rmltq" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.353498 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 25 08:10:15 crc kubenswrapper[5043]: W1125 08:10:15.354851 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc15deb32_5994_4bf0_bd30_1a309d58f82c.slice/crio-afa7e42e2a1b004cd6692739c538580dee51df8195db0fd70c88797493c8fcf7 WatchSource:0}: Error finding container 
afa7e42e2a1b004cd6692739c538580dee51df8195db0fd70c88797493c8fcf7: Status 404 returned error can't find the container with id afa7e42e2a1b004cd6692739c538580dee51df8195db0fd70c88797493c8fcf7 Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.371241 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skf5s\" (UniqueName: \"kubernetes.io/projected/e062dd66-19a1-44b9-834b-ff85c094ab5f-kube-api-access-skf5s\") pod \"manila-3f80-account-create-rmltq\" (UID: \"e062dd66-19a1-44b9-834b-ff85c094ab5f\") " pod="openstack/manila-3f80-account-create-rmltq" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.427045 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-lbw85" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.433920 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Nov 25 08:10:15 crc kubenswrapper[5043]: W1125 08:10:15.448048 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod850ff79f_0c56_4cc9_be55_a76979fc1ac8.slice/crio-858fb3c7ccc30ee1a75f86c25421bbe311c93496c9fb1bdc34778fca4973da59 WatchSource:0}: Error finding container 858fb3c7ccc30ee1a75f86c25421bbe311c93496c9fb1bdc34778fca4973da59: Status 404 returned error can't find the container with id 858fb3c7ccc30ee1a75f86c25421bbe311c93496c9fb1bdc34778fca4973da59 Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.453443 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de0f9823-3037-49ba-8bbe-7384b6988f53-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.453498 5043 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkrx7\" (UniqueName: \"kubernetes.io/projected/de0f9823-3037-49ba-8bbe-7384b6988f53-kube-api-access-qkrx7\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.453534 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0f9823-3037-49ba-8bbe-7384b6988f53-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.453701 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.453766 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c-config-data\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.453801 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.453838 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c-logs\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.453885 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.453929 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de0f9823-3037-49ba-8bbe-7384b6988f53-logs\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.453969 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c-scripts\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.454022 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c-ceph\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.454052 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/de0f9823-3037-49ba-8bbe-7384b6988f53-ceph\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.454111 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de0f9823-3037-49ba-8bbe-7384b6988f53-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.454196 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zlr5\" (UniqueName: \"kubernetes.io/projected/b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c-kube-api-access-6zlr5\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.454238 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de0f9823-3037-49ba-8bbe-7384b6988f53-scripts\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.454283 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de0f9823-3037-49ba-8bbe-7384b6988f53-config-data\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.454279 5043 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.454340 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c-logs\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.454626 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.454727 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.455309 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.457526 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c-ceph\") pod 
\"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.458453 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c-scripts\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.460102 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c-config-data\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.460834 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.462784 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.473799 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zlr5\" (UniqueName: \"kubernetes.io/projected/b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c-kube-api-access-6zlr5\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " 
pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.491785 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c\") " pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.505206 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.553822 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-3f80-account-create-rmltq" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.558342 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de0f9823-3037-49ba-8bbe-7384b6988f53-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.558409 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de0f9823-3037-49ba-8bbe-7384b6988f53-scripts\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.558434 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de0f9823-3037-49ba-8bbe-7384b6988f53-config-data\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.558512 5043 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de0f9823-3037-49ba-8bbe-7384b6988f53-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.558541 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkrx7\" (UniqueName: \"kubernetes.io/projected/de0f9823-3037-49ba-8bbe-7384b6988f53-kube-api-access-qkrx7\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.558568 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0f9823-3037-49ba-8bbe-7384b6988f53-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.558656 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.558681 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de0f9823-3037-49ba-8bbe-7384b6988f53-logs\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.558715 5043 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/de0f9823-3037-49ba-8bbe-7384b6988f53-ceph\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.558935 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de0f9823-3037-49ba-8bbe-7384b6988f53-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.561118 5043 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.564028 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de0f9823-3037-49ba-8bbe-7384b6988f53-logs\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.565496 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/de0f9823-3037-49ba-8bbe-7384b6988f53-ceph\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.566000 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de0f9823-3037-49ba-8bbe-7384b6988f53-internal-tls-certs\") 
pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.569085 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de0f9823-3037-49ba-8bbe-7384b6988f53-config-data\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.575483 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de0f9823-3037-49ba-8bbe-7384b6988f53-scripts\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.590600 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0f9823-3037-49ba-8bbe-7384b6988f53-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.594158 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkrx7\" (UniqueName: \"kubernetes.io/projected/de0f9823-3037-49ba-8bbe-7384b6988f53-kube-api-access-qkrx7\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.620488 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"de0f9823-3037-49ba-8bbe-7384b6988f53\") " 
pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.639343 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"850ff79f-0c56-4cc9-be55-a76979fc1ac8","Type":"ContainerStarted","Data":"858fb3c7ccc30ee1a75f86c25421bbe311c93496c9fb1bdc34778fca4973da59"} Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.643080 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"c15deb32-5994-4bf0-bd30-1a309d58f82c","Type":"ContainerStarted","Data":"afa7e42e2a1b004cd6692739c538580dee51df8195db0fd70c88797493c8fcf7"} Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.919065 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.924397 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-lbw85"] Nov 25 08:10:15 crc kubenswrapper[5043]: I1125 08:10:15.992114 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-3f80-account-create-rmltq"] Nov 25 08:10:16 crc kubenswrapper[5043]: I1125 08:10:16.070564 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 08:10:16 crc kubenswrapper[5043]: I1125 08:10:16.560652 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 08:10:16 crc kubenswrapper[5043]: I1125 08:10:16.700217 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-3f80-account-create-rmltq" event={"ID":"e062dd66-19a1-44b9-834b-ff85c094ab5f","Type":"ContainerDied","Data":"7326d1a74a041b5e7812f8e35b51c1b790172b5d8faa18205d6f6ab485fa11d6"} Nov 25 08:10:16 crc kubenswrapper[5043]: I1125 08:10:16.699948 5043 generic.go:334] "Generic (PLEG): container finished" podID="e062dd66-19a1-44b9-834b-ff85c094ab5f" 
containerID="7326d1a74a041b5e7812f8e35b51c1b790172b5d8faa18205d6f6ab485fa11d6" exitCode=0 Nov 25 08:10:16 crc kubenswrapper[5043]: I1125 08:10:16.702126 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-3f80-account-create-rmltq" event={"ID":"e062dd66-19a1-44b9-834b-ff85c094ab5f","Type":"ContainerStarted","Data":"81e245e233a567cdb8e07e635079a681a42487e57f8cf24ca99b5c7cbc27674a"} Nov 25 08:10:16 crc kubenswrapper[5043]: I1125 08:10:16.703852 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c","Type":"ContainerStarted","Data":"3a32dc5cbe2ae7061a3cb1d599b15dd538adc3be8f2b78e50af909d5b3937622"} Nov 25 08:10:16 crc kubenswrapper[5043]: I1125 08:10:16.708089 5043 generic.go:334] "Generic (PLEG): container finished" podID="3b9196a7-33c6-4492-957a-ed5aa71eceb8" containerID="01f1f894a7c0c68d47415b4c0373e55cb49380fe759c4b79a83d1268394425b8" exitCode=0 Nov 25 08:10:16 crc kubenswrapper[5043]: I1125 08:10:16.708164 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-lbw85" event={"ID":"3b9196a7-33c6-4492-957a-ed5aa71eceb8","Type":"ContainerDied","Data":"01f1f894a7c0c68d47415b4c0373e55cb49380fe759c4b79a83d1268394425b8"} Nov 25 08:10:16 crc kubenswrapper[5043]: I1125 08:10:16.708199 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-lbw85" event={"ID":"3b9196a7-33c6-4492-957a-ed5aa71eceb8","Type":"ContainerStarted","Data":"ed6b4c519ef2635ed08f80666f8468a35335c0fc803a50c651d7fd9f10669094"} Nov 25 08:10:17 crc kubenswrapper[5043]: I1125 08:10:17.722492 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"c15deb32-5994-4bf0-bd30-1a309d58f82c","Type":"ContainerStarted","Data":"8c73471b5aa0ffb3e5d0c7f15ff65e5dfa6e827fb824b44f1879b655c051ca74"} Nov 25 08:10:17 crc kubenswrapper[5043]: I1125 08:10:17.723118 5043 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"c15deb32-5994-4bf0-bd30-1a309d58f82c","Type":"ContainerStarted","Data":"abbc931c3479e1c3016e665b9131eecab50a6695f2b8bef8294b4e09d206753c"} Nov 25 08:10:17 crc kubenswrapper[5043]: I1125 08:10:17.725684 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"850ff79f-0c56-4cc9-be55-a76979fc1ac8","Type":"ContainerStarted","Data":"b222cc19fd9d6106ce31aac66ab1f976d75959037abf67b6707e1edd4d58350e"} Nov 25 08:10:17 crc kubenswrapper[5043]: I1125 08:10:17.725734 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"850ff79f-0c56-4cc9-be55-a76979fc1ac8","Type":"ContainerStarted","Data":"1f6f19e51c02d2e0981a355fb78de18c0dec39e9a8bf6a7e7510c2919f02c506"} Nov 25 08:10:17 crc kubenswrapper[5043]: I1125 08:10:17.730673 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"de0f9823-3037-49ba-8bbe-7384b6988f53","Type":"ContainerStarted","Data":"8b31da8acc06fb4b886f03d4e18c68cb030038e57e999dd7d1dfa956fc8efdde"} Nov 25 08:10:17 crc kubenswrapper[5043]: I1125 08:10:17.730724 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"de0f9823-3037-49ba-8bbe-7384b6988f53","Type":"ContainerStarted","Data":"031f1e15b476b6d077772ec63e7a27c60f193772356fcdda9914cca4ad51b02f"} Nov 25 08:10:17 crc kubenswrapper[5043]: I1125 08:10:17.733687 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c","Type":"ContainerStarted","Data":"f443a0c3b4f1e9fd278043b7b038e880c7d151993c7b2bee1ffaf6db68f19297"} Nov 25 08:10:17 crc kubenswrapper[5043]: I1125 08:10:17.780951 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.406217684 
podStartE2EDuration="3.780931257s" podCreationTimestamp="2025-11-25 08:10:14 +0000 UTC" firstStartedPulling="2025-11-25 08:10:15.361383806 +0000 UTC m=+3279.529579527" lastFinishedPulling="2025-11-25 08:10:16.736097379 +0000 UTC m=+3280.904293100" observedRunningTime="2025-11-25 08:10:17.755800276 +0000 UTC m=+3281.923996017" watchObservedRunningTime="2025-11-25 08:10:17.780931257 +0000 UTC m=+3281.949126978" Nov 25 08:10:17 crc kubenswrapper[5043]: I1125 08:10:17.788405 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.540050923 podStartE2EDuration="3.788378629s" podCreationTimestamp="2025-11-25 08:10:14 +0000 UTC" firstStartedPulling="2025-11-25 08:10:15.455418736 +0000 UTC m=+3279.623614457" lastFinishedPulling="2025-11-25 08:10:16.703746442 +0000 UTC m=+3280.871942163" observedRunningTime="2025-11-25 08:10:17.777869954 +0000 UTC m=+3281.946065695" watchObservedRunningTime="2025-11-25 08:10:17.788378629 +0000 UTC m=+3281.956574350" Nov 25 08:10:18 crc kubenswrapper[5043]: I1125 08:10:18.204815 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-lbw85" Nov 25 08:10:18 crc kubenswrapper[5043]: I1125 08:10:18.243471 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-3f80-account-create-rmltq" Nov 25 08:10:18 crc kubenswrapper[5043]: I1125 08:10:18.335821 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skf5s\" (UniqueName: \"kubernetes.io/projected/e062dd66-19a1-44b9-834b-ff85c094ab5f-kube-api-access-skf5s\") pod \"e062dd66-19a1-44b9-834b-ff85c094ab5f\" (UID: \"e062dd66-19a1-44b9-834b-ff85c094ab5f\") " Nov 25 08:10:18 crc kubenswrapper[5043]: I1125 08:10:18.335947 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b9196a7-33c6-4492-957a-ed5aa71eceb8-operator-scripts\") pod \"3b9196a7-33c6-4492-957a-ed5aa71eceb8\" (UID: \"3b9196a7-33c6-4492-957a-ed5aa71eceb8\") " Nov 25 08:10:18 crc kubenswrapper[5043]: I1125 08:10:18.336027 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e062dd66-19a1-44b9-834b-ff85c094ab5f-operator-scripts\") pod \"e062dd66-19a1-44b9-834b-ff85c094ab5f\" (UID: \"e062dd66-19a1-44b9-834b-ff85c094ab5f\") " Nov 25 08:10:18 crc kubenswrapper[5043]: I1125 08:10:18.336051 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42pk4\" (UniqueName: \"kubernetes.io/projected/3b9196a7-33c6-4492-957a-ed5aa71eceb8-kube-api-access-42pk4\") pod \"3b9196a7-33c6-4492-957a-ed5aa71eceb8\" (UID: \"3b9196a7-33c6-4492-957a-ed5aa71eceb8\") " Nov 25 08:10:18 crc kubenswrapper[5043]: I1125 08:10:18.337899 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b9196a7-33c6-4492-957a-ed5aa71eceb8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b9196a7-33c6-4492-957a-ed5aa71eceb8" (UID: "3b9196a7-33c6-4492-957a-ed5aa71eceb8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:10:18 crc kubenswrapper[5043]: I1125 08:10:18.337943 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e062dd66-19a1-44b9-834b-ff85c094ab5f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e062dd66-19a1-44b9-834b-ff85c094ab5f" (UID: "e062dd66-19a1-44b9-834b-ff85c094ab5f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:10:18 crc kubenswrapper[5043]: I1125 08:10:18.342258 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b9196a7-33c6-4492-957a-ed5aa71eceb8-kube-api-access-42pk4" (OuterVolumeSpecName: "kube-api-access-42pk4") pod "3b9196a7-33c6-4492-957a-ed5aa71eceb8" (UID: "3b9196a7-33c6-4492-957a-ed5aa71eceb8"). InnerVolumeSpecName "kube-api-access-42pk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:10:18 crc kubenswrapper[5043]: I1125 08:10:18.342742 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e062dd66-19a1-44b9-834b-ff85c094ab5f-kube-api-access-skf5s" (OuterVolumeSpecName: "kube-api-access-skf5s") pod "e062dd66-19a1-44b9-834b-ff85c094ab5f" (UID: "e062dd66-19a1-44b9-834b-ff85c094ab5f"). InnerVolumeSpecName "kube-api-access-skf5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:10:18 crc kubenswrapper[5043]: I1125 08:10:18.438635 5043 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e062dd66-19a1-44b9-834b-ff85c094ab5f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:10:18 crc kubenswrapper[5043]: I1125 08:10:18.438678 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42pk4\" (UniqueName: \"kubernetes.io/projected/3b9196a7-33c6-4492-957a-ed5aa71eceb8-kube-api-access-42pk4\") on node \"crc\" DevicePath \"\"" Nov 25 08:10:18 crc kubenswrapper[5043]: I1125 08:10:18.438706 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skf5s\" (UniqueName: \"kubernetes.io/projected/e062dd66-19a1-44b9-834b-ff85c094ab5f-kube-api-access-skf5s\") on node \"crc\" DevicePath \"\"" Nov 25 08:10:18 crc kubenswrapper[5043]: I1125 08:10:18.438717 5043 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b9196a7-33c6-4492-957a-ed5aa71eceb8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:10:18 crc kubenswrapper[5043]: I1125 08:10:18.752084 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-3f80-account-create-rmltq" Nov 25 08:10:18 crc kubenswrapper[5043]: I1125 08:10:18.752071 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-3f80-account-create-rmltq" event={"ID":"e062dd66-19a1-44b9-834b-ff85c094ab5f","Type":"ContainerDied","Data":"81e245e233a567cdb8e07e635079a681a42487e57f8cf24ca99b5c7cbc27674a"} Nov 25 08:10:18 crc kubenswrapper[5043]: I1125 08:10:18.753764 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81e245e233a567cdb8e07e635079a681a42487e57f8cf24ca99b5c7cbc27674a" Nov 25 08:10:18 crc kubenswrapper[5043]: I1125 08:10:18.754241 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"de0f9823-3037-49ba-8bbe-7384b6988f53","Type":"ContainerStarted","Data":"1de3193e25e2fe96ec765e6851957d55887105f0555e154d0032ccd5795965da"} Nov 25 08:10:18 crc kubenswrapper[5043]: I1125 08:10:18.765044 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c","Type":"ContainerStarted","Data":"e8118e43b4a0082ac5065ad6ea4c04265e13f7d7cb6b7f4ddae3793ba4af9a12"} Nov 25 08:10:18 crc kubenswrapper[5043]: I1125 08:10:18.767808 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-lbw85" event={"ID":"3b9196a7-33c6-4492-957a-ed5aa71eceb8","Type":"ContainerDied","Data":"ed6b4c519ef2635ed08f80666f8468a35335c0fc803a50c651d7fd9f10669094"} Nov 25 08:10:18 crc kubenswrapper[5043]: I1125 08:10:18.767976 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed6b4c519ef2635ed08f80666f8468a35335c0fc803a50c651d7fd9f10669094" Nov 25 08:10:18 crc kubenswrapper[5043]: I1125 08:10:18.768167 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-lbw85" Nov 25 08:10:18 crc kubenswrapper[5043]: I1125 08:10:18.799076 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.799055131 podStartE2EDuration="4.799055131s" podCreationTimestamp="2025-11-25 08:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:10:18.781963348 +0000 UTC m=+3282.950159079" watchObservedRunningTime="2025-11-25 08:10:18.799055131 +0000 UTC m=+3282.967250852" Nov 25 08:10:18 crc kubenswrapper[5043]: I1125 08:10:18.819806 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.819780533 podStartE2EDuration="4.819780533s" podCreationTimestamp="2025-11-25 08:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:10:18.817987645 +0000 UTC m=+3282.986183366" watchObservedRunningTime="2025-11-25 08:10:18.819780533 +0000 UTC m=+3282.987976274" Nov 25 08:10:19 crc kubenswrapper[5043]: I1125 08:10:19.696119 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:19 crc kubenswrapper[5043]: I1125 08:10:19.800367 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Nov 25 08:10:20 crc kubenswrapper[5043]: I1125 08:10:20.641812 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-mrbzh"] Nov 25 08:10:20 crc kubenswrapper[5043]: E1125 08:10:20.642317 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e062dd66-19a1-44b9-834b-ff85c094ab5f" containerName="mariadb-account-create" Nov 25 08:10:20 crc kubenswrapper[5043]: I1125 08:10:20.642343 5043 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e062dd66-19a1-44b9-834b-ff85c094ab5f" containerName="mariadb-account-create" Nov 25 08:10:20 crc kubenswrapper[5043]: E1125 08:10:20.642361 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b9196a7-33c6-4492-957a-ed5aa71eceb8" containerName="mariadb-database-create" Nov 25 08:10:20 crc kubenswrapper[5043]: I1125 08:10:20.642369 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b9196a7-33c6-4492-957a-ed5aa71eceb8" containerName="mariadb-database-create" Nov 25 08:10:20 crc kubenswrapper[5043]: I1125 08:10:20.642599 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="e062dd66-19a1-44b9-834b-ff85c094ab5f" containerName="mariadb-account-create" Nov 25 08:10:20 crc kubenswrapper[5043]: I1125 08:10:20.642635 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b9196a7-33c6-4492-957a-ed5aa71eceb8" containerName="mariadb-database-create" Nov 25 08:10:20 crc kubenswrapper[5043]: I1125 08:10:20.643366 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-mrbzh" Nov 25 08:10:20 crc kubenswrapper[5043]: I1125 08:10:20.645046 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 25 08:10:20 crc kubenswrapper[5043]: I1125 08:10:20.647293 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-k7ztd" Nov 25 08:10:20 crc kubenswrapper[5043]: I1125 08:10:20.665469 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-mrbzh"] Nov 25 08:10:20 crc kubenswrapper[5043]: I1125 08:10:20.685941 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8971f2a4-499c-4de4-b3f8-aebadd052ef7-combined-ca-bundle\") pod \"manila-db-sync-mrbzh\" (UID: \"8971f2a4-499c-4de4-b3f8-aebadd052ef7\") " pod="openstack/manila-db-sync-mrbzh" Nov 25 08:10:20 crc kubenswrapper[5043]: I1125 08:10:20.685983 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/8971f2a4-499c-4de4-b3f8-aebadd052ef7-job-config-data\") pod \"manila-db-sync-mrbzh\" (UID: \"8971f2a4-499c-4de4-b3f8-aebadd052ef7\") " pod="openstack/manila-db-sync-mrbzh" Nov 25 08:10:20 crc kubenswrapper[5043]: I1125 08:10:20.686361 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8971f2a4-499c-4de4-b3f8-aebadd052ef7-config-data\") pod \"manila-db-sync-mrbzh\" (UID: \"8971f2a4-499c-4de4-b3f8-aebadd052ef7\") " pod="openstack/manila-db-sync-mrbzh" Nov 25 08:10:20 crc kubenswrapper[5043]: I1125 08:10:20.686495 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd9sc\" (UniqueName: 
\"kubernetes.io/projected/8971f2a4-499c-4de4-b3f8-aebadd052ef7-kube-api-access-nd9sc\") pod \"manila-db-sync-mrbzh\" (UID: \"8971f2a4-499c-4de4-b3f8-aebadd052ef7\") " pod="openstack/manila-db-sync-mrbzh" Nov 25 08:10:20 crc kubenswrapper[5043]: I1125 08:10:20.788052 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8971f2a4-499c-4de4-b3f8-aebadd052ef7-config-data\") pod \"manila-db-sync-mrbzh\" (UID: \"8971f2a4-499c-4de4-b3f8-aebadd052ef7\") " pod="openstack/manila-db-sync-mrbzh" Nov 25 08:10:20 crc kubenswrapper[5043]: I1125 08:10:20.788376 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd9sc\" (UniqueName: \"kubernetes.io/projected/8971f2a4-499c-4de4-b3f8-aebadd052ef7-kube-api-access-nd9sc\") pod \"manila-db-sync-mrbzh\" (UID: \"8971f2a4-499c-4de4-b3f8-aebadd052ef7\") " pod="openstack/manila-db-sync-mrbzh" Nov 25 08:10:20 crc kubenswrapper[5043]: I1125 08:10:20.788420 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8971f2a4-499c-4de4-b3f8-aebadd052ef7-combined-ca-bundle\") pod \"manila-db-sync-mrbzh\" (UID: \"8971f2a4-499c-4de4-b3f8-aebadd052ef7\") " pod="openstack/manila-db-sync-mrbzh" Nov 25 08:10:20 crc kubenswrapper[5043]: I1125 08:10:20.788441 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/8971f2a4-499c-4de4-b3f8-aebadd052ef7-job-config-data\") pod \"manila-db-sync-mrbzh\" (UID: \"8971f2a4-499c-4de4-b3f8-aebadd052ef7\") " pod="openstack/manila-db-sync-mrbzh" Nov 25 08:10:20 crc kubenswrapper[5043]: I1125 08:10:20.794462 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/8971f2a4-499c-4de4-b3f8-aebadd052ef7-job-config-data\") pod \"manila-db-sync-mrbzh\" (UID: 
\"8971f2a4-499c-4de4-b3f8-aebadd052ef7\") " pod="openstack/manila-db-sync-mrbzh" Nov 25 08:10:20 crc kubenswrapper[5043]: I1125 08:10:20.808487 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8971f2a4-499c-4de4-b3f8-aebadd052ef7-combined-ca-bundle\") pod \"manila-db-sync-mrbzh\" (UID: \"8971f2a4-499c-4de4-b3f8-aebadd052ef7\") " pod="openstack/manila-db-sync-mrbzh" Nov 25 08:10:20 crc kubenswrapper[5043]: I1125 08:10:20.808990 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8971f2a4-499c-4de4-b3f8-aebadd052ef7-config-data\") pod \"manila-db-sync-mrbzh\" (UID: \"8971f2a4-499c-4de4-b3f8-aebadd052ef7\") " pod="openstack/manila-db-sync-mrbzh" Nov 25 08:10:20 crc kubenswrapper[5043]: I1125 08:10:20.810271 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd9sc\" (UniqueName: \"kubernetes.io/projected/8971f2a4-499c-4de4-b3f8-aebadd052ef7-kube-api-access-nd9sc\") pod \"manila-db-sync-mrbzh\" (UID: \"8971f2a4-499c-4de4-b3f8-aebadd052ef7\") " pod="openstack/manila-db-sync-mrbzh" Nov 25 08:10:20 crc kubenswrapper[5043]: I1125 08:10:20.973397 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-mrbzh" Nov 25 08:10:21 crc kubenswrapper[5043]: I1125 08:10:21.590230 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-mrbzh"] Nov 25 08:10:21 crc kubenswrapper[5043]: I1125 08:10:21.798001 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-mrbzh" event={"ID":"8971f2a4-499c-4de4-b3f8-aebadd052ef7","Type":"ContainerStarted","Data":"d353d7775f4e9314bed3f8f3aac27fe60699a09e530307fcf0fac5f18650bb76"} Nov 25 08:10:24 crc kubenswrapper[5043]: I1125 08:10:24.904048 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Nov 25 08:10:25 crc kubenswrapper[5043]: I1125 08:10:25.057461 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Nov 25 08:10:25 crc kubenswrapper[5043]: I1125 08:10:25.506267 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 08:10:25 crc kubenswrapper[5043]: I1125 08:10:25.506331 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 08:10:25 crc kubenswrapper[5043]: I1125 08:10:25.557919 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 08:10:25 crc kubenswrapper[5043]: I1125 08:10:25.563504 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 08:10:25 crc kubenswrapper[5043]: I1125 08:10:25.836964 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 08:10:25 crc kubenswrapper[5043]: I1125 08:10:25.837000 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 08:10:25 crc 
kubenswrapper[5043]: I1125 08:10:25.920624 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 08:10:25 crc kubenswrapper[5043]: I1125 08:10:25.922874 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 08:10:25 crc kubenswrapper[5043]: I1125 08:10:25.971374 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 08:10:25 crc kubenswrapper[5043]: I1125 08:10:25.975085 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 08:10:26 crc kubenswrapper[5043]: I1125 08:10:26.846299 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 08:10:26 crc kubenswrapper[5043]: I1125 08:10:26.846946 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 08:10:28 crc kubenswrapper[5043]: I1125 08:10:28.864706 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-mrbzh" event={"ID":"8971f2a4-499c-4de4-b3f8-aebadd052ef7","Type":"ContainerStarted","Data":"4b6eb96901903644313e5644df2c78bdf81680fb3c9a0e1282481d35c1223d64"} Nov 25 08:10:28 crc kubenswrapper[5043]: I1125 08:10:28.883794 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-mrbzh" podStartSLOduration=2.450820882 podStartE2EDuration="8.883775648s" podCreationTimestamp="2025-11-25 08:10:20 +0000 UTC" firstStartedPulling="2025-11-25 08:10:21.605862562 +0000 UTC m=+3285.774058283" lastFinishedPulling="2025-11-25 08:10:28.038817328 +0000 UTC m=+3292.207013049" observedRunningTime="2025-11-25 08:10:28.877879378 +0000 UTC m=+3293.046075119" watchObservedRunningTime="2025-11-25 08:10:28.883775648 +0000 UTC m=+3293.051971369" 
Nov 25 08:10:30 crc kubenswrapper[5043]: I1125 08:10:30.983953 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 08:10:30 crc kubenswrapper[5043]: I1125 08:10:30.984382 5043 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 08:10:30 crc kubenswrapper[5043]: I1125 08:10:30.985846 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 08:10:30 crc kubenswrapper[5043]: I1125 08:10:30.994628 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 08:10:30 crc kubenswrapper[5043]: I1125 08:10:30.994744 5043 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 08:10:30 crc kubenswrapper[5043]: I1125 08:10:30.998139 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 08:11:11 crc kubenswrapper[5043]: I1125 08:11:11.314217 5043 generic.go:334] "Generic (PLEG): container finished" podID="8971f2a4-499c-4de4-b3f8-aebadd052ef7" containerID="4b6eb96901903644313e5644df2c78bdf81680fb3c9a0e1282481d35c1223d64" exitCode=0 Nov 25 08:11:11 crc kubenswrapper[5043]: I1125 08:11:11.314454 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-mrbzh" event={"ID":"8971f2a4-499c-4de4-b3f8-aebadd052ef7","Type":"ContainerDied","Data":"4b6eb96901903644313e5644df2c78bdf81680fb3c9a0e1282481d35c1223d64"} Nov 25 08:11:12 crc kubenswrapper[5043]: I1125 08:11:12.743688 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-mrbzh" Nov 25 08:11:12 crc kubenswrapper[5043]: I1125 08:11:12.870879 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd9sc\" (UniqueName: \"kubernetes.io/projected/8971f2a4-499c-4de4-b3f8-aebadd052ef7-kube-api-access-nd9sc\") pod \"8971f2a4-499c-4de4-b3f8-aebadd052ef7\" (UID: \"8971f2a4-499c-4de4-b3f8-aebadd052ef7\") " Nov 25 08:11:12 crc kubenswrapper[5043]: I1125 08:11:12.871003 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8971f2a4-499c-4de4-b3f8-aebadd052ef7-combined-ca-bundle\") pod \"8971f2a4-499c-4de4-b3f8-aebadd052ef7\" (UID: \"8971f2a4-499c-4de4-b3f8-aebadd052ef7\") " Nov 25 08:11:12 crc kubenswrapper[5043]: I1125 08:11:12.871181 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/8971f2a4-499c-4de4-b3f8-aebadd052ef7-job-config-data\") pod \"8971f2a4-499c-4de4-b3f8-aebadd052ef7\" (UID: \"8971f2a4-499c-4de4-b3f8-aebadd052ef7\") " Nov 25 08:11:12 crc kubenswrapper[5043]: I1125 08:11:12.871270 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8971f2a4-499c-4de4-b3f8-aebadd052ef7-config-data\") pod \"8971f2a4-499c-4de4-b3f8-aebadd052ef7\" (UID: \"8971f2a4-499c-4de4-b3f8-aebadd052ef7\") " Nov 25 08:11:12 crc kubenswrapper[5043]: I1125 08:11:12.876537 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8971f2a4-499c-4de4-b3f8-aebadd052ef7-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "8971f2a4-499c-4de4-b3f8-aebadd052ef7" (UID: "8971f2a4-499c-4de4-b3f8-aebadd052ef7"). InnerVolumeSpecName "job-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:12 crc kubenswrapper[5043]: I1125 08:11:12.877459 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8971f2a4-499c-4de4-b3f8-aebadd052ef7-kube-api-access-nd9sc" (OuterVolumeSpecName: "kube-api-access-nd9sc") pod "8971f2a4-499c-4de4-b3f8-aebadd052ef7" (UID: "8971f2a4-499c-4de4-b3f8-aebadd052ef7"). InnerVolumeSpecName "kube-api-access-nd9sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:12 crc kubenswrapper[5043]: I1125 08:11:12.881313 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8971f2a4-499c-4de4-b3f8-aebadd052ef7-config-data" (OuterVolumeSpecName: "config-data") pod "8971f2a4-499c-4de4-b3f8-aebadd052ef7" (UID: "8971f2a4-499c-4de4-b3f8-aebadd052ef7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:12 crc kubenswrapper[5043]: I1125 08:11:12.897741 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8971f2a4-499c-4de4-b3f8-aebadd052ef7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8971f2a4-499c-4de4-b3f8-aebadd052ef7" (UID: "8971f2a4-499c-4de4-b3f8-aebadd052ef7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:12 crc kubenswrapper[5043]: I1125 08:11:12.973865 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8971f2a4-499c-4de4-b3f8-aebadd052ef7-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:12 crc kubenswrapper[5043]: I1125 08:11:12.973904 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd9sc\" (UniqueName: \"kubernetes.io/projected/8971f2a4-499c-4de4-b3f8-aebadd052ef7-kube-api-access-nd9sc\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:12 crc kubenswrapper[5043]: I1125 08:11:12.973918 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8971f2a4-499c-4de4-b3f8-aebadd052ef7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:12 crc kubenswrapper[5043]: I1125 08:11:12.973933 5043 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/8971f2a4-499c-4de4-b3f8-aebadd052ef7-job-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.340447 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-mrbzh" event={"ID":"8971f2a4-499c-4de4-b3f8-aebadd052ef7","Type":"ContainerDied","Data":"d353d7775f4e9314bed3f8f3aac27fe60699a09e530307fcf0fac5f18650bb76"} Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.340775 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d353d7775f4e9314bed3f8f3aac27fe60699a09e530307fcf0fac5f18650bb76" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.340844 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-mrbzh" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.699800 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Nov 25 08:11:13 crc kubenswrapper[5043]: E1125 08:11:13.701407 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8971f2a4-499c-4de4-b3f8-aebadd052ef7" containerName="manila-db-sync" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.701453 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="8971f2a4-499c-4de4-b3f8-aebadd052ef7" containerName="manila-db-sync" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.704062 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="8971f2a4-499c-4de4-b3f8-aebadd052ef7" containerName="manila-db-sync" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.713100 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.718214 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.719055 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.719326 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.723091 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-k7ztd" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.754269 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.766423 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 08:11:13 crc 
kubenswrapper[5043]: I1125 08:11:13.782036 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.782192 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.784157 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.800354 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78f48d6b7c-v7lfn"] Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.804095 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.808734 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78f48d6b7c-v7lfn"] Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.823284 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c684bb6-c341-4487-8db1-c9c00274909a-scripts\") pod \"manila-share-share1-0\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.823528 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c52938c6-c03b-4f39-b9af-4a638b0b2813-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"c52938c6-c03b-4f39-b9af-4a638b0b2813\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.823659 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/0c684bb6-c341-4487-8db1-c9c00274909a-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.823747 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c52938c6-c03b-4f39-b9af-4a638b0b2813-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"c52938c6-c03b-4f39-b9af-4a638b0b2813\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.823828 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d97d\" (UniqueName: \"kubernetes.io/projected/c52938c6-c03b-4f39-b9af-4a638b0b2813-kube-api-access-8d97d\") pod \"manila-scheduler-0\" (UID: \"c52938c6-c03b-4f39-b9af-4a638b0b2813\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.823917 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr6fj\" (UniqueName: \"kubernetes.io/projected/91febcbe-4fc7-4b44-b7e9-d5258e9216b5-kube-api-access-dr6fj\") pod \"dnsmasq-dns-78f48d6b7c-v7lfn\" (UID: \"91febcbe-4fc7-4b44-b7e9-d5258e9216b5\") " pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.824002 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c52938c6-c03b-4f39-b9af-4a638b0b2813-scripts\") pod \"manila-scheduler-0\" (UID: \"c52938c6-c03b-4f39-b9af-4a638b0b2813\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.824081 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0c684bb6-c341-4487-8db1-c9c00274909a-config-data\") pod \"manila-share-share1-0\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.824149 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91febcbe-4fc7-4b44-b7e9-d5258e9216b5-dns-svc\") pod \"dnsmasq-dns-78f48d6b7c-v7lfn\" (UID: \"91febcbe-4fc7-4b44-b7e9-d5258e9216b5\") " pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.824225 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w5v7\" (UniqueName: \"kubernetes.io/projected/0c684bb6-c341-4487-8db1-c9c00274909a-kube-api-access-9w5v7\") pod \"manila-share-share1-0\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.824306 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c684bb6-c341-4487-8db1-c9c00274909a-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.824389 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/0c684bb6-c341-4487-8db1-c9c00274909a-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.824503 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c52938c6-c03b-4f39-b9af-4a638b0b2813-config-data\") pod \"manila-scheduler-0\" (UID: \"c52938c6-c03b-4f39-b9af-4a638b0b2813\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.824579 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c52938c6-c03b-4f39-b9af-4a638b0b2813-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"c52938c6-c03b-4f39-b9af-4a638b0b2813\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.824673 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91febcbe-4fc7-4b44-b7e9-d5258e9216b5-ovsdbserver-sb\") pod \"dnsmasq-dns-78f48d6b7c-v7lfn\" (UID: \"91febcbe-4fc7-4b44-b7e9-d5258e9216b5\") " pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.824762 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c684bb6-c341-4487-8db1-c9c00274909a-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.824849 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91febcbe-4fc7-4b44-b7e9-d5258e9216b5-ovsdbserver-nb\") pod \"dnsmasq-dns-78f48d6b7c-v7lfn\" (UID: \"91febcbe-4fc7-4b44-b7e9-d5258e9216b5\") " pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.824925 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/91febcbe-4fc7-4b44-b7e9-d5258e9216b5-openstack-edpm-ipam\") pod \"dnsmasq-dns-78f48d6b7c-v7lfn\" (UID: \"91febcbe-4fc7-4b44-b7e9-d5258e9216b5\") " pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.824998 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0c684bb6-c341-4487-8db1-c9c00274909a-ceph\") pod \"manila-share-share1-0\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.825072 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91febcbe-4fc7-4b44-b7e9-d5258e9216b5-config\") pod \"dnsmasq-dns-78f48d6b7c-v7lfn\" (UID: \"91febcbe-4fc7-4b44-b7e9-d5258e9216b5\") " pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.926518 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c52938c6-c03b-4f39-b9af-4a638b0b2813-config-data\") pod \"manila-scheduler-0\" (UID: \"c52938c6-c03b-4f39-b9af-4a638b0b2813\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.926975 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c52938c6-c03b-4f39-b9af-4a638b0b2813-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"c52938c6-c03b-4f39-b9af-4a638b0b2813\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.927084 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91febcbe-4fc7-4b44-b7e9-d5258e9216b5-ovsdbserver-sb\") pod 
\"dnsmasq-dns-78f48d6b7c-v7lfn\" (UID: \"91febcbe-4fc7-4b44-b7e9-d5258e9216b5\") " pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.927198 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c684bb6-c341-4487-8db1-c9c00274909a-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.927324 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91febcbe-4fc7-4b44-b7e9-d5258e9216b5-ovsdbserver-nb\") pod \"dnsmasq-dns-78f48d6b7c-v7lfn\" (UID: \"91febcbe-4fc7-4b44-b7e9-d5258e9216b5\") " pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.927420 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/91febcbe-4fc7-4b44-b7e9-d5258e9216b5-openstack-edpm-ipam\") pod \"dnsmasq-dns-78f48d6b7c-v7lfn\" (UID: \"91febcbe-4fc7-4b44-b7e9-d5258e9216b5\") " pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.927494 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0c684bb6-c341-4487-8db1-c9c00274909a-ceph\") pod \"manila-share-share1-0\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.927566 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91febcbe-4fc7-4b44-b7e9-d5258e9216b5-config\") pod \"dnsmasq-dns-78f48d6b7c-v7lfn\" (UID: \"91febcbe-4fc7-4b44-b7e9-d5258e9216b5\") " 
pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.927670 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c684bb6-c341-4487-8db1-c9c00274909a-scripts\") pod \"manila-share-share1-0\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.927760 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c52938c6-c03b-4f39-b9af-4a638b0b2813-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"c52938c6-c03b-4f39-b9af-4a638b0b2813\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.927865 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c684bb6-c341-4487-8db1-c9c00274909a-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.927959 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c52938c6-c03b-4f39-b9af-4a638b0b2813-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"c52938c6-c03b-4f39-b9af-4a638b0b2813\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.928080 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d97d\" (UniqueName: \"kubernetes.io/projected/c52938c6-c03b-4f39-b9af-4a638b0b2813-kube-api-access-8d97d\") pod \"manila-scheduler-0\" (UID: \"c52938c6-c03b-4f39-b9af-4a638b0b2813\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.928202 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dr6fj\" (UniqueName: \"kubernetes.io/projected/91febcbe-4fc7-4b44-b7e9-d5258e9216b5-kube-api-access-dr6fj\") pod \"dnsmasq-dns-78f48d6b7c-v7lfn\" (UID: \"91febcbe-4fc7-4b44-b7e9-d5258e9216b5\") " pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.928366 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c52938c6-c03b-4f39-b9af-4a638b0b2813-scripts\") pod \"manila-scheduler-0\" (UID: \"c52938c6-c03b-4f39-b9af-4a638b0b2813\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.928487 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c684bb6-c341-4487-8db1-c9c00274909a-config-data\") pod \"manila-share-share1-0\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.928596 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91febcbe-4fc7-4b44-b7e9-d5258e9216b5-dns-svc\") pod \"dnsmasq-dns-78f48d6b7c-v7lfn\" (UID: \"91febcbe-4fc7-4b44-b7e9-d5258e9216b5\") " pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.928732 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w5v7\" (UniqueName: \"kubernetes.io/projected/0c684bb6-c341-4487-8db1-c9c00274909a-kube-api-access-9w5v7\") pod \"manila-share-share1-0\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.928816 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0c684bb6-c341-4487-8db1-c9c00274909a-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.928901 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/0c684bb6-c341-4487-8db1-c9c00274909a-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.929339 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/0c684bb6-c341-4487-8db1-c9c00274909a-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.930081 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c52938c6-c03b-4f39-b9af-4a638b0b2813-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"c52938c6-c03b-4f39-b9af-4a638b0b2813\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.933210 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91febcbe-4fc7-4b44-b7e9-d5258e9216b5-dns-svc\") pod \"dnsmasq-dns-78f48d6b7c-v7lfn\" (UID: \"91febcbe-4fc7-4b44-b7e9-d5258e9216b5\") " pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.935715 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/91febcbe-4fc7-4b44-b7e9-d5258e9216b5-openstack-edpm-ipam\") pod \"dnsmasq-dns-78f48d6b7c-v7lfn\" (UID: 
\"91febcbe-4fc7-4b44-b7e9-d5258e9216b5\") " pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.936232 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91febcbe-4fc7-4b44-b7e9-d5258e9216b5-ovsdbserver-sb\") pod \"dnsmasq-dns-78f48d6b7c-v7lfn\" (UID: \"91febcbe-4fc7-4b44-b7e9-d5258e9216b5\") " pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.936950 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c684bb6-c341-4487-8db1-c9c00274909a-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.937987 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91febcbe-4fc7-4b44-b7e9-d5258e9216b5-ovsdbserver-nb\") pod \"dnsmasq-dns-78f48d6b7c-v7lfn\" (UID: \"91febcbe-4fc7-4b44-b7e9-d5258e9216b5\") " pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.938451 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91febcbe-4fc7-4b44-b7e9-d5258e9216b5-config\") pod \"dnsmasq-dns-78f48d6b7c-v7lfn\" (UID: \"91febcbe-4fc7-4b44-b7e9-d5258e9216b5\") " pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.947297 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c52938c6-c03b-4f39-b9af-4a638b0b2813-scripts\") pod \"manila-scheduler-0\" (UID: \"c52938c6-c03b-4f39-b9af-4a638b0b2813\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.947656 5043 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c52938c6-c03b-4f39-b9af-4a638b0b2813-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"c52938c6-c03b-4f39-b9af-4a638b0b2813\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.948431 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c52938c6-c03b-4f39-b9af-4a638b0b2813-config-data\") pod \"manila-scheduler-0\" (UID: \"c52938c6-c03b-4f39-b9af-4a638b0b2813\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.959450 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c684bb6-c341-4487-8db1-c9c00274909a-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.960095 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c52938c6-c03b-4f39-b9af-4a638b0b2813-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"c52938c6-c03b-4f39-b9af-4a638b0b2813\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.960370 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c684bb6-c341-4487-8db1-c9c00274909a-config-data\") pod \"manila-share-share1-0\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.960393 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0c684bb6-c341-4487-8db1-c9c00274909a-ceph\") pod 
\"manila-share-share1-0\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.961136 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c684bb6-c341-4487-8db1-c9c00274909a-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.961538 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c684bb6-c341-4487-8db1-c9c00274909a-scripts\") pod \"manila-share-share1-0\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.964283 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d97d\" (UniqueName: \"kubernetes.io/projected/c52938c6-c03b-4f39-b9af-4a638b0b2813-kube-api-access-8d97d\") pod \"manila-scheduler-0\" (UID: \"c52938c6-c03b-4f39-b9af-4a638b0b2813\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.971255 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w5v7\" (UniqueName: \"kubernetes.io/projected/0c684bb6-c341-4487-8db1-c9c00274909a-kube-api-access-9w5v7\") pod \"manila-share-share1-0\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:13 crc kubenswrapper[5043]: I1125 08:11:13.973569 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr6fj\" (UniqueName: \"kubernetes.io/projected/91febcbe-4fc7-4b44-b7e9-d5258e9216b5-kube-api-access-dr6fj\") pod \"dnsmasq-dns-78f48d6b7c-v7lfn\" (UID: \"91febcbe-4fc7-4b44-b7e9-d5258e9216b5\") " pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" Nov 25 
08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.058682 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.060909 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.069070 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.069769 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.079368 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.138240 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.140379 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.236042 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bda225cb-1dfc-4e0f-92ad-65cba8533b16-config-data\") pod \"manila-api-0\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " pod="openstack/manila-api-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.236438 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bda225cb-1dfc-4e0f-92ad-65cba8533b16-config-data-custom\") pod \"manila-api-0\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " pod="openstack/manila-api-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.236467 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5vgx\" (UniqueName: \"kubernetes.io/projected/bda225cb-1dfc-4e0f-92ad-65cba8533b16-kube-api-access-m5vgx\") pod \"manila-api-0\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " pod="openstack/manila-api-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.236696 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bda225cb-1dfc-4e0f-92ad-65cba8533b16-scripts\") pod \"manila-api-0\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " pod="openstack/manila-api-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.236761 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bda225cb-1dfc-4e0f-92ad-65cba8533b16-etc-machine-id\") pod \"manila-api-0\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " pod="openstack/manila-api-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 
08:11:14.236806 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bda225cb-1dfc-4e0f-92ad-65cba8533b16-logs\") pod \"manila-api-0\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " pod="openstack/manila-api-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.236879 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bda225cb-1dfc-4e0f-92ad-65cba8533b16-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " pod="openstack/manila-api-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.338342 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bda225cb-1dfc-4e0f-92ad-65cba8533b16-scripts\") pod \"manila-api-0\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " pod="openstack/manila-api-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.338407 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bda225cb-1dfc-4e0f-92ad-65cba8533b16-etc-machine-id\") pod \"manila-api-0\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " pod="openstack/manila-api-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.338442 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bda225cb-1dfc-4e0f-92ad-65cba8533b16-logs\") pod \"manila-api-0\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " pod="openstack/manila-api-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.338481 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bda225cb-1dfc-4e0f-92ad-65cba8533b16-combined-ca-bundle\") pod 
\"manila-api-0\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " pod="openstack/manila-api-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.338505 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bda225cb-1dfc-4e0f-92ad-65cba8533b16-etc-machine-id\") pod \"manila-api-0\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " pod="openstack/manila-api-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.338520 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bda225cb-1dfc-4e0f-92ad-65cba8533b16-config-data\") pod \"manila-api-0\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " pod="openstack/manila-api-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.338671 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bda225cb-1dfc-4e0f-92ad-65cba8533b16-config-data-custom\") pod \"manila-api-0\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " pod="openstack/manila-api-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.338710 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5vgx\" (UniqueName: \"kubernetes.io/projected/bda225cb-1dfc-4e0f-92ad-65cba8533b16-kube-api-access-m5vgx\") pod \"manila-api-0\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " pod="openstack/manila-api-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.339586 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bda225cb-1dfc-4e0f-92ad-65cba8533b16-logs\") pod \"manila-api-0\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " pod="openstack/manila-api-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.345745 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bda225cb-1dfc-4e0f-92ad-65cba8533b16-config-data-custom\") pod \"manila-api-0\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " pod="openstack/manila-api-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.347642 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bda225cb-1dfc-4e0f-92ad-65cba8533b16-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " pod="openstack/manila-api-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.348045 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bda225cb-1dfc-4e0f-92ad-65cba8533b16-scripts\") pod \"manila-api-0\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " pod="openstack/manila-api-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.352412 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bda225cb-1dfc-4e0f-92ad-65cba8533b16-config-data\") pod \"manila-api-0\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " pod="openstack/manila-api-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.362953 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5vgx\" (UniqueName: \"kubernetes.io/projected/bda225cb-1dfc-4e0f-92ad-65cba8533b16-kube-api-access-m5vgx\") pod \"manila-api-0\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " pod="openstack/manila-api-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.390766 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.709475 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.825710 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78f48d6b7c-v7lfn"] Nov 25 08:11:14 crc kubenswrapper[5043]: W1125 08:11:14.831337 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91febcbe_4fc7_4b44_b7e9_d5258e9216b5.slice/crio-30ace6496ea099e7b9a98fb1fe77698f8b0516e77b70a0ab38bda7a693592d7c WatchSource:0}: Error finding container 30ace6496ea099e7b9a98fb1fe77698f8b0516e77b70a0ab38bda7a693592d7c: Status 404 returned error can't find the container with id 30ace6496ea099e7b9a98fb1fe77698f8b0516e77b70a0ab38bda7a693592d7c Nov 25 08:11:14 crc kubenswrapper[5043]: I1125 08:11:14.849007 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 25 08:11:14 crc kubenswrapper[5043]: W1125 08:11:14.853663 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c684bb6_c341_4487_8db1_c9c00274909a.slice/crio-1e4908c53087cb35d096948a49554766c2483a14a0e2b148aad7858f18db6bbb WatchSource:0}: Error finding container 1e4908c53087cb35d096948a49554766c2483a14a0e2b148aad7858f18db6bbb: Status 404 returned error can't find the container with id 1e4908c53087cb35d096948a49554766c2483a14a0e2b148aad7858f18db6bbb Nov 25 08:11:15 crc kubenswrapper[5043]: I1125 08:11:15.108657 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 25 08:11:15 crc kubenswrapper[5043]: I1125 08:11:15.377712 5043 generic.go:334] "Generic (PLEG): container finished" podID="91febcbe-4fc7-4b44-b7e9-d5258e9216b5" 
containerID="589e60c81ce87ae67ff0ef5ba70f0811eb3156db42266db1bf6fd0ee4634c8fb" exitCode=0 Nov 25 08:11:15 crc kubenswrapper[5043]: I1125 08:11:15.377781 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" event={"ID":"91febcbe-4fc7-4b44-b7e9-d5258e9216b5","Type":"ContainerDied","Data":"589e60c81ce87ae67ff0ef5ba70f0811eb3156db42266db1bf6fd0ee4634c8fb"} Nov 25 08:11:15 crc kubenswrapper[5043]: I1125 08:11:15.377810 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" event={"ID":"91febcbe-4fc7-4b44-b7e9-d5258e9216b5","Type":"ContainerStarted","Data":"30ace6496ea099e7b9a98fb1fe77698f8b0516e77b70a0ab38bda7a693592d7c"} Nov 25 08:11:15 crc kubenswrapper[5043]: I1125 08:11:15.380817 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"bda225cb-1dfc-4e0f-92ad-65cba8533b16","Type":"ContainerStarted","Data":"edd3d0fbb3dce8afa37f1099233da76b1ee4936790f341f2dd1f3cc898c59ca4"} Nov 25 08:11:15 crc kubenswrapper[5043]: I1125 08:11:15.387413 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"c52938c6-c03b-4f39-b9af-4a638b0b2813","Type":"ContainerStarted","Data":"90befedf95a305f26b8179f1dfe66d262f455a19e594cc39e818fb9c45561f73"} Nov 25 08:11:15 crc kubenswrapper[5043]: I1125 08:11:15.389925 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0c684bb6-c341-4487-8db1-c9c00274909a","Type":"ContainerStarted","Data":"1e4908c53087cb35d096948a49554766c2483a14a0e2b148aad7858f18db6bbb"} Nov 25 08:11:16 crc kubenswrapper[5043]: I1125 08:11:16.414544 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" event={"ID":"91febcbe-4fc7-4b44-b7e9-d5258e9216b5","Type":"ContainerStarted","Data":"e71c068f730b019ba7d27ab394dfa217236d86433d91214a6528833d11e2b884"} Nov 25 08:11:16 crc kubenswrapper[5043]: I1125 
08:11:16.415088 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" Nov 25 08:11:16 crc kubenswrapper[5043]: I1125 08:11:16.420441 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"bda225cb-1dfc-4e0f-92ad-65cba8533b16","Type":"ContainerStarted","Data":"28358f81e97bc03282c1300fb3fc3f56af1e4985dbf823ececc43caf1293979b"} Nov 25 08:11:16 crc kubenswrapper[5043]: I1125 08:11:16.451697 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" podStartSLOduration=3.4516765830000002 podStartE2EDuration="3.451676583s" podCreationTimestamp="2025-11-25 08:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:11:16.450343177 +0000 UTC m=+3340.618538908" watchObservedRunningTime="2025-11-25 08:11:16.451676583 +0000 UTC m=+3340.619872304" Nov 25 08:11:16 crc kubenswrapper[5043]: I1125 08:11:16.751960 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Nov 25 08:11:17 crc kubenswrapper[5043]: I1125 08:11:17.438075 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"c52938c6-c03b-4f39-b9af-4a638b0b2813","Type":"ContainerStarted","Data":"c7446a77655173882fd54e774c4e5ab17155df31778535e1e9c0d9e64cb18cde"} Nov 25 08:11:17 crc kubenswrapper[5043]: I1125 08:11:17.443546 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"bda225cb-1dfc-4e0f-92ad-65cba8533b16","Type":"ContainerStarted","Data":"9ac7dc86885595333efe2f13949d622bac54d6611d8c279edeeaeb4107a4db72"} Nov 25 08:11:17 crc kubenswrapper[5043]: I1125 08:11:17.443857 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="bda225cb-1dfc-4e0f-92ad-65cba8533b16" 
containerName="manila-api-log" containerID="cri-o://28358f81e97bc03282c1300fb3fc3f56af1e4985dbf823ececc43caf1293979b" gracePeriod=30 Nov 25 08:11:17 crc kubenswrapper[5043]: I1125 08:11:17.443883 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="bda225cb-1dfc-4e0f-92ad-65cba8533b16" containerName="manila-api" containerID="cri-o://9ac7dc86885595333efe2f13949d622bac54d6611d8c279edeeaeb4107a4db72" gracePeriod=30 Nov 25 08:11:17 crc kubenswrapper[5043]: I1125 08:11:17.470397 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.470366293 podStartE2EDuration="4.470366293s" podCreationTimestamp="2025-11-25 08:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:11:17.463783864 +0000 UTC m=+3341.631979585" watchObservedRunningTime="2025-11-25 08:11:17.470366293 +0000 UTC m=+3341.638562014" Nov 25 08:11:18 crc kubenswrapper[5043]: I1125 08:11:18.465627 5043 generic.go:334] "Generic (PLEG): container finished" podID="bda225cb-1dfc-4e0f-92ad-65cba8533b16" containerID="9ac7dc86885595333efe2f13949d622bac54d6611d8c279edeeaeb4107a4db72" exitCode=0 Nov 25 08:11:18 crc kubenswrapper[5043]: I1125 08:11:18.467095 5043 generic.go:334] "Generic (PLEG): container finished" podID="bda225cb-1dfc-4e0f-92ad-65cba8533b16" containerID="28358f81e97bc03282c1300fb3fc3f56af1e4985dbf823ececc43caf1293979b" exitCode=143 Nov 25 08:11:18 crc kubenswrapper[5043]: I1125 08:11:18.467380 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"bda225cb-1dfc-4e0f-92ad-65cba8533b16","Type":"ContainerDied","Data":"9ac7dc86885595333efe2f13949d622bac54d6611d8c279edeeaeb4107a4db72"} Nov 25 08:11:18 crc kubenswrapper[5043]: I1125 08:11:18.467532 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" 
event={"ID":"bda225cb-1dfc-4e0f-92ad-65cba8533b16","Type":"ContainerDied","Data":"28358f81e97bc03282c1300fb3fc3f56af1e4985dbf823ececc43caf1293979b"} Nov 25 08:11:18 crc kubenswrapper[5043]: I1125 08:11:18.488018 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"c52938c6-c03b-4f39-b9af-4a638b0b2813","Type":"ContainerStarted","Data":"829a4dc0ff61677d6ce3631629de18e4599cbf44bf2b44266505883e56349d70"} Nov 25 08:11:18 crc kubenswrapper[5043]: I1125 08:11:18.516310 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.8652277269999997 podStartE2EDuration="5.516025214s" podCreationTimestamp="2025-11-25 08:11:13 +0000 UTC" firstStartedPulling="2025-11-25 08:11:14.728759641 +0000 UTC m=+3338.896955362" lastFinishedPulling="2025-11-25 08:11:16.379557128 +0000 UTC m=+3340.547752849" observedRunningTime="2025-11-25 08:11:18.510445563 +0000 UTC m=+3342.678641304" watchObservedRunningTime="2025-11-25 08:11:18.516025214 +0000 UTC m=+3342.684220935" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.020405 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.155944 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bda225cb-1dfc-4e0f-92ad-65cba8533b16-config-data\") pod \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.156074 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bda225cb-1dfc-4e0f-92ad-65cba8533b16-combined-ca-bundle\") pod \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.156133 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bda225cb-1dfc-4e0f-92ad-65cba8533b16-config-data-custom\") pod \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.156153 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bda225cb-1dfc-4e0f-92ad-65cba8533b16-scripts\") pod \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.156190 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bda225cb-1dfc-4e0f-92ad-65cba8533b16-etc-machine-id\") pod \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.156246 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bda225cb-1dfc-4e0f-92ad-65cba8533b16-logs\") pod \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.156279 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5vgx\" (UniqueName: \"kubernetes.io/projected/bda225cb-1dfc-4e0f-92ad-65cba8533b16-kube-api-access-m5vgx\") pod \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\" (UID: \"bda225cb-1dfc-4e0f-92ad-65cba8533b16\") " Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.159533 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bda225cb-1dfc-4e0f-92ad-65cba8533b16-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bda225cb-1dfc-4e0f-92ad-65cba8533b16" (UID: "bda225cb-1dfc-4e0f-92ad-65cba8533b16"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.159589 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bda225cb-1dfc-4e0f-92ad-65cba8533b16-logs" (OuterVolumeSpecName: "logs") pod "bda225cb-1dfc-4e0f-92ad-65cba8533b16" (UID: "bda225cb-1dfc-4e0f-92ad-65cba8533b16"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.164174 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bda225cb-1dfc-4e0f-92ad-65cba8533b16-kube-api-access-m5vgx" (OuterVolumeSpecName: "kube-api-access-m5vgx") pod "bda225cb-1dfc-4e0f-92ad-65cba8533b16" (UID: "bda225cb-1dfc-4e0f-92ad-65cba8533b16"). InnerVolumeSpecName "kube-api-access-m5vgx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.184263 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bda225cb-1dfc-4e0f-92ad-65cba8533b16-scripts" (OuterVolumeSpecName: "scripts") pod "bda225cb-1dfc-4e0f-92ad-65cba8533b16" (UID: "bda225cb-1dfc-4e0f-92ad-65cba8533b16"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.192705 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bda225cb-1dfc-4e0f-92ad-65cba8533b16-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bda225cb-1dfc-4e0f-92ad-65cba8533b16" (UID: "bda225cb-1dfc-4e0f-92ad-65cba8533b16"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.204034 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bda225cb-1dfc-4e0f-92ad-65cba8533b16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bda225cb-1dfc-4e0f-92ad-65cba8533b16" (UID: "bda225cb-1dfc-4e0f-92ad-65cba8533b16"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.259077 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bda225cb-1dfc-4e0f-92ad-65cba8533b16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.259296 5043 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bda225cb-1dfc-4e0f-92ad-65cba8533b16-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.259402 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bda225cb-1dfc-4e0f-92ad-65cba8533b16-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.259488 5043 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bda225cb-1dfc-4e0f-92ad-65cba8533b16-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.259570 5043 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bda225cb-1dfc-4e0f-92ad-65cba8533b16-logs\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.259660 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5vgx\" (UniqueName: \"kubernetes.io/projected/bda225cb-1dfc-4e0f-92ad-65cba8533b16-kube-api-access-m5vgx\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.273102 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bda225cb-1dfc-4e0f-92ad-65cba8533b16-config-data" (OuterVolumeSpecName: "config-data") pod "bda225cb-1dfc-4e0f-92ad-65cba8533b16" (UID: "bda225cb-1dfc-4e0f-92ad-65cba8533b16"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.361713 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bda225cb-1dfc-4e0f-92ad-65cba8533b16-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.500804 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.500860 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"bda225cb-1dfc-4e0f-92ad-65cba8533b16","Type":"ContainerDied","Data":"edd3d0fbb3dce8afa37f1099233da76b1ee4936790f341f2dd1f3cc898c59ca4"} Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.500906 5043 scope.go:117] "RemoveContainer" containerID="9ac7dc86885595333efe2f13949d622bac54d6611d8c279edeeaeb4107a4db72" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.539152 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.539832 5043 scope.go:117] "RemoveContainer" containerID="28358f81e97bc03282c1300fb3fc3f56af1e4985dbf823ececc43caf1293979b" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.562636 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.570657 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Nov 25 08:11:19 crc kubenswrapper[5043]: E1125 08:11:19.571076 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bda225cb-1dfc-4e0f-92ad-65cba8533b16" containerName="manila-api-log" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.571095 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda225cb-1dfc-4e0f-92ad-65cba8533b16" 
containerName="manila-api-log" Nov 25 08:11:19 crc kubenswrapper[5043]: E1125 08:11:19.571109 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bda225cb-1dfc-4e0f-92ad-65cba8533b16" containerName="manila-api" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.571115 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda225cb-1dfc-4e0f-92ad-65cba8533b16" containerName="manila-api" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.571331 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="bda225cb-1dfc-4e0f-92ad-65cba8533b16" containerName="manila-api-log" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.571579 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="bda225cb-1dfc-4e0f-92ad-65cba8533b16" containerName="manila-api" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.573064 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.578971 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.580281 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.580795 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.580910 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.669585 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40b1194f-e610-4ee5-a970-281ea03cde81-scripts\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 
25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.669672 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b1194f-e610-4ee5-a970-281ea03cde81-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.669748 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40b1194f-e610-4ee5-a970-281ea03cde81-public-tls-certs\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.669795 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48plm\" (UniqueName: \"kubernetes.io/projected/40b1194f-e610-4ee5-a970-281ea03cde81-kube-api-access-48plm\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.670021 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40b1194f-e610-4ee5-a970-281ea03cde81-etc-machine-id\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.670250 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40b1194f-e610-4ee5-a970-281ea03cde81-logs\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.670370 5043 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40b1194f-e610-4ee5-a970-281ea03cde81-config-data-custom\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.670399 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40b1194f-e610-4ee5-a970-281ea03cde81-config-data\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.670514 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40b1194f-e610-4ee5-a970-281ea03cde81-internal-tls-certs\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.772884 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40b1194f-e610-4ee5-a970-281ea03cde81-scripts\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.773254 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b1194f-e610-4ee5-a970-281ea03cde81-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.773290 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40b1194f-e610-4ee5-a970-281ea03cde81-public-tls-certs\") pod 
\"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.773314 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48plm\" (UniqueName: \"kubernetes.io/projected/40b1194f-e610-4ee5-a970-281ea03cde81-kube-api-access-48plm\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.773946 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40b1194f-e610-4ee5-a970-281ea03cde81-etc-machine-id\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.774032 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40b1194f-e610-4ee5-a970-281ea03cde81-etc-machine-id\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.774503 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40b1194f-e610-4ee5-a970-281ea03cde81-logs\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.774572 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40b1194f-e610-4ee5-a970-281ea03cde81-logs\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.774693 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40b1194f-e610-4ee5-a970-281ea03cde81-config-data-custom\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.774719 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40b1194f-e610-4ee5-a970-281ea03cde81-config-data\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.778340 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40b1194f-e610-4ee5-a970-281ea03cde81-internal-tls-certs\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.779368 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40b1194f-e610-4ee5-a970-281ea03cde81-config-data-custom\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.779368 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b1194f-e610-4ee5-a970-281ea03cde81-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.779546 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40b1194f-e610-4ee5-a970-281ea03cde81-public-tls-certs\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 
08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.779950 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40b1194f-e610-4ee5-a970-281ea03cde81-scripts\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.781039 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40b1194f-e610-4ee5-a970-281ea03cde81-config-data\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.782470 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40b1194f-e610-4ee5-a970-281ea03cde81-internal-tls-certs\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.792643 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48plm\" (UniqueName: \"kubernetes.io/projected/40b1194f-e610-4ee5-a970-281ea03cde81-kube-api-access-48plm\") pod \"manila-api-0\" (UID: \"40b1194f-e610-4ee5-a970-281ea03cde81\") " pod="openstack/manila-api-0" Nov 25 08:11:19 crc kubenswrapper[5043]: I1125 08:11:19.896072 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Nov 25 08:11:20 crc kubenswrapper[5043]: I1125 08:11:20.288756 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:11:20 crc kubenswrapper[5043]: I1125 08:11:20.289327 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="08930914-949f-403c-800a-88f0cda8fbd4" containerName="ceilometer-central-agent" containerID="cri-o://c7c53b92abb24b746a4764737cd42c07529490a9ecb8d26daab5c5c2d8a2a9dd" gracePeriod=30 Nov 25 08:11:20 crc kubenswrapper[5043]: I1125 08:11:20.289840 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="08930914-949f-403c-800a-88f0cda8fbd4" containerName="proxy-httpd" containerID="cri-o://254b1c90ca8a6ffad4232f3f172e63847e0aa419554d01103332693d71a3b059" gracePeriod=30 Nov 25 08:11:20 crc kubenswrapper[5043]: I1125 08:11:20.289917 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="08930914-949f-403c-800a-88f0cda8fbd4" containerName="sg-core" containerID="cri-o://74fd965156c8c4f06579aeed4b010088dc6cfc2d206aaab314ce30a99a60c818" gracePeriod=30 Nov 25 08:11:20 crc kubenswrapper[5043]: I1125 08:11:20.289965 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="08930914-949f-403c-800a-88f0cda8fbd4" containerName="ceilometer-notification-agent" containerID="cri-o://01dd3438fa449687359359e2d37728d77ef801d1f9dcdc9aeed28fe86dc69936" gracePeriod=30 Nov 25 08:11:20 crc kubenswrapper[5043]: I1125 08:11:20.520902 5043 generic.go:334] "Generic (PLEG): container finished" podID="08930914-949f-403c-800a-88f0cda8fbd4" containerID="74fd965156c8c4f06579aeed4b010088dc6cfc2d206aaab314ce30a99a60c818" exitCode=2 Nov 25 08:11:20 crc kubenswrapper[5043]: I1125 08:11:20.520991 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"08930914-949f-403c-800a-88f0cda8fbd4","Type":"ContainerDied","Data":"74fd965156c8c4f06579aeed4b010088dc6cfc2d206aaab314ce30a99a60c818"} Nov 25 08:11:20 crc kubenswrapper[5043]: I1125 08:11:20.548680 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 25 08:11:20 crc kubenswrapper[5043]: I1125 08:11:20.975559 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bda225cb-1dfc-4e0f-92ad-65cba8533b16" path="/var/lib/kubelet/pods/bda225cb-1dfc-4e0f-92ad-65cba8533b16/volumes" Nov 25 08:11:21 crc kubenswrapper[5043]: I1125 08:11:21.533301 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"40b1194f-e610-4ee5-a970-281ea03cde81","Type":"ContainerStarted","Data":"2b999471a3ee163ee429bbb6809c915057861cf65b554cd75fed7c4044badd7b"} Nov 25 08:11:21 crc kubenswrapper[5043]: I1125 08:11:21.533683 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"40b1194f-e610-4ee5-a970-281ea03cde81","Type":"ContainerStarted","Data":"26ad6a55b6192c959d2c879c030a2630eaf15f4199c0eb9dd6099e0766c3ea35"} Nov 25 08:11:21 crc kubenswrapper[5043]: I1125 08:11:21.537534 5043 generic.go:334] "Generic (PLEG): container finished" podID="08930914-949f-403c-800a-88f0cda8fbd4" containerID="254b1c90ca8a6ffad4232f3f172e63847e0aa419554d01103332693d71a3b059" exitCode=0 Nov 25 08:11:21 crc kubenswrapper[5043]: I1125 08:11:21.537571 5043 generic.go:334] "Generic (PLEG): container finished" podID="08930914-949f-403c-800a-88f0cda8fbd4" containerID="c7c53b92abb24b746a4764737cd42c07529490a9ecb8d26daab5c5c2d8a2a9dd" exitCode=0 Nov 25 08:11:21 crc kubenswrapper[5043]: I1125 08:11:21.537596 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08930914-949f-403c-800a-88f0cda8fbd4","Type":"ContainerDied","Data":"254b1c90ca8a6ffad4232f3f172e63847e0aa419554d01103332693d71a3b059"} Nov 25 08:11:21 
crc kubenswrapper[5043]: I1125 08:11:21.537710 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08930914-949f-403c-800a-88f0cda8fbd4","Type":"ContainerDied","Data":"c7c53b92abb24b746a4764737cd42c07529490a9ecb8d26daab5c5c2d8a2a9dd"} Nov 25 08:11:22 crc kubenswrapper[5043]: I1125 08:11:22.557989 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"40b1194f-e610-4ee5-a970-281ea03cde81","Type":"ContainerStarted","Data":"179d72732957b5db9645b49fd43cd7a07742aa80570b2e04b2aefa47e0c1cd0d"} Nov 25 08:11:22 crc kubenswrapper[5043]: I1125 08:11:22.559558 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 25 08:11:22 crc kubenswrapper[5043]: I1125 08:11:22.599829 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.599798007 podStartE2EDuration="3.599798007s" podCreationTimestamp="2025-11-25 08:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:11:22.591130342 +0000 UTC m=+3346.759326073" watchObservedRunningTime="2025-11-25 08:11:22.599798007 +0000 UTC m=+3346.767993728" Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.139554 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.142804 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78f48d6b7c-v7lfn" Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.218031 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c58867b6c-r79jq"] Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.221286 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c58867b6c-r79jq" 
podUID="db606dbd-d686-4cd3-bf58-84f1199a3c36" containerName="dnsmasq-dns" containerID="cri-o://e235a5221ce9a5313e55416824306364aacd6c9bd40454557ff02f1a4eb9eec3" gracePeriod=10 Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.589319 5043 generic.go:334] "Generic (PLEG): container finished" podID="08930914-949f-403c-800a-88f0cda8fbd4" containerID="01dd3438fa449687359359e2d37728d77ef801d1f9dcdc9aeed28fe86dc69936" exitCode=0 Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.589388 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08930914-949f-403c-800a-88f0cda8fbd4","Type":"ContainerDied","Data":"01dd3438fa449687359359e2d37728d77ef801d1f9dcdc9aeed28fe86dc69936"} Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.591941 5043 generic.go:334] "Generic (PLEG): container finished" podID="db606dbd-d686-4cd3-bf58-84f1199a3c36" containerID="e235a5221ce9a5313e55416824306364aacd6c9bd40454557ff02f1a4eb9eec3" exitCode=0 Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.592293 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c58867b6c-r79jq" event={"ID":"db606dbd-d686-4cd3-bf58-84f1199a3c36","Type":"ContainerDied","Data":"e235a5221ce9a5313e55416824306364aacd6c9bd40454557ff02f1a4eb9eec3"} Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.832832 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.839526 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c58867b6c-r79jq" Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.903978 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-combined-ca-bundle\") pod \"08930914-949f-403c-800a-88f0cda8fbd4\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.904090 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-config-data\") pod \"08930914-949f-403c-800a-88f0cda8fbd4\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.904122 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08930914-949f-403c-800a-88f0cda8fbd4-run-httpd\") pod \"08930914-949f-403c-800a-88f0cda8fbd4\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.904165 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-456zd\" (UniqueName: \"kubernetes.io/projected/08930914-949f-403c-800a-88f0cda8fbd4-kube-api-access-456zd\") pod \"08930914-949f-403c-800a-88f0cda8fbd4\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.904211 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkfj7\" (UniqueName: \"kubernetes.io/projected/db606dbd-d686-4cd3-bf58-84f1199a3c36-kube-api-access-xkfj7\") pod \"db606dbd-d686-4cd3-bf58-84f1199a3c36\" (UID: \"db606dbd-d686-4cd3-bf58-84f1199a3c36\") " Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.904259 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-config\") pod \"db606dbd-d686-4cd3-bf58-84f1199a3c36\" (UID: \"db606dbd-d686-4cd3-bf58-84f1199a3c36\") " Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.904293 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-ovsdbserver-nb\") pod \"db606dbd-d686-4cd3-bf58-84f1199a3c36\" (UID: \"db606dbd-d686-4cd3-bf58-84f1199a3c36\") " Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.904326 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-ovsdbserver-sb\") pod \"db606dbd-d686-4cd3-bf58-84f1199a3c36\" (UID: \"db606dbd-d686-4cd3-bf58-84f1199a3c36\") " Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.904367 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-ceilometer-tls-certs\") pod \"08930914-949f-403c-800a-88f0cda8fbd4\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.904447 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-scripts\") pod \"08930914-949f-403c-800a-88f0cda8fbd4\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.904493 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-sg-core-conf-yaml\") pod \"08930914-949f-403c-800a-88f0cda8fbd4\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " Nov 25 08:11:24 crc kubenswrapper[5043]: 
I1125 08:11:24.904559 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-dns-svc\") pod \"db606dbd-d686-4cd3-bf58-84f1199a3c36\" (UID: \"db606dbd-d686-4cd3-bf58-84f1199a3c36\") " Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.904590 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08930914-949f-403c-800a-88f0cda8fbd4-log-httpd\") pod \"08930914-949f-403c-800a-88f0cda8fbd4\" (UID: \"08930914-949f-403c-800a-88f0cda8fbd4\") " Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.904753 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-openstack-edpm-ipam\") pod \"db606dbd-d686-4cd3-bf58-84f1199a3c36\" (UID: \"db606dbd-d686-4cd3-bf58-84f1199a3c36\") " Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.907357 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08930914-949f-403c-800a-88f0cda8fbd4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "08930914-949f-403c-800a-88f0cda8fbd4" (UID: "08930914-949f-403c-800a-88f0cda8fbd4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.909569 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-scripts" (OuterVolumeSpecName: "scripts") pod "08930914-949f-403c-800a-88f0cda8fbd4" (UID: "08930914-949f-403c-800a-88f0cda8fbd4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.911873 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08930914-949f-403c-800a-88f0cda8fbd4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "08930914-949f-403c-800a-88f0cda8fbd4" (UID: "08930914-949f-403c-800a-88f0cda8fbd4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.918768 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08930914-949f-403c-800a-88f0cda8fbd4-kube-api-access-456zd" (OuterVolumeSpecName: "kube-api-access-456zd") pod "08930914-949f-403c-800a-88f0cda8fbd4" (UID: "08930914-949f-403c-800a-88f0cda8fbd4"). InnerVolumeSpecName "kube-api-access-456zd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.920467 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db606dbd-d686-4cd3-bf58-84f1199a3c36-kube-api-access-xkfj7" (OuterVolumeSpecName: "kube-api-access-xkfj7") pod "db606dbd-d686-4cd3-bf58-84f1199a3c36" (UID: "db606dbd-d686-4cd3-bf58-84f1199a3c36"). InnerVolumeSpecName "kube-api-access-xkfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.948808 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "08930914-949f-403c-800a-88f0cda8fbd4" (UID: "08930914-949f-403c-800a-88f0cda8fbd4"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.993683 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "db606dbd-d686-4cd3-bf58-84f1199a3c36" (UID: "db606dbd-d686-4cd3-bf58-84f1199a3c36"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.994664 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "08930914-949f-403c-800a-88f0cda8fbd4" (UID: "08930914-949f-403c-800a-88f0cda8fbd4"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:24 crc kubenswrapper[5043]: I1125 08:11:24.994967 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "db606dbd-d686-4cd3-bf58-84f1199a3c36" (UID: "db606dbd-d686-4cd3-bf58-84f1199a3c36"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.007525 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.007556 5043 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.007569 5043 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08930914-949f-403c-800a-88f0cda8fbd4-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.007579 5043 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.007588 5043 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08930914-949f-403c-800a-88f0cda8fbd4-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.007597 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-456zd\" (UniqueName: \"kubernetes.io/projected/08930914-949f-403c-800a-88f0cda8fbd4-kube-api-access-456zd\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.007624 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkfj7\" (UniqueName: \"kubernetes.io/projected/db606dbd-d686-4cd3-bf58-84f1199a3c36-kube-api-access-xkfj7\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.007592 
5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-config" (OuterVolumeSpecName: "config") pod "db606dbd-d686-4cd3-bf58-84f1199a3c36" (UID: "db606dbd-d686-4cd3-bf58-84f1199a3c36"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.007634 5043 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.007708 5043 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.009148 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "db606dbd-d686-4cd3-bf58-84f1199a3c36" (UID: "db606dbd-d686-4cd3-bf58-84f1199a3c36"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.027521 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "db606dbd-d686-4cd3-bf58-84f1199a3c36" (UID: "db606dbd-d686-4cd3-bf58-84f1199a3c36"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.071771 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08930914-949f-403c-800a-88f0cda8fbd4" (UID: "08930914-949f-403c-800a-88f0cda8fbd4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.102538 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-config-data" (OuterVolumeSpecName: "config-data") pod "08930914-949f-403c-800a-88f0cda8fbd4" (UID: "08930914-949f-403c-800a-88f0cda8fbd4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.110279 5043 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.110314 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.110327 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08930914-949f-403c-800a-88f0cda8fbd4-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.110337 5043 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-config\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:25 crc 
kubenswrapper[5043]: I1125 08:11:25.110345 5043 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db606dbd-d686-4cd3-bf58-84f1199a3c36-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.603024 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c58867b6c-r79jq" event={"ID":"db606dbd-d686-4cd3-bf58-84f1199a3c36","Type":"ContainerDied","Data":"82f547208b36028238c07c41527f3af6f2ad9f05db3a180bad4c11a22da6c9e7"} Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.603783 5043 scope.go:117] "RemoveContainer" containerID="e235a5221ce9a5313e55416824306364aacd6c9bd40454557ff02f1a4eb9eec3" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.603338 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c58867b6c-r79jq" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.616755 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0c684bb6-c341-4487-8db1-c9c00274909a","Type":"ContainerStarted","Data":"f36e29b36f92d77709471e7301bfa26fe220283370125f757d75096361ace289"} Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.621434 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08930914-949f-403c-800a-88f0cda8fbd4","Type":"ContainerDied","Data":"609b9ded574cc611428bd67fb02b17c90f7e513c08292239f499433887f6a940"} Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.621802 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.645646 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c58867b6c-r79jq"] Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.656733 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c58867b6c-r79jq"] Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.665567 5043 scope.go:117] "RemoveContainer" containerID="e46c5e0dde85823dec7b6e2a091512a70e9e758a94ad6b0c10ba2f3eca2a13a2" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.672596 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.751909 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.761157 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:11:25 crc kubenswrapper[5043]: E1125 08:11:25.761548 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db606dbd-d686-4cd3-bf58-84f1199a3c36" containerName="init" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.761565 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="db606dbd-d686-4cd3-bf58-84f1199a3c36" containerName="init" Nov 25 08:11:25 crc kubenswrapper[5043]: E1125 08:11:25.761589 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08930914-949f-403c-800a-88f0cda8fbd4" containerName="proxy-httpd" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.761599 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="08930914-949f-403c-800a-88f0cda8fbd4" containerName="proxy-httpd" Nov 25 08:11:25 crc kubenswrapper[5043]: E1125 08:11:25.761627 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db606dbd-d686-4cd3-bf58-84f1199a3c36" containerName="dnsmasq-dns" Nov 25 08:11:25 crc 
kubenswrapper[5043]: I1125 08:11:25.761635 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="db606dbd-d686-4cd3-bf58-84f1199a3c36" containerName="dnsmasq-dns" Nov 25 08:11:25 crc kubenswrapper[5043]: E1125 08:11:25.761648 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08930914-949f-403c-800a-88f0cda8fbd4" containerName="ceilometer-central-agent" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.761655 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="08930914-949f-403c-800a-88f0cda8fbd4" containerName="ceilometer-central-agent" Nov 25 08:11:25 crc kubenswrapper[5043]: E1125 08:11:25.761675 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08930914-949f-403c-800a-88f0cda8fbd4" containerName="ceilometer-notification-agent" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.761682 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="08930914-949f-403c-800a-88f0cda8fbd4" containerName="ceilometer-notification-agent" Nov 25 08:11:25 crc kubenswrapper[5043]: E1125 08:11:25.761706 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08930914-949f-403c-800a-88f0cda8fbd4" containerName="sg-core" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.761735 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="08930914-949f-403c-800a-88f0cda8fbd4" containerName="sg-core" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.762088 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="08930914-949f-403c-800a-88f0cda8fbd4" containerName="sg-core" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.762109 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="08930914-949f-403c-800a-88f0cda8fbd4" containerName="proxy-httpd" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.762125 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="db606dbd-d686-4cd3-bf58-84f1199a3c36" containerName="dnsmasq-dns" Nov 25 08:11:25 
crc kubenswrapper[5043]: I1125 08:11:25.762137 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="08930914-949f-403c-800a-88f0cda8fbd4" containerName="ceilometer-central-agent" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.762146 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="08930914-949f-403c-800a-88f0cda8fbd4" containerName="ceilometer-notification-agent" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.764143 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.767700 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.767944 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.768077 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.770107 5043 scope.go:117] "RemoveContainer" containerID="254b1c90ca8a6ffad4232f3f172e63847e0aa419554d01103332693d71a3b059" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.787648 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.824505 5043 scope.go:117] "RemoveContainer" containerID="74fd965156c8c4f06579aeed4b010088dc6cfc2d206aaab314ce30a99a60c818" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.836316 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " pod="openstack/ceilometer-0" Nov 25 08:11:25 crc 
kubenswrapper[5043]: I1125 08:11:25.836654 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-config-data\") pod \"ceilometer-0\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " pod="openstack/ceilometer-0" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.836785 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " pod="openstack/ceilometer-0" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.836946 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-run-httpd\") pod \"ceilometer-0\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " pod="openstack/ceilometer-0" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.837050 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-log-httpd\") pod \"ceilometer-0\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " pod="openstack/ceilometer-0" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.837154 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n92w\" (UniqueName: \"kubernetes.io/projected/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-kube-api-access-7n92w\") pod \"ceilometer-0\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " pod="openstack/ceilometer-0" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.837253 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " pod="openstack/ceilometer-0" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.837353 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-scripts\") pod \"ceilometer-0\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " pod="openstack/ceilometer-0" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.853033 5043 scope.go:117] "RemoveContainer" containerID="01dd3438fa449687359359e2d37728d77ef801d1f9dcdc9aeed28fe86dc69936" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.889884 5043 scope.go:117] "RemoveContainer" containerID="c7c53b92abb24b746a4764737cd42c07529490a9ecb8d26daab5c5c2d8a2a9dd" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.943907 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " pod="openstack/ceilometer-0" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.943986 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-run-httpd\") pod \"ceilometer-0\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " pod="openstack/ceilometer-0" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.944030 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-log-httpd\") pod \"ceilometer-0\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " 
pod="openstack/ceilometer-0" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.944074 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n92w\" (UniqueName: \"kubernetes.io/projected/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-kube-api-access-7n92w\") pod \"ceilometer-0\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " pod="openstack/ceilometer-0" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.944108 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " pod="openstack/ceilometer-0" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.944132 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-scripts\") pod \"ceilometer-0\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " pod="openstack/ceilometer-0" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.944389 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " pod="openstack/ceilometer-0" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.944527 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-config-data\") pod \"ceilometer-0\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " pod="openstack/ceilometer-0" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.945554 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-run-httpd\") pod \"ceilometer-0\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " pod="openstack/ceilometer-0" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.945560 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-log-httpd\") pod \"ceilometer-0\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " pod="openstack/ceilometer-0" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.949893 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " pod="openstack/ceilometer-0" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.950449 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-config-data\") pod \"ceilometer-0\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " pod="openstack/ceilometer-0" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.950579 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " pod="openstack/ceilometer-0" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.951090 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " pod="openstack/ceilometer-0" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.952508 5043 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-scripts\") pod \"ceilometer-0\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " pod="openstack/ceilometer-0" Nov 25 08:11:25 crc kubenswrapper[5043]: I1125 08:11:25.971523 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n92w\" (UniqueName: \"kubernetes.io/projected/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-kube-api-access-7n92w\") pod \"ceilometer-0\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " pod="openstack/ceilometer-0" Nov 25 08:11:26 crc kubenswrapper[5043]: I1125 08:11:26.127503 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:11:26 crc kubenswrapper[5043]: I1125 08:11:26.604849 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:11:26 crc kubenswrapper[5043]: I1125 08:11:26.654497 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0c684bb6-c341-4487-8db1-c9c00274909a","Type":"ContainerStarted","Data":"b92e0091c393164b9e48e50d8ba9eef49424dccaf4f1e713353461c1409923b8"} Nov 25 08:11:26 crc kubenswrapper[5043]: I1125 08:11:26.660810 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d850aeb7-bd9c-41f0-91ee-b8afa951d00e","Type":"ContainerStarted","Data":"7997e170044bdb5e15473105364b33e6de2c67c59bb9c9658ed04a2a8ffed264"} Nov 25 08:11:26 crc kubenswrapper[5043]: I1125 08:11:26.679293 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.693706638 podStartE2EDuration="13.679272684s" podCreationTimestamp="2025-11-25 08:11:13 +0000 UTC" firstStartedPulling="2025-11-25 08:11:14.855914729 +0000 UTC m=+3339.024110450" lastFinishedPulling="2025-11-25 08:11:24.841480775 +0000 UTC 
m=+3349.009676496" observedRunningTime="2025-11-25 08:11:26.673302062 +0000 UTC m=+3350.841497783" watchObservedRunningTime="2025-11-25 08:11:26.679272684 +0000 UTC m=+3350.847468405" Nov 25 08:11:26 crc kubenswrapper[5043]: I1125 08:11:26.976341 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08930914-949f-403c-800a-88f0cda8fbd4" path="/var/lib/kubelet/pods/08930914-949f-403c-800a-88f0cda8fbd4/volumes" Nov 25 08:11:26 crc kubenswrapper[5043]: I1125 08:11:26.978175 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db606dbd-d686-4cd3-bf58-84f1199a3c36" path="/var/lib/kubelet/pods/db606dbd-d686-4cd3-bf58-84f1199a3c36/volumes" Nov 25 08:11:27 crc kubenswrapper[5043]: I1125 08:11:27.078221 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:11:29 crc kubenswrapper[5043]: I1125 08:11:29.691934 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d850aeb7-bd9c-41f0-91ee-b8afa951d00e","Type":"ContainerStarted","Data":"f7f45f9526e79dfda7a817643fdc365a9cce0fe81e91cf3a271e9f109c83d1e5"} Nov 25 08:11:34 crc kubenswrapper[5043]: I1125 08:11:34.070242 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 08:11:35 crc kubenswrapper[5043]: I1125 08:11:35.162077 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6fc62"] Nov 25 08:11:35 crc kubenswrapper[5043]: I1125 08:11:35.164523 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6fc62" Nov 25 08:11:35 crc kubenswrapper[5043]: I1125 08:11:35.192728 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6fc62"] Nov 25 08:11:35 crc kubenswrapper[5043]: I1125 08:11:35.231315 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhvn9\" (UniqueName: \"kubernetes.io/projected/6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a-kube-api-access-bhvn9\") pod \"redhat-operators-6fc62\" (UID: \"6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a\") " pod="openshift-marketplace/redhat-operators-6fc62" Nov 25 08:11:35 crc kubenswrapper[5043]: I1125 08:11:35.231502 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a-catalog-content\") pod \"redhat-operators-6fc62\" (UID: \"6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a\") " pod="openshift-marketplace/redhat-operators-6fc62" Nov 25 08:11:35 crc kubenswrapper[5043]: I1125 08:11:35.231536 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a-utilities\") pod \"redhat-operators-6fc62\" (UID: \"6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a\") " pod="openshift-marketplace/redhat-operators-6fc62" Nov 25 08:11:35 crc kubenswrapper[5043]: I1125 08:11:35.333686 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a-utilities\") pod \"redhat-operators-6fc62\" (UID: \"6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a\") " pod="openshift-marketplace/redhat-operators-6fc62" Nov 25 08:11:35 crc kubenswrapper[5043]: I1125 08:11:35.333762 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bhvn9\" (UniqueName: \"kubernetes.io/projected/6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a-kube-api-access-bhvn9\") pod \"redhat-operators-6fc62\" (UID: \"6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a\") " pod="openshift-marketplace/redhat-operators-6fc62" Nov 25 08:11:35 crc kubenswrapper[5043]: I1125 08:11:35.334322 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a-utilities\") pod \"redhat-operators-6fc62\" (UID: \"6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a\") " pod="openshift-marketplace/redhat-operators-6fc62" Nov 25 08:11:35 crc kubenswrapper[5043]: I1125 08:11:35.334347 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a-catalog-content\") pod \"redhat-operators-6fc62\" (UID: \"6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a\") " pod="openshift-marketplace/redhat-operators-6fc62" Nov 25 08:11:35 crc kubenswrapper[5043]: I1125 08:11:35.334716 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a-catalog-content\") pod \"redhat-operators-6fc62\" (UID: \"6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a\") " pod="openshift-marketplace/redhat-operators-6fc62" Nov 25 08:11:35 crc kubenswrapper[5043]: I1125 08:11:35.358917 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhvn9\" (UniqueName: \"kubernetes.io/projected/6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a-kube-api-access-bhvn9\") pod \"redhat-operators-6fc62\" (UID: \"6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a\") " pod="openshift-marketplace/redhat-operators-6fc62" Nov 25 08:11:35 crc kubenswrapper[5043]: I1125 08:11:35.486335 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6fc62" Nov 25 08:11:35 crc kubenswrapper[5043]: I1125 08:11:35.756536 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d850aeb7-bd9c-41f0-91ee-b8afa951d00e","Type":"ContainerStarted","Data":"a6b725bf17738523435848bcafad91fd22d3fc4d5308c7119232ebb006754ec6"} Nov 25 08:11:35 crc kubenswrapper[5043]: I1125 08:11:35.841802 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Nov 25 08:11:35 crc kubenswrapper[5043]: I1125 08:11:35.892114 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 08:11:35 crc kubenswrapper[5043]: I1125 08:11:35.970415 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6fc62"] Nov 25 08:11:35 crc kubenswrapper[5043]: W1125 08:11:35.986426 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f7701e2_2bae_49d0_8cd9_cb319e7e0e3a.slice/crio-d8f1f342204e963429651eff2fc98692417c3a3e1854a1c75b376292b73f1064 WatchSource:0}: Error finding container d8f1f342204e963429651eff2fc98692417c3a3e1854a1c75b376292b73f1064: Status 404 returned error can't find the container with id d8f1f342204e963429651eff2fc98692417c3a3e1854a1c75b376292b73f1064 Nov 25 08:11:36 crc kubenswrapper[5043]: I1125 08:11:36.780668 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="c52938c6-c03b-4f39-b9af-4a638b0b2813" containerName="manila-scheduler" containerID="cri-o://c7446a77655173882fd54e774c4e5ab17155df31778535e1e9c0d9e64cb18cde" gracePeriod=30 Nov 25 08:11:36 crc kubenswrapper[5043]: I1125 08:11:36.780855 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fc62" 
event={"ID":"6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a","Type":"ContainerStarted","Data":"4617dea3d893382e60d87f26af135fc92cce510d637c65acd3fb14b0e75afe7b"} Nov 25 08:11:36 crc kubenswrapper[5043]: I1125 08:11:36.783202 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fc62" event={"ID":"6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a","Type":"ContainerStarted","Data":"d8f1f342204e963429651eff2fc98692417c3a3e1854a1c75b376292b73f1064"} Nov 25 08:11:36 crc kubenswrapper[5043]: I1125 08:11:36.780882 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="c52938c6-c03b-4f39-b9af-4a638b0b2813" containerName="probe" containerID="cri-o://829a4dc0ff61677d6ce3631629de18e4599cbf44bf2b44266505883e56349d70" gracePeriod=30 Nov 25 08:11:37 crc kubenswrapper[5043]: I1125 08:11:37.568647 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d2829"] Nov 25 08:11:37 crc kubenswrapper[5043]: I1125 08:11:37.571392 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d2829" Nov 25 08:11:37 crc kubenswrapper[5043]: I1125 08:11:37.585350 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d2829"] Nov 25 08:11:37 crc kubenswrapper[5043]: I1125 08:11:37.683940 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jjz8\" (UniqueName: \"kubernetes.io/projected/26aaa75e-0383-4a71-90c4-2edc0196122f-kube-api-access-6jjz8\") pod \"community-operators-d2829\" (UID: \"26aaa75e-0383-4a71-90c4-2edc0196122f\") " pod="openshift-marketplace/community-operators-d2829" Nov 25 08:11:37 crc kubenswrapper[5043]: I1125 08:11:37.684018 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26aaa75e-0383-4a71-90c4-2edc0196122f-catalog-content\") pod \"community-operators-d2829\" (UID: \"26aaa75e-0383-4a71-90c4-2edc0196122f\") " pod="openshift-marketplace/community-operators-d2829" Nov 25 08:11:37 crc kubenswrapper[5043]: I1125 08:11:37.684070 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26aaa75e-0383-4a71-90c4-2edc0196122f-utilities\") pod \"community-operators-d2829\" (UID: \"26aaa75e-0383-4a71-90c4-2edc0196122f\") " pod="openshift-marketplace/community-operators-d2829" Nov 25 08:11:37 crc kubenswrapper[5043]: I1125 08:11:37.785781 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26aaa75e-0383-4a71-90c4-2edc0196122f-catalog-content\") pod \"community-operators-d2829\" (UID: \"26aaa75e-0383-4a71-90c4-2edc0196122f\") " pod="openshift-marketplace/community-operators-d2829" Nov 25 08:11:37 crc kubenswrapper[5043]: I1125 08:11:37.785856 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26aaa75e-0383-4a71-90c4-2edc0196122f-utilities\") pod \"community-operators-d2829\" (UID: \"26aaa75e-0383-4a71-90c4-2edc0196122f\") " pod="openshift-marketplace/community-operators-d2829" Nov 25 08:11:37 crc kubenswrapper[5043]: I1125 08:11:37.785974 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jjz8\" (UniqueName: \"kubernetes.io/projected/26aaa75e-0383-4a71-90c4-2edc0196122f-kube-api-access-6jjz8\") pod \"community-operators-d2829\" (UID: \"26aaa75e-0383-4a71-90c4-2edc0196122f\") " pod="openshift-marketplace/community-operators-d2829" Nov 25 08:11:37 crc kubenswrapper[5043]: I1125 08:11:37.786321 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26aaa75e-0383-4a71-90c4-2edc0196122f-catalog-content\") pod \"community-operators-d2829\" (UID: \"26aaa75e-0383-4a71-90c4-2edc0196122f\") " pod="openshift-marketplace/community-operators-d2829" Nov 25 08:11:37 crc kubenswrapper[5043]: I1125 08:11:37.786953 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26aaa75e-0383-4a71-90c4-2edc0196122f-utilities\") pod \"community-operators-d2829\" (UID: \"26aaa75e-0383-4a71-90c4-2edc0196122f\") " pod="openshift-marketplace/community-operators-d2829" Nov 25 08:11:37 crc kubenswrapper[5043]: I1125 08:11:37.790780 5043 generic.go:334] "Generic (PLEG): container finished" podID="c52938c6-c03b-4f39-b9af-4a638b0b2813" containerID="829a4dc0ff61677d6ce3631629de18e4599cbf44bf2b44266505883e56349d70" exitCode=0 Nov 25 08:11:37 crc kubenswrapper[5043]: I1125 08:11:37.790853 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" 
event={"ID":"c52938c6-c03b-4f39-b9af-4a638b0b2813","Type":"ContainerDied","Data":"829a4dc0ff61677d6ce3631629de18e4599cbf44bf2b44266505883e56349d70"} Nov 25 08:11:37 crc kubenswrapper[5043]: I1125 08:11:37.792696 5043 generic.go:334] "Generic (PLEG): container finished" podID="6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a" containerID="4617dea3d893382e60d87f26af135fc92cce510d637c65acd3fb14b0e75afe7b" exitCode=0 Nov 25 08:11:37 crc kubenswrapper[5043]: I1125 08:11:37.792727 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fc62" event={"ID":"6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a","Type":"ContainerDied","Data":"4617dea3d893382e60d87f26af135fc92cce510d637c65acd3fb14b0e75afe7b"} Nov 25 08:11:37 crc kubenswrapper[5043]: I1125 08:11:37.809024 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jjz8\" (UniqueName: \"kubernetes.io/projected/26aaa75e-0383-4a71-90c4-2edc0196122f-kube-api-access-6jjz8\") pod \"community-operators-d2829\" (UID: \"26aaa75e-0383-4a71-90c4-2edc0196122f\") " pod="openshift-marketplace/community-operators-d2829" Nov 25 08:11:37 crc kubenswrapper[5043]: I1125 08:11:37.890523 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d2829" Nov 25 08:11:38 crc kubenswrapper[5043]: I1125 08:11:38.416903 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d2829"] Nov 25 08:11:38 crc kubenswrapper[5043]: I1125 08:11:38.815365 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2829" event={"ID":"26aaa75e-0383-4a71-90c4-2edc0196122f","Type":"ContainerStarted","Data":"4dc9ee83269894efbb4b5d2f17080432da419953e774db4f4aa580956264c825"} Nov 25 08:11:39 crc kubenswrapper[5043]: I1125 08:11:39.828731 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d850aeb7-bd9c-41f0-91ee-b8afa951d00e","Type":"ContainerStarted","Data":"d79687d885ccd8d3987b9e459997dc82c886c7ba29cdf2e50a7e135919a58306"} Nov 25 08:11:39 crc kubenswrapper[5043]: I1125 08:11:39.830380 5043 generic.go:334] "Generic (PLEG): container finished" podID="26aaa75e-0383-4a71-90c4-2edc0196122f" containerID="ae7589b36df7daf2ae7ac1f4d094048ef5574995a370c0b5885cadb3108e8333" exitCode=0 Nov 25 08:11:39 crc kubenswrapper[5043]: I1125 08:11:39.830411 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2829" event={"ID":"26aaa75e-0383-4a71-90c4-2edc0196122f","Type":"ContainerDied","Data":"ae7589b36df7daf2ae7ac1f4d094048ef5574995a370c0b5885cadb3108e8333"} Nov 25 08:11:40 crc kubenswrapper[5043]: I1125 08:11:40.842886 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fc62" event={"ID":"6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a","Type":"ContainerStarted","Data":"848247863c9c1144f746b74cace5644ccdb51fe0f8de505b713c20f990de52a7"} Nov 25 08:11:41 crc kubenswrapper[5043]: I1125 08:11:41.832665 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Nov 25 08:11:41 crc kubenswrapper[5043]: I1125 
08:11:41.853633 5043 generic.go:334] "Generic (PLEG): container finished" podID="6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a" containerID="848247863c9c1144f746b74cace5644ccdb51fe0f8de505b713c20f990de52a7" exitCode=0 Nov 25 08:11:41 crc kubenswrapper[5043]: I1125 08:11:41.853731 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fc62" event={"ID":"6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a","Type":"ContainerDied","Data":"848247863c9c1144f746b74cace5644ccdb51fe0f8de505b713c20f990de52a7"} Nov 25 08:11:43 crc kubenswrapper[5043]: I1125 08:11:43.881736 5043 generic.go:334] "Generic (PLEG): container finished" podID="c52938c6-c03b-4f39-b9af-4a638b0b2813" containerID="c7446a77655173882fd54e774c4e5ab17155df31778535e1e9c0d9e64cb18cde" exitCode=0 Nov 25 08:11:43 crc kubenswrapper[5043]: I1125 08:11:43.881844 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"c52938c6-c03b-4f39-b9af-4a638b0b2813","Type":"ContainerDied","Data":"c7446a77655173882fd54e774c4e5ab17155df31778535e1e9c0d9e64cb18cde"} Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.088113 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.246192 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d97d\" (UniqueName: \"kubernetes.io/projected/c52938c6-c03b-4f39-b9af-4a638b0b2813-kube-api-access-8d97d\") pod \"c52938c6-c03b-4f39-b9af-4a638b0b2813\" (UID: \"c52938c6-c03b-4f39-b9af-4a638b0b2813\") " Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.246253 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c52938c6-c03b-4f39-b9af-4a638b0b2813-combined-ca-bundle\") pod \"c52938c6-c03b-4f39-b9af-4a638b0b2813\" (UID: \"c52938c6-c03b-4f39-b9af-4a638b0b2813\") " Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.246285 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c52938c6-c03b-4f39-b9af-4a638b0b2813-config-data\") pod \"c52938c6-c03b-4f39-b9af-4a638b0b2813\" (UID: \"c52938c6-c03b-4f39-b9af-4a638b0b2813\") " Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.246418 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c52938c6-c03b-4f39-b9af-4a638b0b2813-etc-machine-id\") pod \"c52938c6-c03b-4f39-b9af-4a638b0b2813\" (UID: \"c52938c6-c03b-4f39-b9af-4a638b0b2813\") " Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.246448 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c52938c6-c03b-4f39-b9af-4a638b0b2813-scripts\") pod \"c52938c6-c03b-4f39-b9af-4a638b0b2813\" (UID: \"c52938c6-c03b-4f39-b9af-4a638b0b2813\") " Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.246584 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/c52938c6-c03b-4f39-b9af-4a638b0b2813-config-data-custom\") pod \"c52938c6-c03b-4f39-b9af-4a638b0b2813\" (UID: \"c52938c6-c03b-4f39-b9af-4a638b0b2813\") " Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.249141 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c52938c6-c03b-4f39-b9af-4a638b0b2813-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c52938c6-c03b-4f39-b9af-4a638b0b2813" (UID: "c52938c6-c03b-4f39-b9af-4a638b0b2813"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.261338 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c52938c6-c03b-4f39-b9af-4a638b0b2813-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c52938c6-c03b-4f39-b9af-4a638b0b2813" (UID: "c52938c6-c03b-4f39-b9af-4a638b0b2813"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.273782 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c52938c6-c03b-4f39-b9af-4a638b0b2813-scripts" (OuterVolumeSpecName: "scripts") pod "c52938c6-c03b-4f39-b9af-4a638b0b2813" (UID: "c52938c6-c03b-4f39-b9af-4a638b0b2813"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.310919 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c52938c6-c03b-4f39-b9af-4a638b0b2813-kube-api-access-8d97d" (OuterVolumeSpecName: "kube-api-access-8d97d") pod "c52938c6-c03b-4f39-b9af-4a638b0b2813" (UID: "c52938c6-c03b-4f39-b9af-4a638b0b2813"). InnerVolumeSpecName "kube-api-access-8d97d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.353250 5043 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c52938c6-c03b-4f39-b9af-4a638b0b2813-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.353562 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c52938c6-c03b-4f39-b9af-4a638b0b2813-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.353574 5043 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c52938c6-c03b-4f39-b9af-4a638b0b2813-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.353586 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d97d\" (UniqueName: \"kubernetes.io/projected/c52938c6-c03b-4f39-b9af-4a638b0b2813-kube-api-access-8d97d\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.450828 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c52938c6-c03b-4f39-b9af-4a638b0b2813-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c52938c6-c03b-4f39-b9af-4a638b0b2813" (UID: "c52938c6-c03b-4f39-b9af-4a638b0b2813"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.457871 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c52938c6-c03b-4f39-b9af-4a638b0b2813-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.483907 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c52938c6-c03b-4f39-b9af-4a638b0b2813-config-data" (OuterVolumeSpecName: "config-data") pod "c52938c6-c03b-4f39-b9af-4a638b0b2813" (UID: "c52938c6-c03b-4f39-b9af-4a638b0b2813"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.561037 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c52938c6-c03b-4f39-b9af-4a638b0b2813-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.892355 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"c52938c6-c03b-4f39-b9af-4a638b0b2813","Type":"ContainerDied","Data":"90befedf95a305f26b8179f1dfe66d262f455a19e594cc39e818fb9c45561f73"} Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.892414 5043 scope.go:117] "RemoveContainer" containerID="829a4dc0ff61677d6ce3631629de18e4599cbf44bf2b44266505883e56349d70" Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.892565 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.923728 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.934370 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.950176 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 08:11:44 crc kubenswrapper[5043]: E1125 08:11:44.954804 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c52938c6-c03b-4f39-b9af-4a638b0b2813" containerName="probe" Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.954877 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="c52938c6-c03b-4f39-b9af-4a638b0b2813" containerName="probe" Nov 25 08:11:44 crc kubenswrapper[5043]: E1125 08:11:44.955013 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c52938c6-c03b-4f39-b9af-4a638b0b2813" containerName="manila-scheduler" Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.955028 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="c52938c6-c03b-4f39-b9af-4a638b0b2813" containerName="manila-scheduler" Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.956437 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="c52938c6-c03b-4f39-b9af-4a638b0b2813" containerName="manila-scheduler" Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.956494 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="c52938c6-c03b-4f39-b9af-4a638b0b2813" containerName="probe" Nov 25 08:11:44 crc kubenswrapper[5043]: I1125 08:11:44.993746 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:44.997009 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.026005 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c52938c6-c03b-4f39-b9af-4a638b0b2813" path="/var/lib/kubelet/pods/c52938c6-c03b-4f39-b9af-4a638b0b2813/volumes" Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.026804 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.076278 5043 scope.go:117] "RemoveContainer" containerID="c7446a77655173882fd54e774c4e5ab17155df31778535e1e9c0d9e64cb18cde" Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.178058 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adca59a7-f49f-443d-9201-bf7951585f6e-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"adca59a7-f49f-443d-9201-bf7951585f6e\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.178535 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/adca59a7-f49f-443d-9201-bf7951585f6e-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"adca59a7-f49f-443d-9201-bf7951585f6e\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.178706 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frgng\" (UniqueName: \"kubernetes.io/projected/adca59a7-f49f-443d-9201-bf7951585f6e-kube-api-access-frgng\") pod \"manila-scheduler-0\" (UID: \"adca59a7-f49f-443d-9201-bf7951585f6e\") " 
pod="openstack/manila-scheduler-0" Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.178732 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adca59a7-f49f-443d-9201-bf7951585f6e-config-data\") pod \"manila-scheduler-0\" (UID: \"adca59a7-f49f-443d-9201-bf7951585f6e\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.178769 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/adca59a7-f49f-443d-9201-bf7951585f6e-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"adca59a7-f49f-443d-9201-bf7951585f6e\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.178813 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adca59a7-f49f-443d-9201-bf7951585f6e-scripts\") pod \"manila-scheduler-0\" (UID: \"adca59a7-f49f-443d-9201-bf7951585f6e\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.280149 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/adca59a7-f49f-443d-9201-bf7951585f6e-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"adca59a7-f49f-443d-9201-bf7951585f6e\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.280313 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frgng\" (UniqueName: \"kubernetes.io/projected/adca59a7-f49f-443d-9201-bf7951585f6e-kube-api-access-frgng\") pod \"manila-scheduler-0\" (UID: \"adca59a7-f49f-443d-9201-bf7951585f6e\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.280349 5043 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adca59a7-f49f-443d-9201-bf7951585f6e-config-data\") pod \"manila-scheduler-0\" (UID: \"adca59a7-f49f-443d-9201-bf7951585f6e\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.280382 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/adca59a7-f49f-443d-9201-bf7951585f6e-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"adca59a7-f49f-443d-9201-bf7951585f6e\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.280426 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adca59a7-f49f-443d-9201-bf7951585f6e-scripts\") pod \"manila-scheduler-0\" (UID: \"adca59a7-f49f-443d-9201-bf7951585f6e\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.280465 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adca59a7-f49f-443d-9201-bf7951585f6e-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"adca59a7-f49f-443d-9201-bf7951585f6e\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.280814 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/adca59a7-f49f-443d-9201-bf7951585f6e-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"adca59a7-f49f-443d-9201-bf7951585f6e\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.284564 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/adca59a7-f49f-443d-9201-bf7951585f6e-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"adca59a7-f49f-443d-9201-bf7951585f6e\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.284992 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adca59a7-f49f-443d-9201-bf7951585f6e-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"adca59a7-f49f-443d-9201-bf7951585f6e\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.295229 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adca59a7-f49f-443d-9201-bf7951585f6e-config-data\") pod \"manila-scheduler-0\" (UID: \"adca59a7-f49f-443d-9201-bf7951585f6e\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.295519 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adca59a7-f49f-443d-9201-bf7951585f6e-scripts\") pod \"manila-scheduler-0\" (UID: \"adca59a7-f49f-443d-9201-bf7951585f6e\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.301586 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frgng\" (UniqueName: \"kubernetes.io/projected/adca59a7-f49f-443d-9201-bf7951585f6e-kube-api-access-frgng\") pod \"manila-scheduler-0\" (UID: \"adca59a7-f49f-443d-9201-bf7951585f6e\") " pod="openstack/manila-scheduler-0" Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.359812 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.716172 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.796081 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.986734 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="0c684bb6-c341-4487-8db1-c9c00274909a" containerName="manila-share" containerID="cri-o://f36e29b36f92d77709471e7301bfa26fe220283370125f757d75096361ace289" gracePeriod=30 Nov 25 08:11:45 crc kubenswrapper[5043]: I1125 08:11:45.987426 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="0c684bb6-c341-4487-8db1-c9c00274909a" containerName="probe" containerID="cri-o://b92e0091c393164b9e48e50d8ba9eef49424dccaf4f1e713353461c1409923b8" gracePeriod=30 Nov 25 08:11:46 crc kubenswrapper[5043]: I1125 08:11:46.346433 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.012333 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d850aeb7-bd9c-41f0-91ee-b8afa951d00e","Type":"ContainerStarted","Data":"22591924181ea01f3255d2c2695dae541a0f994dd1f6268415c311925fd35423"} Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.013031 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d850aeb7-bd9c-41f0-91ee-b8afa951d00e" containerName="ceilometer-central-agent" containerID="cri-o://f7f45f9526e79dfda7a817643fdc365a9cce0fe81e91cf3a271e9f109c83d1e5" gracePeriod=30 Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.013068 5043 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d850aeb7-bd9c-41f0-91ee-b8afa951d00e" containerName="proxy-httpd" containerID="cri-o://22591924181ea01f3255d2c2695dae541a0f994dd1f6268415c311925fd35423" gracePeriod=30 Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.013071 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d850aeb7-bd9c-41f0-91ee-b8afa951d00e" containerName="sg-core" containerID="cri-o://d79687d885ccd8d3987b9e459997dc82c886c7ba29cdf2e50a7e135919a58306" gracePeriod=30 Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.013137 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d850aeb7-bd9c-41f0-91ee-b8afa951d00e" containerName="ceilometer-notification-agent" containerID="cri-o://a6b725bf17738523435848bcafad91fd22d3fc4d5308c7119232ebb006754ec6" gracePeriod=30 Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.013516 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.025128 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fc62" event={"ID":"6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a","Type":"ContainerStarted","Data":"5b8209863989722abcaaf44b83afe4b1a73908404bf5f3d43ed9e09d3f3e263b"} Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.029456 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"adca59a7-f49f-443d-9201-bf7951585f6e","Type":"ContainerStarted","Data":"8beacce25fce10d39fc3c0793bb1aeab4e52c089a97a8536bb104bfa17bcada7"} Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.029504 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" 
event={"ID":"adca59a7-f49f-443d-9201-bf7951585f6e","Type":"ContainerStarted","Data":"32e61d56792b3ab226d7bdb1aab01d31d41cbc30e373998be2e9cb1d4cd06077"} Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.032047 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2829" event={"ID":"26aaa75e-0383-4a71-90c4-2edc0196122f","Type":"ContainerStarted","Data":"d707c60e947513b4ad4967301590e2dae318ea0aba398dc1419d0bcf4978c278"} Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.037016 5043 generic.go:334] "Generic (PLEG): container finished" podID="0c684bb6-c341-4487-8db1-c9c00274909a" containerID="b92e0091c393164b9e48e50d8ba9eef49424dccaf4f1e713353461c1409923b8" exitCode=0 Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.037052 5043 generic.go:334] "Generic (PLEG): container finished" podID="0c684bb6-c341-4487-8db1-c9c00274909a" containerID="f36e29b36f92d77709471e7301bfa26fe220283370125f757d75096361ace289" exitCode=1 Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.037077 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0c684bb6-c341-4487-8db1-c9c00274909a","Type":"ContainerDied","Data":"b92e0091c393164b9e48e50d8ba9eef49424dccaf4f1e713353461c1409923b8"} Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.037107 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0c684bb6-c341-4487-8db1-c9c00274909a","Type":"ContainerDied","Data":"f36e29b36f92d77709471e7301bfa26fe220283370125f757d75096361ace289"} Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.059781 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.9903202909999997 podStartE2EDuration="22.059745708s" podCreationTimestamp="2025-11-25 08:11:25 +0000 UTC" firstStartedPulling="2025-11-25 08:11:26.610405007 +0000 UTC m=+3350.778600728" 
lastFinishedPulling="2025-11-25 08:11:45.679830414 +0000 UTC m=+3369.848026145" observedRunningTime="2025-11-25 08:11:47.047033624 +0000 UTC m=+3371.215229365" watchObservedRunningTime="2025-11-25 08:11:47.059745708 +0000 UTC m=+3371.227941429" Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.105058 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6fc62" podStartSLOduration=4.08690341 podStartE2EDuration="12.105025405s" podCreationTimestamp="2025-11-25 08:11:35 +0000 UTC" firstStartedPulling="2025-11-25 08:11:37.794963463 +0000 UTC m=+3361.963159184" lastFinishedPulling="2025-11-25 08:11:45.813085468 +0000 UTC m=+3369.981281179" observedRunningTime="2025-11-25 08:11:47.076501292 +0000 UTC m=+3371.244697013" watchObservedRunningTime="2025-11-25 08:11:47.105025405 +0000 UTC m=+3371.273221126" Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.276083 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.276168 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.528135 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.580494 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w5v7\" (UniqueName: \"kubernetes.io/projected/0c684bb6-c341-4487-8db1-c9c00274909a-kube-api-access-9w5v7\") pod \"0c684bb6-c341-4487-8db1-c9c00274909a\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.580704 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c684bb6-c341-4487-8db1-c9c00274909a-config-data\") pod \"0c684bb6-c341-4487-8db1-c9c00274909a\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.580884 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0c684bb6-c341-4487-8db1-c9c00274909a-ceph\") pod \"0c684bb6-c341-4487-8db1-c9c00274909a\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.581010 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c684bb6-c341-4487-8db1-c9c00274909a-scripts\") pod \"0c684bb6-c341-4487-8db1-c9c00274909a\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.581053 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c684bb6-c341-4487-8db1-c9c00274909a-combined-ca-bundle\") pod \"0c684bb6-c341-4487-8db1-c9c00274909a\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.581090 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/0c684bb6-c341-4487-8db1-c9c00274909a-config-data-custom\") pod \"0c684bb6-c341-4487-8db1-c9c00274909a\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.581149 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c684bb6-c341-4487-8db1-c9c00274909a-etc-machine-id\") pod \"0c684bb6-c341-4487-8db1-c9c00274909a\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.581229 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/0c684bb6-c341-4487-8db1-c9c00274909a-var-lib-manila\") pod \"0c684bb6-c341-4487-8db1-c9c00274909a\" (UID: \"0c684bb6-c341-4487-8db1-c9c00274909a\") " Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.582065 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c684bb6-c341-4487-8db1-c9c00274909a-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "0c684bb6-c341-4487-8db1-c9c00274909a" (UID: "0c684bb6-c341-4487-8db1-c9c00274909a"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.582907 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c684bb6-c341-4487-8db1-c9c00274909a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0c684bb6-c341-4487-8db1-c9c00274909a" (UID: "0c684bb6-c341-4487-8db1-c9c00274909a"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.587900 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c684bb6-c341-4487-8db1-c9c00274909a-kube-api-access-9w5v7" (OuterVolumeSpecName: "kube-api-access-9w5v7") pod "0c684bb6-c341-4487-8db1-c9c00274909a" (UID: "0c684bb6-c341-4487-8db1-c9c00274909a"). InnerVolumeSpecName "kube-api-access-9w5v7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.587984 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c684bb6-c341-4487-8db1-c9c00274909a-ceph" (OuterVolumeSpecName: "ceph") pod "0c684bb6-c341-4487-8db1-c9c00274909a" (UID: "0c684bb6-c341-4487-8db1-c9c00274909a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.588713 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c684bb6-c341-4487-8db1-c9c00274909a-scripts" (OuterVolumeSpecName: "scripts") pod "0c684bb6-c341-4487-8db1-c9c00274909a" (UID: "0c684bb6-c341-4487-8db1-c9c00274909a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.590841 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c684bb6-c341-4487-8db1-c9c00274909a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0c684bb6-c341-4487-8db1-c9c00274909a" (UID: "0c684bb6-c341-4487-8db1-c9c00274909a"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.653896 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c684bb6-c341-4487-8db1-c9c00274909a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c684bb6-c341-4487-8db1-c9c00274909a" (UID: "0c684bb6-c341-4487-8db1-c9c00274909a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.683584 5043 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0c684bb6-c341-4487-8db1-c9c00274909a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.683641 5043 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/0c684bb6-c341-4487-8db1-c9c00274909a-var-lib-manila\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.683651 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w5v7\" (UniqueName: \"kubernetes.io/projected/0c684bb6-c341-4487-8db1-c9c00274909a-kube-api-access-9w5v7\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.683663 5043 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0c684bb6-c341-4487-8db1-c9c00274909a-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.683673 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c684bb6-c341-4487-8db1-c9c00274909a-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.683682 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0c684bb6-c341-4487-8db1-c9c00274909a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.683691 5043 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c684bb6-c341-4487-8db1-c9c00274909a-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.691256 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c684bb6-c341-4487-8db1-c9c00274909a-config-data" (OuterVolumeSpecName: "config-data") pod "0c684bb6-c341-4487-8db1-c9c00274909a" (UID: "0c684bb6-c341-4487-8db1-c9c00274909a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:47 crc kubenswrapper[5043]: I1125 08:11:47.785235 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c684bb6-c341-4487-8db1-c9c00274909a-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.054231 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"0c684bb6-c341-4487-8db1-c9c00274909a","Type":"ContainerDied","Data":"1e4908c53087cb35d096948a49554766c2483a14a0e2b148aad7858f18db6bbb"} Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.054629 5043 scope.go:117] "RemoveContainer" containerID="b92e0091c393164b9e48e50d8ba9eef49424dccaf4f1e713353461c1409923b8" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.054245 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.061916 5043 generic.go:334] "Generic (PLEG): container finished" podID="d850aeb7-bd9c-41f0-91ee-b8afa951d00e" containerID="22591924181ea01f3255d2c2695dae541a0f994dd1f6268415c311925fd35423" exitCode=0 Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.062018 5043 generic.go:334] "Generic (PLEG): container finished" podID="d850aeb7-bd9c-41f0-91ee-b8afa951d00e" containerID="d79687d885ccd8d3987b9e459997dc82c886c7ba29cdf2e50a7e135919a58306" exitCode=2 Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.062075 5043 generic.go:334] "Generic (PLEG): container finished" podID="d850aeb7-bd9c-41f0-91ee-b8afa951d00e" containerID="a6b725bf17738523435848bcafad91fd22d3fc4d5308c7119232ebb006754ec6" exitCode=0 Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.062132 5043 generic.go:334] "Generic (PLEG): container finished" podID="d850aeb7-bd9c-41f0-91ee-b8afa951d00e" containerID="f7f45f9526e79dfda7a817643fdc365a9cce0fe81e91cf3a271e9f109c83d1e5" exitCode=0 Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.062220 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d850aeb7-bd9c-41f0-91ee-b8afa951d00e","Type":"ContainerDied","Data":"22591924181ea01f3255d2c2695dae541a0f994dd1f6268415c311925fd35423"} Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.062302 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d850aeb7-bd9c-41f0-91ee-b8afa951d00e","Type":"ContainerDied","Data":"d79687d885ccd8d3987b9e459997dc82c886c7ba29cdf2e50a7e135919a58306"} Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.062373 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d850aeb7-bd9c-41f0-91ee-b8afa951d00e","Type":"ContainerDied","Data":"a6b725bf17738523435848bcafad91fd22d3fc4d5308c7119232ebb006754ec6"} Nov 25 08:11:48 crc 
kubenswrapper[5043]: I1125 08:11:48.062440 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d850aeb7-bd9c-41f0-91ee-b8afa951d00e","Type":"ContainerDied","Data":"f7f45f9526e79dfda7a817643fdc365a9cce0fe81e91cf3a271e9f109c83d1e5"} Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.064268 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"adca59a7-f49f-443d-9201-bf7951585f6e","Type":"ContainerStarted","Data":"02d21c0cd11315c4da1c37ec21d986e5842f4e255d14d57965fc4a204e804335"} Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.089123 5043 scope.go:117] "RemoveContainer" containerID="f36e29b36f92d77709471e7301bfa26fe220283370125f757d75096361ace289" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.092515 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=4.092495579 podStartE2EDuration="4.092495579s" podCreationTimestamp="2025-11-25 08:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:11:48.087417531 +0000 UTC m=+3372.255613252" watchObservedRunningTime="2025-11-25 08:11:48.092495579 +0000 UTC m=+3372.260691300" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.112848 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.136890 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.175821 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Nov 25 08:11:48 crc kubenswrapper[5043]: E1125 08:11:48.176504 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c684bb6-c341-4487-8db1-c9c00274909a" containerName="manila-share" Nov 25 
08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.176520 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c684bb6-c341-4487-8db1-c9c00274909a" containerName="manila-share" Nov 25 08:11:48 crc kubenswrapper[5043]: E1125 08:11:48.176554 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c684bb6-c341-4487-8db1-c9c00274909a" containerName="probe" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.176561 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c684bb6-c341-4487-8db1-c9c00274909a" containerName="probe" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.177011 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c684bb6-c341-4487-8db1-c9c00274909a" containerName="manila-share" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.177036 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c684bb6-c341-4487-8db1-c9c00274909a" containerName="probe" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.178774 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.178871 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.193809 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd0c6a41-555c-4d25-8550-8cac7501125f-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"fd0c6a41-555c-4d25-8550-8cac7501125f\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.193893 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd0c6a41-555c-4d25-8550-8cac7501125f-config-data\") pod \"manila-share-share1-0\" (UID: \"fd0c6a41-555c-4d25-8550-8cac7501125f\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.193944 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fd0c6a41-555c-4d25-8550-8cac7501125f-ceph\") pod \"manila-share-share1-0\" (UID: \"fd0c6a41-555c-4d25-8550-8cac7501125f\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.193990 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/fd0c6a41-555c-4d25-8550-8cac7501125f-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"fd0c6a41-555c-4d25-8550-8cac7501125f\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.194035 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0c6a41-555c-4d25-8550-8cac7501125f-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"fd0c6a41-555c-4d25-8550-8cac7501125f\") " 
pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.194084 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd0c6a41-555c-4d25-8550-8cac7501125f-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"fd0c6a41-555c-4d25-8550-8cac7501125f\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.194173 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd0c6a41-555c-4d25-8550-8cac7501125f-scripts\") pod \"manila-share-share1-0\" (UID: \"fd0c6a41-555c-4d25-8550-8cac7501125f\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.194215 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9m6n\" (UniqueName: \"kubernetes.io/projected/fd0c6a41-555c-4d25-8550-8cac7501125f-kube-api-access-b9m6n\") pod \"manila-share-share1-0\" (UID: \"fd0c6a41-555c-4d25-8550-8cac7501125f\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.197170 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.296361 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0c6a41-555c-4d25-8550-8cac7501125f-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"fd0c6a41-555c-4d25-8550-8cac7501125f\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.296440 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/fd0c6a41-555c-4d25-8550-8cac7501125f-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"fd0c6a41-555c-4d25-8550-8cac7501125f\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.296534 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd0c6a41-555c-4d25-8550-8cac7501125f-scripts\") pod \"manila-share-share1-0\" (UID: \"fd0c6a41-555c-4d25-8550-8cac7501125f\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.296569 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9m6n\" (UniqueName: \"kubernetes.io/projected/fd0c6a41-555c-4d25-8550-8cac7501125f-kube-api-access-b9m6n\") pod \"manila-share-share1-0\" (UID: \"fd0c6a41-555c-4d25-8550-8cac7501125f\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.296666 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd0c6a41-555c-4d25-8550-8cac7501125f-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"fd0c6a41-555c-4d25-8550-8cac7501125f\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.296708 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd0c6a41-555c-4d25-8550-8cac7501125f-config-data\") pod \"manila-share-share1-0\" (UID: \"fd0c6a41-555c-4d25-8550-8cac7501125f\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.296748 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fd0c6a41-555c-4d25-8550-8cac7501125f-ceph\") pod \"manila-share-share1-0\" (UID: 
\"fd0c6a41-555c-4d25-8550-8cac7501125f\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.296807 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/fd0c6a41-555c-4d25-8550-8cac7501125f-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"fd0c6a41-555c-4d25-8550-8cac7501125f\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.296936 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/fd0c6a41-555c-4d25-8550-8cac7501125f-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"fd0c6a41-555c-4d25-8550-8cac7501125f\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.296993 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd0c6a41-555c-4d25-8550-8cac7501125f-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"fd0c6a41-555c-4d25-8550-8cac7501125f\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.302524 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0c6a41-555c-4d25-8550-8cac7501125f-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"fd0c6a41-555c-4d25-8550-8cac7501125f\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.302588 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd0c6a41-555c-4d25-8550-8cac7501125f-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"fd0c6a41-555c-4d25-8550-8cac7501125f\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.303812 
5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fd0c6a41-555c-4d25-8550-8cac7501125f-ceph\") pod \"manila-share-share1-0\" (UID: \"fd0c6a41-555c-4d25-8550-8cac7501125f\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.304758 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd0c6a41-555c-4d25-8550-8cac7501125f-scripts\") pod \"manila-share-share1-0\" (UID: \"fd0c6a41-555c-4d25-8550-8cac7501125f\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.307359 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd0c6a41-555c-4d25-8550-8cac7501125f-config-data\") pod \"manila-share-share1-0\" (UID: \"fd0c6a41-555c-4d25-8550-8cac7501125f\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.331522 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9m6n\" (UniqueName: \"kubernetes.io/projected/fd0c6a41-555c-4d25-8550-8cac7501125f-kube-api-access-b9m6n\") pod \"manila-share-share1-0\" (UID: \"fd0c6a41-555c-4d25-8550-8cac7501125f\") " pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.528010 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.564018 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.705537 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-config-data\") pod \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.705759 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-scripts\") pod \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.705854 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-run-httpd\") pod \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.705887 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-sg-core-conf-yaml\") pod \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.705934 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-combined-ca-bundle\") pod \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.705953 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-ceilometer-tls-certs\") pod \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.706142 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n92w\" (UniqueName: \"kubernetes.io/projected/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-kube-api-access-7n92w\") pod \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.706180 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-log-httpd\") pod \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\" (UID: \"d850aeb7-bd9c-41f0-91ee-b8afa951d00e\") " Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.706576 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d850aeb7-bd9c-41f0-91ee-b8afa951d00e" (UID: "d850aeb7-bd9c-41f0-91ee-b8afa951d00e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.706816 5043 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.707165 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d850aeb7-bd9c-41f0-91ee-b8afa951d00e" (UID: "d850aeb7-bd9c-41f0-91ee-b8afa951d00e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.713741 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-scripts" (OuterVolumeSpecName: "scripts") pod "d850aeb7-bd9c-41f0-91ee-b8afa951d00e" (UID: "d850aeb7-bd9c-41f0-91ee-b8afa951d00e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.717302 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-kube-api-access-7n92w" (OuterVolumeSpecName: "kube-api-access-7n92w") pod "d850aeb7-bd9c-41f0-91ee-b8afa951d00e" (UID: "d850aeb7-bd9c-41f0-91ee-b8afa951d00e"). InnerVolumeSpecName "kube-api-access-7n92w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.792847 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d850aeb7-bd9c-41f0-91ee-b8afa951d00e" (UID: "d850aeb7-bd9c-41f0-91ee-b8afa951d00e"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.806867 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d850aeb7-bd9c-41f0-91ee-b8afa951d00e" (UID: "d850aeb7-bd9c-41f0-91ee-b8afa951d00e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.809079 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n92w\" (UniqueName: \"kubernetes.io/projected/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-kube-api-access-7n92w\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.809101 5043 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.809109 5043 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.809117 5043 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.809126 5043 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.840738 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d850aeb7-bd9c-41f0-91ee-b8afa951d00e" (UID: "d850aeb7-bd9c-41f0-91ee-b8afa951d00e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.868728 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-config-data" (OuterVolumeSpecName: "config-data") pod "d850aeb7-bd9c-41f0-91ee-b8afa951d00e" (UID: "d850aeb7-bd9c-41f0-91ee-b8afa951d00e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.911178 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.911210 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d850aeb7-bd9c-41f0-91ee-b8afa951d00e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 08:11:48 crc kubenswrapper[5043]: I1125 08:11:48.975239 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c684bb6-c341-4487-8db1-c9c00274909a" path="/var/lib/kubelet/pods/0c684bb6-c341-4487-8db1-c9c00274909a/volumes" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.089412 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d850aeb7-bd9c-41f0-91ee-b8afa951d00e","Type":"ContainerDied","Data":"7997e170044bdb5e15473105364b33e6de2c67c59bb9c9658ed04a2a8ffed264"} Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.089863 5043 scope.go:117] "RemoveContainer" containerID="22591924181ea01f3255d2c2695dae541a0f994dd1f6268415c311925fd35423" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.089482 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.131204 5043 scope.go:117] "RemoveContainer" containerID="d79687d885ccd8d3987b9e459997dc82c886c7ba29cdf2e50a7e135919a58306" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.134672 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.146042 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.159939 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:11:49 crc kubenswrapper[5043]: E1125 08:11:49.160488 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d850aeb7-bd9c-41f0-91ee-b8afa951d00e" containerName="proxy-httpd" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.160508 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="d850aeb7-bd9c-41f0-91ee-b8afa951d00e" containerName="proxy-httpd" Nov 25 08:11:49 crc kubenswrapper[5043]: E1125 08:11:49.160535 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d850aeb7-bd9c-41f0-91ee-b8afa951d00e" containerName="ceilometer-central-agent" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.160543 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="d850aeb7-bd9c-41f0-91ee-b8afa951d00e" containerName="ceilometer-central-agent" Nov 25 08:11:49 crc kubenswrapper[5043]: E1125 08:11:49.160579 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d850aeb7-bd9c-41f0-91ee-b8afa951d00e" containerName="sg-core" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.160588 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="d850aeb7-bd9c-41f0-91ee-b8afa951d00e" containerName="sg-core" Nov 25 08:11:49 crc kubenswrapper[5043]: E1125 08:11:49.160629 5043 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d850aeb7-bd9c-41f0-91ee-b8afa951d00e" containerName="ceilometer-notification-agent" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.160639 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="d850aeb7-bd9c-41f0-91ee-b8afa951d00e" containerName="ceilometer-notification-agent" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.160845 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="d850aeb7-bd9c-41f0-91ee-b8afa951d00e" containerName="ceilometer-central-agent" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.160860 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="d850aeb7-bd9c-41f0-91ee-b8afa951d00e" containerName="ceilometer-notification-agent" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.160883 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="d850aeb7-bd9c-41f0-91ee-b8afa951d00e" containerName="sg-core" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.160899 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="d850aeb7-bd9c-41f0-91ee-b8afa951d00e" containerName="proxy-httpd" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.162969 5043 scope.go:117] "RemoveContainer" containerID="a6b725bf17738523435848bcafad91fd22d3fc4d5308c7119232ebb006754ec6" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.163803 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.170798 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.170883 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.171122 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.178761 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.190134 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 25 08:11:49 crc kubenswrapper[5043]: W1125 08:11:49.208424 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd0c6a41_555c_4d25_8550_8cac7501125f.slice/crio-a75bcc7e66ac8238d43a9ba30d12b47407f3365403508de228a0addb4478175c WatchSource:0}: Error finding container a75bcc7e66ac8238d43a9ba30d12b47407f3365403508de228a0addb4478175c: Status 404 returned error can't find the container with id a75bcc7e66ac8238d43a9ba30d12b47407f3365403508de228a0addb4478175c Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.229267 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33767a71-28b3-4d66-9f8a-4723e69cf860-run-httpd\") pod \"ceilometer-0\" (UID: \"33767a71-28b3-4d66-9f8a-4723e69cf860\") " pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.229312 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/33767a71-28b3-4d66-9f8a-4723e69cf860-log-httpd\") pod \"ceilometer-0\" (UID: \"33767a71-28b3-4d66-9f8a-4723e69cf860\") " pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.229363 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33767a71-28b3-4d66-9f8a-4723e69cf860-config-data\") pod \"ceilometer-0\" (UID: \"33767a71-28b3-4d66-9f8a-4723e69cf860\") " pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.229386 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33767a71-28b3-4d66-9f8a-4723e69cf860-scripts\") pod \"ceilometer-0\" (UID: \"33767a71-28b3-4d66-9f8a-4723e69cf860\") " pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.229461 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6nkr\" (UniqueName: \"kubernetes.io/projected/33767a71-28b3-4d66-9f8a-4723e69cf860-kube-api-access-n6nkr\") pod \"ceilometer-0\" (UID: \"33767a71-28b3-4d66-9f8a-4723e69cf860\") " pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.229522 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/33767a71-28b3-4d66-9f8a-4723e69cf860-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"33767a71-28b3-4d66-9f8a-4723e69cf860\") " pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.229554 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33767a71-28b3-4d66-9f8a-4723e69cf860-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"33767a71-28b3-4d66-9f8a-4723e69cf860\") " pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.229700 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33767a71-28b3-4d66-9f8a-4723e69cf860-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33767a71-28b3-4d66-9f8a-4723e69cf860\") " pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.240030 5043 scope.go:117] "RemoveContainer" containerID="f7f45f9526e79dfda7a817643fdc365a9cce0fe81e91cf3a271e9f109c83d1e5" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.335424 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33767a71-28b3-4d66-9f8a-4723e69cf860-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33767a71-28b3-4d66-9f8a-4723e69cf860\") " pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.335594 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33767a71-28b3-4d66-9f8a-4723e69cf860-run-httpd\") pod \"ceilometer-0\" (UID: \"33767a71-28b3-4d66-9f8a-4723e69cf860\") " pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.335639 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33767a71-28b3-4d66-9f8a-4723e69cf860-log-httpd\") pod \"ceilometer-0\" (UID: \"33767a71-28b3-4d66-9f8a-4723e69cf860\") " pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.335666 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33767a71-28b3-4d66-9f8a-4723e69cf860-config-data\") pod \"ceilometer-0\" (UID: 
\"33767a71-28b3-4d66-9f8a-4723e69cf860\") " pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.335694 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33767a71-28b3-4d66-9f8a-4723e69cf860-scripts\") pod \"ceilometer-0\" (UID: \"33767a71-28b3-4d66-9f8a-4723e69cf860\") " pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.335736 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6nkr\" (UniqueName: \"kubernetes.io/projected/33767a71-28b3-4d66-9f8a-4723e69cf860-kube-api-access-n6nkr\") pod \"ceilometer-0\" (UID: \"33767a71-28b3-4d66-9f8a-4723e69cf860\") " pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.335769 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/33767a71-28b3-4d66-9f8a-4723e69cf860-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"33767a71-28b3-4d66-9f8a-4723e69cf860\") " pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.335789 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33767a71-28b3-4d66-9f8a-4723e69cf860-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33767a71-28b3-4d66-9f8a-4723e69cf860\") " pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.340662 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33767a71-28b3-4d66-9f8a-4723e69cf860-run-httpd\") pod \"ceilometer-0\" (UID: \"33767a71-28b3-4d66-9f8a-4723e69cf860\") " pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.340766 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33767a71-28b3-4d66-9f8a-4723e69cf860-log-httpd\") pod \"ceilometer-0\" (UID: \"33767a71-28b3-4d66-9f8a-4723e69cf860\") " pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.346853 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33767a71-28b3-4d66-9f8a-4723e69cf860-config-data\") pod \"ceilometer-0\" (UID: \"33767a71-28b3-4d66-9f8a-4723e69cf860\") " pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.347508 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33767a71-28b3-4d66-9f8a-4723e69cf860-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33767a71-28b3-4d66-9f8a-4723e69cf860\") " pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.352121 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/33767a71-28b3-4d66-9f8a-4723e69cf860-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"33767a71-28b3-4d66-9f8a-4723e69cf860\") " pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.352275 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33767a71-28b3-4d66-9f8a-4723e69cf860-scripts\") pod \"ceilometer-0\" (UID: \"33767a71-28b3-4d66-9f8a-4723e69cf860\") " pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.355594 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33767a71-28b3-4d66-9f8a-4723e69cf860-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33767a71-28b3-4d66-9f8a-4723e69cf860\") " pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 
08:11:49.364519 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6nkr\" (UniqueName: \"kubernetes.io/projected/33767a71-28b3-4d66-9f8a-4723e69cf860-kube-api-access-n6nkr\") pod \"ceilometer-0\" (UID: \"33767a71-28b3-4d66-9f8a-4723e69cf860\") " pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.488743 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 08:11:49 crc kubenswrapper[5043]: I1125 08:11:49.996307 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 08:11:50 crc kubenswrapper[5043]: I1125 08:11:50.008729 5043 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:11:50 crc kubenswrapper[5043]: W1125 08:11:50.008226 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33767a71_28b3_4d66_9f8a_4723e69cf860.slice/crio-41962be19f9311d600efbe8992fa53126b96fab09c00c51002605eaa5cde028b WatchSource:0}: Error finding container 41962be19f9311d600efbe8992fa53126b96fab09c00c51002605eaa5cde028b: Status 404 returned error can't find the container with id 41962be19f9311d600efbe8992fa53126b96fab09c00c51002605eaa5cde028b Nov 25 08:11:50 crc kubenswrapper[5043]: I1125 08:11:50.103007 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33767a71-28b3-4d66-9f8a-4723e69cf860","Type":"ContainerStarted","Data":"41962be19f9311d600efbe8992fa53126b96fab09c00c51002605eaa5cde028b"} Nov 25 08:11:50 crc kubenswrapper[5043]: I1125 08:11:50.104629 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"fd0c6a41-555c-4d25-8550-8cac7501125f","Type":"ContainerStarted","Data":"a75bcc7e66ac8238d43a9ba30d12b47407f3365403508de228a0addb4478175c"} Nov 25 08:11:50 crc kubenswrapper[5043]: I1125 
08:11:50.976342 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d850aeb7-bd9c-41f0-91ee-b8afa951d00e" path="/var/lib/kubelet/pods/d850aeb7-bd9c-41f0-91ee-b8afa951d00e/volumes" Nov 25 08:11:51 crc kubenswrapper[5043]: I1125 08:11:51.119749 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"fd0c6a41-555c-4d25-8550-8cac7501125f","Type":"ContainerStarted","Data":"829e118e6437ed621487dafe0d281774f7129a818cc7c49b3014bc1dac32d22e"} Nov 25 08:11:53 crc kubenswrapper[5043]: I1125 08:11:53.139892 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"fd0c6a41-555c-4d25-8550-8cac7501125f","Type":"ContainerStarted","Data":"59e3215d138c0b17e5c33c9cffafc2c9bb1f72e71f0b0f1f6df1937b1c5541d6"} Nov 25 08:11:54 crc kubenswrapper[5043]: I1125 08:11:54.185702 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=6.185677472 podStartE2EDuration="6.185677472s" podCreationTimestamp="2025-11-25 08:11:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:11:54.179221088 +0000 UTC m=+3378.347416839" watchObservedRunningTime="2025-11-25 08:11:54.185677472 +0000 UTC m=+3378.353873193" Nov 25 08:11:55 crc kubenswrapper[5043]: I1125 08:11:55.361806 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Nov 25 08:11:55 crc kubenswrapper[5043]: I1125 08:11:55.486972 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6fc62" Nov 25 08:11:55 crc kubenswrapper[5043]: I1125 08:11:55.487311 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6fc62" Nov 25 08:11:55 crc kubenswrapper[5043]: I1125 08:11:55.535724 5043 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6fc62" Nov 25 08:11:56 crc kubenswrapper[5043]: I1125 08:11:56.171561 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33767a71-28b3-4d66-9f8a-4723e69cf860","Type":"ContainerStarted","Data":"8321d018a6aa7630b040bf73e604413b7e857594a1d7796311a1a42564995251"} Nov 25 08:11:56 crc kubenswrapper[5043]: I1125 08:11:56.224230 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6fc62" Nov 25 08:11:56 crc kubenswrapper[5043]: I1125 08:11:56.272877 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6fc62"] Nov 25 08:11:58 crc kubenswrapper[5043]: I1125 08:11:58.190343 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6fc62" podUID="6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a" containerName="registry-server" containerID="cri-o://5b8209863989722abcaaf44b83afe4b1a73908404bf5f3d43ed9e09d3f3e263b" gracePeriod=2 Nov 25 08:11:58 crc kubenswrapper[5043]: I1125 08:11:58.528331 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 25 08:11:59 crc kubenswrapper[5043]: I1125 08:11:59.202115 5043 generic.go:334] "Generic (PLEG): container finished" podID="6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a" containerID="5b8209863989722abcaaf44b83afe4b1a73908404bf5f3d43ed9e09d3f3e263b" exitCode=0 Nov 25 08:11:59 crc kubenswrapper[5043]: I1125 08:11:59.202287 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fc62" event={"ID":"6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a","Type":"ContainerDied","Data":"5b8209863989722abcaaf44b83afe4b1a73908404bf5f3d43ed9e09d3f3e263b"} Nov 25 08:11:59 crc kubenswrapper[5043]: I1125 08:11:59.202439 5043 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-operators-6fc62" event={"ID":"6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a","Type":"ContainerDied","Data":"d8f1f342204e963429651eff2fc98692417c3a3e1854a1c75b376292b73f1064"} Nov 25 08:11:59 crc kubenswrapper[5043]: I1125 08:11:59.202450 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8f1f342204e963429651eff2fc98692417c3a3e1854a1c75b376292b73f1064" Nov 25 08:11:59 crc kubenswrapper[5043]: I1125 08:11:59.205706 5043 generic.go:334] "Generic (PLEG): container finished" podID="26aaa75e-0383-4a71-90c4-2edc0196122f" containerID="d707c60e947513b4ad4967301590e2dae318ea0aba398dc1419d0bcf4978c278" exitCode=0 Nov 25 08:11:59 crc kubenswrapper[5043]: I1125 08:11:59.205734 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2829" event={"ID":"26aaa75e-0383-4a71-90c4-2edc0196122f","Type":"ContainerDied","Data":"d707c60e947513b4ad4967301590e2dae318ea0aba398dc1419d0bcf4978c278"} Nov 25 08:12:00 crc kubenswrapper[5043]: I1125 08:11:59.275680 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6fc62" Nov 25 08:12:00 crc kubenswrapper[5043]: I1125 08:11:59.388350 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a-utilities\") pod \"6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a\" (UID: \"6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a\") " Nov 25 08:12:00 crc kubenswrapper[5043]: I1125 08:11:59.388397 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhvn9\" (UniqueName: \"kubernetes.io/projected/6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a-kube-api-access-bhvn9\") pod \"6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a\" (UID: \"6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a\") " Nov 25 08:12:00 crc kubenswrapper[5043]: I1125 08:11:59.388442 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a-catalog-content\") pod \"6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a\" (UID: \"6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a\") " Nov 25 08:12:00 crc kubenswrapper[5043]: I1125 08:11:59.389210 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a-utilities" (OuterVolumeSpecName: "utilities") pod "6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a" (UID: "6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:12:00 crc kubenswrapper[5043]: I1125 08:11:59.394396 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a-kube-api-access-bhvn9" (OuterVolumeSpecName: "kube-api-access-bhvn9") pod "6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a" (UID: "6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a"). InnerVolumeSpecName "kube-api-access-bhvn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:12:00 crc kubenswrapper[5043]: I1125 08:11:59.473163 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a" (UID: "6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:12:00 crc kubenswrapper[5043]: I1125 08:11:59.490902 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:12:00 crc kubenswrapper[5043]: I1125 08:11:59.490934 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhvn9\" (UniqueName: \"kubernetes.io/projected/6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a-kube-api-access-bhvn9\") on node \"crc\" DevicePath \"\"" Nov 25 08:12:00 crc kubenswrapper[5043]: I1125 08:11:59.490947 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:12:00 crc kubenswrapper[5043]: I1125 08:12:00.224744 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6fc62" Nov 25 08:12:00 crc kubenswrapper[5043]: I1125 08:12:00.225004 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33767a71-28b3-4d66-9f8a-4723e69cf860","Type":"ContainerStarted","Data":"ade2f09fb1c1dcd1777d8fa323f5570b218a7a46ea2751a1d6ce165ff0b262c6"} Nov 25 08:12:00 crc kubenswrapper[5043]: I1125 08:12:00.274120 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6fc62"] Nov 25 08:12:00 crc kubenswrapper[5043]: I1125 08:12:00.284968 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6fc62"] Nov 25 08:12:00 crc kubenswrapper[5043]: I1125 08:12:00.974073 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a" path="/var/lib/kubelet/pods/6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a/volumes" Nov 25 08:12:01 crc kubenswrapper[5043]: I1125 08:12:01.235466 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2829" event={"ID":"26aaa75e-0383-4a71-90c4-2edc0196122f","Type":"ContainerStarted","Data":"4f71451f4ec5582420197df32202d3bd8db5ce6dd9068762ce296772d555db50"} Nov 25 08:12:01 crc kubenswrapper[5043]: I1125 08:12:01.238727 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33767a71-28b3-4d66-9f8a-4723e69cf860","Type":"ContainerStarted","Data":"bec2b3ccab5ae92c3c5e616baff6744cd0374c12d5a5fd3bae464680dfa11347"} Nov 25 08:12:01 crc kubenswrapper[5043]: I1125 08:12:01.260465 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d2829" podStartSLOduration=4.149840664 podStartE2EDuration="24.260441511s" podCreationTimestamp="2025-11-25 08:11:37 +0000 UTC" firstStartedPulling="2025-11-25 08:11:39.976298105 +0000 UTC m=+3364.144493826" 
lastFinishedPulling="2025-11-25 08:12:00.086898952 +0000 UTC m=+3384.255094673" observedRunningTime="2025-11-25 08:12:01.254956702 +0000 UTC m=+3385.423152423" watchObservedRunningTime="2025-11-25 08:12:01.260441511 +0000 UTC m=+3385.428637232" Nov 25 08:12:04 crc kubenswrapper[5043]: I1125 08:12:04.266884 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33767a71-28b3-4d66-9f8a-4723e69cf860","Type":"ContainerStarted","Data":"cbb6c3f04c05e0ca37abdb957a4bac55fdd506e490ef180504d1a5602a2bb786"} Nov 25 08:12:04 crc kubenswrapper[5043]: I1125 08:12:04.269290 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 08:12:04 crc kubenswrapper[5043]: I1125 08:12:04.306655 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.040655272 podStartE2EDuration="15.306638642s" podCreationTimestamp="2025-11-25 08:11:49 +0000 UTC" firstStartedPulling="2025-11-25 08:11:50.008362804 +0000 UTC m=+3374.176558525" lastFinishedPulling="2025-11-25 08:12:03.274346174 +0000 UTC m=+3387.442541895" observedRunningTime="2025-11-25 08:12:04.289042125 +0000 UTC m=+3388.457237846" watchObservedRunningTime="2025-11-25 08:12:04.306638642 +0000 UTC m=+3388.474834363" Nov 25 08:12:07 crc kubenswrapper[5043]: I1125 08:12:07.052928 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Nov 25 08:12:07 crc kubenswrapper[5043]: I1125 08:12:07.890691 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d2829" Nov 25 08:12:07 crc kubenswrapper[5043]: I1125 08:12:07.891045 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d2829" Nov 25 08:12:08 crc kubenswrapper[5043]: I1125 08:12:08.942334 5043 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/community-operators-d2829" podUID="26aaa75e-0383-4a71-90c4-2edc0196122f" containerName="registry-server" probeResult="failure" output=< Nov 25 08:12:08 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s Nov 25 08:12:08 crc kubenswrapper[5043]: > Nov 25 08:12:10 crc kubenswrapper[5043]: I1125 08:12:10.048970 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Nov 25 08:12:17 crc kubenswrapper[5043]: I1125 08:12:17.276905 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:12:17 crc kubenswrapper[5043]: I1125 08:12:17.278149 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:12:17 crc kubenswrapper[5043]: I1125 08:12:17.935135 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d2829" Nov 25 08:12:17 crc kubenswrapper[5043]: I1125 08:12:17.979799 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d2829" Nov 25 08:12:18 crc kubenswrapper[5043]: I1125 08:12:18.174438 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d2829"] Nov 25 08:12:19 crc kubenswrapper[5043]: I1125 08:12:19.411504 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d2829" 
podUID="26aaa75e-0383-4a71-90c4-2edc0196122f" containerName="registry-server" containerID="cri-o://4f71451f4ec5582420197df32202d3bd8db5ce6dd9068762ce296772d555db50" gracePeriod=2 Nov 25 08:12:19 crc kubenswrapper[5043]: I1125 08:12:19.496478 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 08:12:19 crc kubenswrapper[5043]: I1125 08:12:19.911181 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d2829" Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.024416 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26aaa75e-0383-4a71-90c4-2edc0196122f-catalog-content\") pod \"26aaa75e-0383-4a71-90c4-2edc0196122f\" (UID: \"26aaa75e-0383-4a71-90c4-2edc0196122f\") " Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.024698 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26aaa75e-0383-4a71-90c4-2edc0196122f-utilities\") pod \"26aaa75e-0383-4a71-90c4-2edc0196122f\" (UID: \"26aaa75e-0383-4a71-90c4-2edc0196122f\") " Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.024854 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jjz8\" (UniqueName: \"kubernetes.io/projected/26aaa75e-0383-4a71-90c4-2edc0196122f-kube-api-access-6jjz8\") pod \"26aaa75e-0383-4a71-90c4-2edc0196122f\" (UID: \"26aaa75e-0383-4a71-90c4-2edc0196122f\") " Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.025402 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26aaa75e-0383-4a71-90c4-2edc0196122f-utilities" (OuterVolumeSpecName: "utilities") pod "26aaa75e-0383-4a71-90c4-2edc0196122f" (UID: "26aaa75e-0383-4a71-90c4-2edc0196122f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.028462 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26aaa75e-0383-4a71-90c4-2edc0196122f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.035575 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26aaa75e-0383-4a71-90c4-2edc0196122f-kube-api-access-6jjz8" (OuterVolumeSpecName: "kube-api-access-6jjz8") pod "26aaa75e-0383-4a71-90c4-2edc0196122f" (UID: "26aaa75e-0383-4a71-90c4-2edc0196122f"). InnerVolumeSpecName "kube-api-access-6jjz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.078390 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26aaa75e-0383-4a71-90c4-2edc0196122f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26aaa75e-0383-4a71-90c4-2edc0196122f" (UID: "26aaa75e-0383-4a71-90c4-2edc0196122f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.130956 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26aaa75e-0383-4a71-90c4-2edc0196122f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.131284 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jjz8\" (UniqueName: \"kubernetes.io/projected/26aaa75e-0383-4a71-90c4-2edc0196122f-kube-api-access-6jjz8\") on node \"crc\" DevicePath \"\"" Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.421232 5043 generic.go:334] "Generic (PLEG): container finished" podID="26aaa75e-0383-4a71-90c4-2edc0196122f" containerID="4f71451f4ec5582420197df32202d3bd8db5ce6dd9068762ce296772d555db50" exitCode=0 Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.421284 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d2829" Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.421300 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2829" event={"ID":"26aaa75e-0383-4a71-90c4-2edc0196122f","Type":"ContainerDied","Data":"4f71451f4ec5582420197df32202d3bd8db5ce6dd9068762ce296772d555db50"} Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.421737 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2829" event={"ID":"26aaa75e-0383-4a71-90c4-2edc0196122f","Type":"ContainerDied","Data":"4dc9ee83269894efbb4b5d2f17080432da419953e774db4f4aa580956264c825"} Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.421784 5043 scope.go:117] "RemoveContainer" containerID="4f71451f4ec5582420197df32202d3bd8db5ce6dd9068762ce296772d555db50" Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.443469 5043 scope.go:117] "RemoveContainer" 
containerID="d707c60e947513b4ad4967301590e2dae318ea0aba398dc1419d0bcf4978c278" Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.464486 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d2829"] Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.475080 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d2829"] Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.490539 5043 scope.go:117] "RemoveContainer" containerID="ae7589b36df7daf2ae7ac1f4d094048ef5574995a370c0b5885cadb3108e8333" Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.511890 5043 scope.go:117] "RemoveContainer" containerID="4f71451f4ec5582420197df32202d3bd8db5ce6dd9068762ce296772d555db50" Nov 25 08:12:20 crc kubenswrapper[5043]: E1125 08:12:20.512411 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f71451f4ec5582420197df32202d3bd8db5ce6dd9068762ce296772d555db50\": container with ID starting with 4f71451f4ec5582420197df32202d3bd8db5ce6dd9068762ce296772d555db50 not found: ID does not exist" containerID="4f71451f4ec5582420197df32202d3bd8db5ce6dd9068762ce296772d555db50" Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.512469 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f71451f4ec5582420197df32202d3bd8db5ce6dd9068762ce296772d555db50"} err="failed to get container status \"4f71451f4ec5582420197df32202d3bd8db5ce6dd9068762ce296772d555db50\": rpc error: code = NotFound desc = could not find container \"4f71451f4ec5582420197df32202d3bd8db5ce6dd9068762ce296772d555db50\": container with ID starting with 4f71451f4ec5582420197df32202d3bd8db5ce6dd9068762ce296772d555db50 not found: ID does not exist" Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.512500 5043 scope.go:117] "RemoveContainer" 
containerID="d707c60e947513b4ad4967301590e2dae318ea0aba398dc1419d0bcf4978c278" Nov 25 08:12:20 crc kubenswrapper[5043]: E1125 08:12:20.513019 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d707c60e947513b4ad4967301590e2dae318ea0aba398dc1419d0bcf4978c278\": container with ID starting with d707c60e947513b4ad4967301590e2dae318ea0aba398dc1419d0bcf4978c278 not found: ID does not exist" containerID="d707c60e947513b4ad4967301590e2dae318ea0aba398dc1419d0bcf4978c278" Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.513118 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d707c60e947513b4ad4967301590e2dae318ea0aba398dc1419d0bcf4978c278"} err="failed to get container status \"d707c60e947513b4ad4967301590e2dae318ea0aba398dc1419d0bcf4978c278\": rpc error: code = NotFound desc = could not find container \"d707c60e947513b4ad4967301590e2dae318ea0aba398dc1419d0bcf4978c278\": container with ID starting with d707c60e947513b4ad4967301590e2dae318ea0aba398dc1419d0bcf4978c278 not found: ID does not exist" Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.513234 5043 scope.go:117] "RemoveContainer" containerID="ae7589b36df7daf2ae7ac1f4d094048ef5574995a370c0b5885cadb3108e8333" Nov 25 08:12:20 crc kubenswrapper[5043]: E1125 08:12:20.513631 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae7589b36df7daf2ae7ac1f4d094048ef5574995a370c0b5885cadb3108e8333\": container with ID starting with ae7589b36df7daf2ae7ac1f4d094048ef5574995a370c0b5885cadb3108e8333 not found: ID does not exist" containerID="ae7589b36df7daf2ae7ac1f4d094048ef5574995a370c0b5885cadb3108e8333" Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.513685 5043 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ae7589b36df7daf2ae7ac1f4d094048ef5574995a370c0b5885cadb3108e8333"} err="failed to get container status \"ae7589b36df7daf2ae7ac1f4d094048ef5574995a370c0b5885cadb3108e8333\": rpc error: code = NotFound desc = could not find container \"ae7589b36df7daf2ae7ac1f4d094048ef5574995a370c0b5885cadb3108e8333\": container with ID starting with ae7589b36df7daf2ae7ac1f4d094048ef5574995a370c0b5885cadb3108e8333 not found: ID does not exist" Nov 25 08:12:20 crc kubenswrapper[5043]: I1125 08:12:20.976395 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26aaa75e-0383-4a71-90c4-2edc0196122f" path="/var/lib/kubelet/pods/26aaa75e-0383-4a71-90c4-2edc0196122f/volumes" Nov 25 08:12:47 crc kubenswrapper[5043]: I1125 08:12:47.276317 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:12:47 crc kubenswrapper[5043]: I1125 08:12:47.276866 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:12:47 crc kubenswrapper[5043]: I1125 08:12:47.276920 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 08:12:47 crc kubenswrapper[5043]: I1125 08:12:47.277596 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"18343daeda377d7ada554d8d6723fd48b9d69c9bb0127c1bc82ab8875fb2058e"} 
pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:12:47 crc kubenswrapper[5043]: I1125 08:12:47.277673 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://18343daeda377d7ada554d8d6723fd48b9d69c9bb0127c1bc82ab8875fb2058e" gracePeriod=600 Nov 25 08:12:47 crc kubenswrapper[5043]: I1125 08:12:47.642974 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="18343daeda377d7ada554d8d6723fd48b9d69c9bb0127c1bc82ab8875fb2058e" exitCode=0 Nov 25 08:12:47 crc kubenswrapper[5043]: I1125 08:12:47.643084 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"18343daeda377d7ada554d8d6723fd48b9d69c9bb0127c1bc82ab8875fb2058e"} Nov 25 08:12:47 crc kubenswrapper[5043]: I1125 08:12:47.643293 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf"} Nov 25 08:12:47 crc kubenswrapper[5043]: I1125 08:12:47.643316 5043 scope.go:117] "RemoveContainer" containerID="ed0a4122e59aeef96222b14f683ea0069067f81a856c2780e038e58116b5f2ee" Nov 25 08:12:59 crc kubenswrapper[5043]: I1125 08:12:59.532068 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g8hd9"] Nov 25 08:12:59 crc kubenswrapper[5043]: E1125 08:12:59.533075 5043 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="26aaa75e-0383-4a71-90c4-2edc0196122f" containerName="extract-utilities" Nov 25 08:12:59 crc kubenswrapper[5043]: I1125 08:12:59.533158 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="26aaa75e-0383-4a71-90c4-2edc0196122f" containerName="extract-utilities" Nov 25 08:12:59 crc kubenswrapper[5043]: E1125 08:12:59.533180 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a" containerName="registry-server" Nov 25 08:12:59 crc kubenswrapper[5043]: I1125 08:12:59.533186 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a" containerName="registry-server" Nov 25 08:12:59 crc kubenswrapper[5043]: E1125 08:12:59.533199 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a" containerName="extract-utilities" Nov 25 08:12:59 crc kubenswrapper[5043]: I1125 08:12:59.533205 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a" containerName="extract-utilities" Nov 25 08:12:59 crc kubenswrapper[5043]: E1125 08:12:59.533226 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26aaa75e-0383-4a71-90c4-2edc0196122f" containerName="extract-content" Nov 25 08:12:59 crc kubenswrapper[5043]: I1125 08:12:59.533231 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="26aaa75e-0383-4a71-90c4-2edc0196122f" containerName="extract-content" Nov 25 08:12:59 crc kubenswrapper[5043]: E1125 08:12:59.533241 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a" containerName="extract-content" Nov 25 08:12:59 crc kubenswrapper[5043]: I1125 08:12:59.533246 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a" containerName="extract-content" Nov 25 08:12:59 crc kubenswrapper[5043]: E1125 08:12:59.533261 5043 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="26aaa75e-0383-4a71-90c4-2edc0196122f" containerName="registry-server" Nov 25 08:12:59 crc kubenswrapper[5043]: I1125 08:12:59.533267 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="26aaa75e-0383-4a71-90c4-2edc0196122f" containerName="registry-server" Nov 25 08:12:59 crc kubenswrapper[5043]: I1125 08:12:59.533503 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="26aaa75e-0383-4a71-90c4-2edc0196122f" containerName="registry-server" Nov 25 08:12:59 crc kubenswrapper[5043]: I1125 08:12:59.533522 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f7701e2-2bae-49d0-8cd9-cb319e7e0e3a" containerName="registry-server" Nov 25 08:12:59 crc kubenswrapper[5043]: I1125 08:12:59.535153 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8hd9" Nov 25 08:12:59 crc kubenswrapper[5043]: I1125 08:12:59.547787 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8hd9"] Nov 25 08:12:59 crc kubenswrapper[5043]: I1125 08:12:59.575889 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4x2v\" (UniqueName: \"kubernetes.io/projected/1678e66e-fcb2-4cbb-b49b-0d7d485eedd3-kube-api-access-j4x2v\") pod \"redhat-marketplace-g8hd9\" (UID: \"1678e66e-fcb2-4cbb-b49b-0d7d485eedd3\") " pod="openshift-marketplace/redhat-marketplace-g8hd9" Nov 25 08:12:59 crc kubenswrapper[5043]: I1125 08:12:59.575984 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1678e66e-fcb2-4cbb-b49b-0d7d485eedd3-utilities\") pod \"redhat-marketplace-g8hd9\" (UID: \"1678e66e-fcb2-4cbb-b49b-0d7d485eedd3\") " pod="openshift-marketplace/redhat-marketplace-g8hd9" Nov 25 08:12:59 crc kubenswrapper[5043]: I1125 08:12:59.576060 5043 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1678e66e-fcb2-4cbb-b49b-0d7d485eedd3-catalog-content\") pod \"redhat-marketplace-g8hd9\" (UID: \"1678e66e-fcb2-4cbb-b49b-0d7d485eedd3\") " pod="openshift-marketplace/redhat-marketplace-g8hd9" Nov 25 08:12:59 crc kubenswrapper[5043]: I1125 08:12:59.677793 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4x2v\" (UniqueName: \"kubernetes.io/projected/1678e66e-fcb2-4cbb-b49b-0d7d485eedd3-kube-api-access-j4x2v\") pod \"redhat-marketplace-g8hd9\" (UID: \"1678e66e-fcb2-4cbb-b49b-0d7d485eedd3\") " pod="openshift-marketplace/redhat-marketplace-g8hd9" Nov 25 08:12:59 crc kubenswrapper[5043]: I1125 08:12:59.677891 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1678e66e-fcb2-4cbb-b49b-0d7d485eedd3-utilities\") pod \"redhat-marketplace-g8hd9\" (UID: \"1678e66e-fcb2-4cbb-b49b-0d7d485eedd3\") " pod="openshift-marketplace/redhat-marketplace-g8hd9" Nov 25 08:12:59 crc kubenswrapper[5043]: I1125 08:12:59.677959 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1678e66e-fcb2-4cbb-b49b-0d7d485eedd3-catalog-content\") pod \"redhat-marketplace-g8hd9\" (UID: \"1678e66e-fcb2-4cbb-b49b-0d7d485eedd3\") " pod="openshift-marketplace/redhat-marketplace-g8hd9" Nov 25 08:12:59 crc kubenswrapper[5043]: I1125 08:12:59.678560 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1678e66e-fcb2-4cbb-b49b-0d7d485eedd3-utilities\") pod \"redhat-marketplace-g8hd9\" (UID: \"1678e66e-fcb2-4cbb-b49b-0d7d485eedd3\") " pod="openshift-marketplace/redhat-marketplace-g8hd9" Nov 25 08:12:59 crc kubenswrapper[5043]: I1125 08:12:59.678569 5043 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1678e66e-fcb2-4cbb-b49b-0d7d485eedd3-catalog-content\") pod \"redhat-marketplace-g8hd9\" (UID: \"1678e66e-fcb2-4cbb-b49b-0d7d485eedd3\") " pod="openshift-marketplace/redhat-marketplace-g8hd9" Nov 25 08:12:59 crc kubenswrapper[5043]: I1125 08:12:59.704910 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4x2v\" (UniqueName: \"kubernetes.io/projected/1678e66e-fcb2-4cbb-b49b-0d7d485eedd3-kube-api-access-j4x2v\") pod \"redhat-marketplace-g8hd9\" (UID: \"1678e66e-fcb2-4cbb-b49b-0d7d485eedd3\") " pod="openshift-marketplace/redhat-marketplace-g8hd9" Nov 25 08:12:59 crc kubenswrapper[5043]: I1125 08:12:59.857745 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8hd9" Nov 25 08:13:00 crc kubenswrapper[5043]: I1125 08:13:00.334444 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8hd9"] Nov 25 08:13:00 crc kubenswrapper[5043]: I1125 08:13:00.645021 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-54c548f75b-mk6ml"] Nov 25 08:13:00 crc kubenswrapper[5043]: I1125 08:13:00.647049 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-54c548f75b-mk6ml" Nov 25 08:13:00 crc kubenswrapper[5043]: I1125 08:13:00.695388 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-54c548f75b-mk6ml"] Nov 25 08:13:00 crc kubenswrapper[5043]: I1125 08:13:00.702430 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b696m\" (UniqueName: \"kubernetes.io/projected/d845a43d-ee06-454f-b68d-cdb949cecffe-kube-api-access-b696m\") pod \"openstack-operator-controller-operator-54c548f75b-mk6ml\" (UID: \"d845a43d-ee06-454f-b68d-cdb949cecffe\") " pod="openstack-operators/openstack-operator-controller-operator-54c548f75b-mk6ml" Nov 25 08:13:00 crc kubenswrapper[5043]: I1125 08:13:00.787196 5043 generic.go:334] "Generic (PLEG): container finished" podID="1678e66e-fcb2-4cbb-b49b-0d7d485eedd3" containerID="4c39a2c6c0fc8999be81f1bc512e222ae93d0af8492ff4660e549dbe92acda71" exitCode=0 Nov 25 08:13:00 crc kubenswrapper[5043]: I1125 08:13:00.787252 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8hd9" event={"ID":"1678e66e-fcb2-4cbb-b49b-0d7d485eedd3","Type":"ContainerDied","Data":"4c39a2c6c0fc8999be81f1bc512e222ae93d0af8492ff4660e549dbe92acda71"} Nov 25 08:13:00 crc kubenswrapper[5043]: I1125 08:13:00.787283 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8hd9" event={"ID":"1678e66e-fcb2-4cbb-b49b-0d7d485eedd3","Type":"ContainerStarted","Data":"652af379a334e51f732ced505df037cd932cb3dba2149783ef5639febafbe1e5"} Nov 25 08:13:00 crc kubenswrapper[5043]: I1125 08:13:00.804664 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b696m\" (UniqueName: \"kubernetes.io/projected/d845a43d-ee06-454f-b68d-cdb949cecffe-kube-api-access-b696m\") pod 
\"openstack-operator-controller-operator-54c548f75b-mk6ml\" (UID: \"d845a43d-ee06-454f-b68d-cdb949cecffe\") " pod="openstack-operators/openstack-operator-controller-operator-54c548f75b-mk6ml" Nov 25 08:13:00 crc kubenswrapper[5043]: I1125 08:13:00.828962 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b696m\" (UniqueName: \"kubernetes.io/projected/d845a43d-ee06-454f-b68d-cdb949cecffe-kube-api-access-b696m\") pod \"openstack-operator-controller-operator-54c548f75b-mk6ml\" (UID: \"d845a43d-ee06-454f-b68d-cdb949cecffe\") " pod="openstack-operators/openstack-operator-controller-operator-54c548f75b-mk6ml" Nov 25 08:13:00 crc kubenswrapper[5043]: I1125 08:13:00.966937 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-54c548f75b-mk6ml" Nov 25 08:13:01 crc kubenswrapper[5043]: I1125 08:13:01.415449 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-54c548f75b-mk6ml"] Nov 25 08:13:01 crc kubenswrapper[5043]: W1125 08:13:01.427861 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd845a43d_ee06_454f_b68d_cdb949cecffe.slice/crio-8d1a387a63a06b98b4ddb1062e9c7eb5b962778392908b10ae11ba76fd8b9ea9 WatchSource:0}: Error finding container 8d1a387a63a06b98b4ddb1062e9c7eb5b962778392908b10ae11ba76fd8b9ea9: Status 404 returned error can't find the container with id 8d1a387a63a06b98b4ddb1062e9c7eb5b962778392908b10ae11ba76fd8b9ea9 Nov 25 08:13:01 crc kubenswrapper[5043]: I1125 08:13:01.796800 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-54c548f75b-mk6ml" event={"ID":"d845a43d-ee06-454f-b68d-cdb949cecffe","Type":"ContainerStarted","Data":"11c68e8a1546e0b3e4e2f97896785e727bd5de02fbb4f3be6d5b49405ed7b720"} Nov 25 08:13:01 crc 
kubenswrapper[5043]: I1125 08:13:01.797149 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-54c548f75b-mk6ml" Nov 25 08:13:01 crc kubenswrapper[5043]: I1125 08:13:01.797164 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-54c548f75b-mk6ml" event={"ID":"d845a43d-ee06-454f-b68d-cdb949cecffe","Type":"ContainerStarted","Data":"8d1a387a63a06b98b4ddb1062e9c7eb5b962778392908b10ae11ba76fd8b9ea9"} Nov 25 08:13:01 crc kubenswrapper[5043]: I1125 08:13:01.798877 5043 generic.go:334] "Generic (PLEG): container finished" podID="1678e66e-fcb2-4cbb-b49b-0d7d485eedd3" containerID="9d4d81d7bc29b2912afc88f02972a17b6a00cc49455a88ffbd5ab5d5d29007df" exitCode=0 Nov 25 08:13:01 crc kubenswrapper[5043]: I1125 08:13:01.798918 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8hd9" event={"ID":"1678e66e-fcb2-4cbb-b49b-0d7d485eedd3","Type":"ContainerDied","Data":"9d4d81d7bc29b2912afc88f02972a17b6a00cc49455a88ffbd5ab5d5d29007df"} Nov 25 08:13:01 crc kubenswrapper[5043]: I1125 08:13:01.828411 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-54c548f75b-mk6ml" podStartSLOduration=1.8283960559999999 podStartE2EDuration="1.828396056s" podCreationTimestamp="2025-11-25 08:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:13:01.823187485 +0000 UTC m=+3445.991383216" watchObservedRunningTime="2025-11-25 08:13:01.828396056 +0000 UTC m=+3445.996591777" Nov 25 08:13:02 crc kubenswrapper[5043]: I1125 08:13:02.810585 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8hd9" 
event={"ID":"1678e66e-fcb2-4cbb-b49b-0d7d485eedd3","Type":"ContainerStarted","Data":"c450ee18a840aac909544933be071fc212eee480daedf7c85e5768309b4745b9"} Nov 25 08:13:02 crc kubenswrapper[5043]: I1125 08:13:02.838335 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g8hd9" podStartSLOduration=2.32769305 podStartE2EDuration="3.838312765s" podCreationTimestamp="2025-11-25 08:12:59 +0000 UTC" firstStartedPulling="2025-11-25 08:13:00.788754765 +0000 UTC m=+3444.956950486" lastFinishedPulling="2025-11-25 08:13:02.29937446 +0000 UTC m=+3446.467570201" observedRunningTime="2025-11-25 08:13:02.835433828 +0000 UTC m=+3447.003629569" watchObservedRunningTime="2025-11-25 08:13:02.838312765 +0000 UTC m=+3447.006508486" Nov 25 08:13:09 crc kubenswrapper[5043]: I1125 08:13:09.858535 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g8hd9" Nov 25 08:13:09 crc kubenswrapper[5043]: I1125 08:13:09.859138 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g8hd9" Nov 25 08:13:09 crc kubenswrapper[5043]: I1125 08:13:09.912628 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g8hd9" Nov 25 08:13:09 crc kubenswrapper[5043]: I1125 08:13:09.966126 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g8hd9" Nov 25 08:13:10 crc kubenswrapper[5043]: I1125 08:13:10.147616 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8hd9"] Nov 25 08:13:10 crc kubenswrapper[5043]: I1125 08:13:10.972439 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-54c548f75b-mk6ml" Nov 25 08:13:11 crc kubenswrapper[5043]: I1125 08:13:11.053321 5043 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b567956b5-x2t4p"] Nov 25 08:13:11 crc kubenswrapper[5043]: I1125 08:13:11.053523 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x2t4p" podUID="30018f1d-11c7-4b61-b5a3-60b8f9848f29" containerName="operator" containerID="cri-o://aa63945023a730072548b5ce371eec5b064d761ffe23644a59b69c652b675ec3" gracePeriod=10 Nov 25 08:13:11 crc kubenswrapper[5043]: I1125 08:13:11.250544 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x2t4p" podUID="30018f1d-11c7-4b61-b5a3-60b8f9848f29" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.71:8081/readyz\": dial tcp 10.217.0.71:8081: connect: connection refused" Nov 25 08:13:11 crc kubenswrapper[5043]: I1125 08:13:11.595952 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x2t4p" Nov 25 08:13:11 crc kubenswrapper[5043]: I1125 08:13:11.671653 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwzjt\" (UniqueName: \"kubernetes.io/projected/30018f1d-11c7-4b61-b5a3-60b8f9848f29-kube-api-access-kwzjt\") pod \"30018f1d-11c7-4b61-b5a3-60b8f9848f29\" (UID: \"30018f1d-11c7-4b61-b5a3-60b8f9848f29\") " Nov 25 08:13:11 crc kubenswrapper[5043]: I1125 08:13:11.679854 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30018f1d-11c7-4b61-b5a3-60b8f9848f29-kube-api-access-kwzjt" (OuterVolumeSpecName: "kube-api-access-kwzjt") pod "30018f1d-11c7-4b61-b5a3-60b8f9848f29" (UID: "30018f1d-11c7-4b61-b5a3-60b8f9848f29"). InnerVolumeSpecName "kube-api-access-kwzjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:13:11 crc kubenswrapper[5043]: I1125 08:13:11.774892 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwzjt\" (UniqueName: \"kubernetes.io/projected/30018f1d-11c7-4b61-b5a3-60b8f9848f29-kube-api-access-kwzjt\") on node \"crc\" DevicePath \"\"" Nov 25 08:13:11 crc kubenswrapper[5043]: I1125 08:13:11.900079 5043 generic.go:334] "Generic (PLEG): container finished" podID="30018f1d-11c7-4b61-b5a3-60b8f9848f29" containerID="aa63945023a730072548b5ce371eec5b064d761ffe23644a59b69c652b675ec3" exitCode=0 Nov 25 08:13:11 crc kubenswrapper[5043]: I1125 08:13:11.900132 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x2t4p" Nov 25 08:13:11 crc kubenswrapper[5043]: I1125 08:13:11.900189 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x2t4p" event={"ID":"30018f1d-11c7-4b61-b5a3-60b8f9848f29","Type":"ContainerDied","Data":"aa63945023a730072548b5ce371eec5b064d761ffe23644a59b69c652b675ec3"} Nov 25 08:13:11 crc kubenswrapper[5043]: I1125 08:13:11.900247 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x2t4p" event={"ID":"30018f1d-11c7-4b61-b5a3-60b8f9848f29","Type":"ContainerDied","Data":"933909587dfd06f739666d8d4857d80473c18fef854b0a195bdf50301889dbd9"} Nov 25 08:13:11 crc kubenswrapper[5043]: I1125 08:13:11.900275 5043 scope.go:117] "RemoveContainer" containerID="aa63945023a730072548b5ce371eec5b064d761ffe23644a59b69c652b675ec3" Nov 25 08:13:11 crc kubenswrapper[5043]: I1125 08:13:11.900731 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g8hd9" podUID="1678e66e-fcb2-4cbb-b49b-0d7d485eedd3" containerName="registry-server" 
containerID="cri-o://c450ee18a840aac909544933be071fc212eee480daedf7c85e5768309b4745b9" gracePeriod=2 Nov 25 08:13:11 crc kubenswrapper[5043]: I1125 08:13:11.947795 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b567956b5-x2t4p"] Nov 25 08:13:11 crc kubenswrapper[5043]: I1125 08:13:11.954764 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b567956b5-x2t4p"] Nov 25 08:13:11 crc kubenswrapper[5043]: I1125 08:13:11.961142 5043 scope.go:117] "RemoveContainer" containerID="aa63945023a730072548b5ce371eec5b064d761ffe23644a59b69c652b675ec3" Nov 25 08:13:11 crc kubenswrapper[5043]: E1125 08:13:11.961601 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa63945023a730072548b5ce371eec5b064d761ffe23644a59b69c652b675ec3\": container with ID starting with aa63945023a730072548b5ce371eec5b064d761ffe23644a59b69c652b675ec3 not found: ID does not exist" containerID="aa63945023a730072548b5ce371eec5b064d761ffe23644a59b69c652b675ec3" Nov 25 08:13:11 crc kubenswrapper[5043]: I1125 08:13:11.961657 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa63945023a730072548b5ce371eec5b064d761ffe23644a59b69c652b675ec3"} err="failed to get container status \"aa63945023a730072548b5ce371eec5b064d761ffe23644a59b69c652b675ec3\": rpc error: code = NotFound desc = could not find container \"aa63945023a730072548b5ce371eec5b064d761ffe23644a59b69c652b675ec3\": container with ID starting with aa63945023a730072548b5ce371eec5b064d761ffe23644a59b69c652b675ec3 not found: ID does not exist" Nov 25 08:13:12 crc kubenswrapper[5043]: I1125 08:13:12.409176 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8hd9" Nov 25 08:13:12 crc kubenswrapper[5043]: I1125 08:13:12.488870 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1678e66e-fcb2-4cbb-b49b-0d7d485eedd3-utilities\") pod \"1678e66e-fcb2-4cbb-b49b-0d7d485eedd3\" (UID: \"1678e66e-fcb2-4cbb-b49b-0d7d485eedd3\") " Nov 25 08:13:12 crc kubenswrapper[5043]: I1125 08:13:12.488955 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4x2v\" (UniqueName: \"kubernetes.io/projected/1678e66e-fcb2-4cbb-b49b-0d7d485eedd3-kube-api-access-j4x2v\") pod \"1678e66e-fcb2-4cbb-b49b-0d7d485eedd3\" (UID: \"1678e66e-fcb2-4cbb-b49b-0d7d485eedd3\") " Nov 25 08:13:12 crc kubenswrapper[5043]: I1125 08:13:12.489059 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1678e66e-fcb2-4cbb-b49b-0d7d485eedd3-catalog-content\") pod \"1678e66e-fcb2-4cbb-b49b-0d7d485eedd3\" (UID: \"1678e66e-fcb2-4cbb-b49b-0d7d485eedd3\") " Nov 25 08:13:12 crc kubenswrapper[5043]: I1125 08:13:12.489972 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1678e66e-fcb2-4cbb-b49b-0d7d485eedd3-utilities" (OuterVolumeSpecName: "utilities") pod "1678e66e-fcb2-4cbb-b49b-0d7d485eedd3" (UID: "1678e66e-fcb2-4cbb-b49b-0d7d485eedd3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:13:12 crc kubenswrapper[5043]: I1125 08:13:12.494243 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1678e66e-fcb2-4cbb-b49b-0d7d485eedd3-kube-api-access-j4x2v" (OuterVolumeSpecName: "kube-api-access-j4x2v") pod "1678e66e-fcb2-4cbb-b49b-0d7d485eedd3" (UID: "1678e66e-fcb2-4cbb-b49b-0d7d485eedd3"). InnerVolumeSpecName "kube-api-access-j4x2v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:13:12 crc kubenswrapper[5043]: I1125 08:13:12.593168 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1678e66e-fcb2-4cbb-b49b-0d7d485eedd3-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:13:12 crc kubenswrapper[5043]: I1125 08:13:12.593254 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4x2v\" (UniqueName: \"kubernetes.io/projected/1678e66e-fcb2-4cbb-b49b-0d7d485eedd3-kube-api-access-j4x2v\") on node \"crc\" DevicePath \"\"" Nov 25 08:13:12 crc kubenswrapper[5043]: I1125 08:13:12.911640 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8hd9" event={"ID":"1678e66e-fcb2-4cbb-b49b-0d7d485eedd3","Type":"ContainerDied","Data":"c450ee18a840aac909544933be071fc212eee480daedf7c85e5768309b4745b9"} Nov 25 08:13:12 crc kubenswrapper[5043]: I1125 08:13:12.911706 5043 scope.go:117] "RemoveContainer" containerID="c450ee18a840aac909544933be071fc212eee480daedf7c85e5768309b4745b9" Nov 25 08:13:12 crc kubenswrapper[5043]: I1125 08:13:12.911725 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8hd9" Nov 25 08:13:12 crc kubenswrapper[5043]: I1125 08:13:12.911652 5043 generic.go:334] "Generic (PLEG): container finished" podID="1678e66e-fcb2-4cbb-b49b-0d7d485eedd3" containerID="c450ee18a840aac909544933be071fc212eee480daedf7c85e5768309b4745b9" exitCode=0 Nov 25 08:13:12 crc kubenswrapper[5043]: I1125 08:13:12.911870 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8hd9" event={"ID":"1678e66e-fcb2-4cbb-b49b-0d7d485eedd3","Type":"ContainerDied","Data":"652af379a334e51f732ced505df037cd932cb3dba2149783ef5639febafbe1e5"} Nov 25 08:13:13 crc kubenswrapper[5043]: I1125 08:13:13.125040 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30018f1d-11c7-4b61-b5a3-60b8f9848f29" path="/var/lib/kubelet/pods/30018f1d-11c7-4b61-b5a3-60b8f9848f29/volumes" Nov 25 08:13:13 crc kubenswrapper[5043]: I1125 08:13:13.149620 5043 scope.go:117] "RemoveContainer" containerID="9d4d81d7bc29b2912afc88f02972a17b6a00cc49455a88ffbd5ab5d5d29007df" Nov 25 08:13:13 crc kubenswrapper[5043]: I1125 08:13:13.465128 5043 scope.go:117] "RemoveContainer" containerID="4c39a2c6c0fc8999be81f1bc512e222ae93d0af8492ff4660e549dbe92acda71" Nov 25 08:13:13 crc kubenswrapper[5043]: I1125 08:13:13.501727 5043 scope.go:117] "RemoveContainer" containerID="c450ee18a840aac909544933be071fc212eee480daedf7c85e5768309b4745b9" Nov 25 08:13:13 crc kubenswrapper[5043]: E1125 08:13:13.502436 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c450ee18a840aac909544933be071fc212eee480daedf7c85e5768309b4745b9\": container with ID starting with c450ee18a840aac909544933be071fc212eee480daedf7c85e5768309b4745b9 not found: ID does not exist" containerID="c450ee18a840aac909544933be071fc212eee480daedf7c85e5768309b4745b9" Nov 25 08:13:13 crc kubenswrapper[5043]: I1125 08:13:13.502468 5043 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c450ee18a840aac909544933be071fc212eee480daedf7c85e5768309b4745b9"} err="failed to get container status \"c450ee18a840aac909544933be071fc212eee480daedf7c85e5768309b4745b9\": rpc error: code = NotFound desc = could not find container \"c450ee18a840aac909544933be071fc212eee480daedf7c85e5768309b4745b9\": container with ID starting with c450ee18a840aac909544933be071fc212eee480daedf7c85e5768309b4745b9 not found: ID does not exist" Nov 25 08:13:13 crc kubenswrapper[5043]: I1125 08:13:13.502491 5043 scope.go:117] "RemoveContainer" containerID="9d4d81d7bc29b2912afc88f02972a17b6a00cc49455a88ffbd5ab5d5d29007df" Nov 25 08:13:13 crc kubenswrapper[5043]: E1125 08:13:13.502811 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d4d81d7bc29b2912afc88f02972a17b6a00cc49455a88ffbd5ab5d5d29007df\": container with ID starting with 9d4d81d7bc29b2912afc88f02972a17b6a00cc49455a88ffbd5ab5d5d29007df not found: ID does not exist" containerID="9d4d81d7bc29b2912afc88f02972a17b6a00cc49455a88ffbd5ab5d5d29007df" Nov 25 08:13:13 crc kubenswrapper[5043]: I1125 08:13:13.502836 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d4d81d7bc29b2912afc88f02972a17b6a00cc49455a88ffbd5ab5d5d29007df"} err="failed to get container status \"9d4d81d7bc29b2912afc88f02972a17b6a00cc49455a88ffbd5ab5d5d29007df\": rpc error: code = NotFound desc = could not find container \"9d4d81d7bc29b2912afc88f02972a17b6a00cc49455a88ffbd5ab5d5d29007df\": container with ID starting with 9d4d81d7bc29b2912afc88f02972a17b6a00cc49455a88ffbd5ab5d5d29007df not found: ID does not exist" Nov 25 08:13:13 crc kubenswrapper[5043]: I1125 08:13:13.502851 5043 scope.go:117] "RemoveContainer" containerID="4c39a2c6c0fc8999be81f1bc512e222ae93d0af8492ff4660e549dbe92acda71" Nov 25 08:13:13 crc kubenswrapper[5043]: E1125 
08:13:13.503158 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c39a2c6c0fc8999be81f1bc512e222ae93d0af8492ff4660e549dbe92acda71\": container with ID starting with 4c39a2c6c0fc8999be81f1bc512e222ae93d0af8492ff4660e549dbe92acda71 not found: ID does not exist" containerID="4c39a2c6c0fc8999be81f1bc512e222ae93d0af8492ff4660e549dbe92acda71" Nov 25 08:13:13 crc kubenswrapper[5043]: I1125 08:13:13.503181 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c39a2c6c0fc8999be81f1bc512e222ae93d0af8492ff4660e549dbe92acda71"} err="failed to get container status \"4c39a2c6c0fc8999be81f1bc512e222ae93d0af8492ff4660e549dbe92acda71\": rpc error: code = NotFound desc = could not find container \"4c39a2c6c0fc8999be81f1bc512e222ae93d0af8492ff4660e549dbe92acda71\": container with ID starting with 4c39a2c6c0fc8999be81f1bc512e222ae93d0af8492ff4660e549dbe92acda71 not found: ID does not exist" Nov 25 08:13:13 crc kubenswrapper[5043]: I1125 08:13:13.718156 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1678e66e-fcb2-4cbb-b49b-0d7d485eedd3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1678e66e-fcb2-4cbb-b49b-0d7d485eedd3" (UID: "1678e66e-fcb2-4cbb-b49b-0d7d485eedd3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:13:13 crc kubenswrapper[5043]: I1125 08:13:13.818479 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1678e66e-fcb2-4cbb-b49b-0d7d485eedd3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:13:13 crc kubenswrapper[5043]: I1125 08:13:13.845809 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8hd9"] Nov 25 08:13:13 crc kubenswrapper[5043]: I1125 08:13:13.854091 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8hd9"] Nov 25 08:13:14 crc kubenswrapper[5043]: I1125 08:13:14.974257 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1678e66e-fcb2-4cbb-b49b-0d7d485eedd3" path="/var/lib/kubelet/pods/1678e66e-fcb2-4cbb-b49b-0d7d485eedd3/volumes" Nov 25 08:13:45 crc kubenswrapper[5043]: I1125 08:13:45.962429 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw"] Nov 25 08:13:45 crc kubenswrapper[5043]: E1125 08:13:45.965019 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30018f1d-11c7-4b61-b5a3-60b8f9848f29" containerName="operator" Nov 25 08:13:45 crc kubenswrapper[5043]: I1125 08:13:45.965190 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="30018f1d-11c7-4b61-b5a3-60b8f9848f29" containerName="operator" Nov 25 08:13:45 crc kubenswrapper[5043]: E1125 08:13:45.965314 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1678e66e-fcb2-4cbb-b49b-0d7d485eedd3" containerName="extract-utilities" Nov 25 08:13:45 crc kubenswrapper[5043]: I1125 08:13:45.965422 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="1678e66e-fcb2-4cbb-b49b-0d7d485eedd3" containerName="extract-utilities" Nov 25 08:13:45 crc kubenswrapper[5043]: E1125 08:13:45.965583 5043 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1678e66e-fcb2-4cbb-b49b-0d7d485eedd3" containerName="extract-content" Nov 25 08:13:45 crc kubenswrapper[5043]: I1125 08:13:45.965724 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="1678e66e-fcb2-4cbb-b49b-0d7d485eedd3" containerName="extract-content" Nov 25 08:13:45 crc kubenswrapper[5043]: E1125 08:13:45.965896 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1678e66e-fcb2-4cbb-b49b-0d7d485eedd3" containerName="registry-server" Nov 25 08:13:45 crc kubenswrapper[5043]: I1125 08:13:45.966011 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="1678e66e-fcb2-4cbb-b49b-0d7d485eedd3" containerName="registry-server" Nov 25 08:13:45 crc kubenswrapper[5043]: I1125 08:13:45.966417 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="30018f1d-11c7-4b61-b5a3-60b8f9848f29" containerName="operator" Nov 25 08:13:45 crc kubenswrapper[5043]: I1125 08:13:45.966561 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="1678e66e-fcb2-4cbb-b49b-0d7d485eedd3" containerName="registry-server" Nov 25 08:13:45 crc kubenswrapper[5043]: I1125 08:13:45.968299 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw" Nov 25 08:13:45 crc kubenswrapper[5043]: I1125 08:13:45.974002 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw"] Nov 25 08:13:46 crc kubenswrapper[5043]: I1125 08:13:46.018685 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5hcs\" (UniqueName: \"kubernetes.io/projected/8cfc66d8-27da-4bce-9a5f-62a019bfd836-kube-api-access-d5hcs\") pod \"test-operator-controller-manager-556c5c9c9c-82qgw\" (UID: \"8cfc66d8-27da-4bce-9a5f-62a019bfd836\") " pod="openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw" Nov 25 08:13:46 crc kubenswrapper[5043]: I1125 08:13:46.119349 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5hcs\" (UniqueName: \"kubernetes.io/projected/8cfc66d8-27da-4bce-9a5f-62a019bfd836-kube-api-access-d5hcs\") pod \"test-operator-controller-manager-556c5c9c9c-82qgw\" (UID: \"8cfc66d8-27da-4bce-9a5f-62a019bfd836\") " pod="openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw" Nov 25 08:13:46 crc kubenswrapper[5043]: I1125 08:13:46.140716 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5hcs\" (UniqueName: \"kubernetes.io/projected/8cfc66d8-27da-4bce-9a5f-62a019bfd836-kube-api-access-d5hcs\") pod \"test-operator-controller-manager-556c5c9c9c-82qgw\" (UID: \"8cfc66d8-27da-4bce-9a5f-62a019bfd836\") " pod="openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw" Nov 25 08:13:46 crc kubenswrapper[5043]: I1125 08:13:46.300975 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw" Nov 25 08:13:46 crc kubenswrapper[5043]: I1125 08:13:46.793381 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw"] Nov 25 08:13:47 crc kubenswrapper[5043]: I1125 08:13:47.241770 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw" event={"ID":"8cfc66d8-27da-4bce-9a5f-62a019bfd836","Type":"ContainerStarted","Data":"3e8a69a1cbd4427caa5620f3b0318fb265835cdb8dc09676d30b7560de2006e6"} Nov 25 08:13:49 crc kubenswrapper[5043]: I1125 08:13:49.285002 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw" event={"ID":"8cfc66d8-27da-4bce-9a5f-62a019bfd836","Type":"ContainerStarted","Data":"a601a2924b9d636ce78e06cb1143c518646cfa1be008e75927a842d86036405f"} Nov 25 08:13:50 crc kubenswrapper[5043]: I1125 08:13:50.294725 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw" event={"ID":"8cfc66d8-27da-4bce-9a5f-62a019bfd836","Type":"ContainerStarted","Data":"3f03f7c25c6c190d290d6677eac4fff428f04d2dd15af53272178a80e5396d7c"} Nov 25 08:13:50 crc kubenswrapper[5043]: I1125 08:13:50.295446 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw" Nov 25 08:13:50 crc kubenswrapper[5043]: I1125 08:13:50.310431 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw" podStartSLOduration=3.124589091 podStartE2EDuration="5.310408161s" podCreationTimestamp="2025-11-25 08:13:45 +0000 UTC" firstStartedPulling="2025-11-25 08:13:46.797820567 +0000 UTC m=+3490.966016288" lastFinishedPulling="2025-11-25 
08:13:48.983639637 +0000 UTC m=+3493.151835358" observedRunningTime="2025-11-25 08:13:50.308818288 +0000 UTC m=+3494.477014009" watchObservedRunningTime="2025-11-25 08:13:50.310408161 +0000 UTC m=+3494.478603882" Nov 25 08:13:56 crc kubenswrapper[5043]: I1125 08:13:56.303085 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw" Nov 25 08:13:56 crc kubenswrapper[5043]: I1125 08:13:56.353629 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c"] Nov 25 08:13:56 crc kubenswrapper[5043]: I1125 08:13:56.354148 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c" podUID="01b420eb-b6d7-4534-9bdb-967c7a7c163f" containerName="manager" containerID="cri-o://3ac19bcf572e7516f131d1f6e2edbd9ef287f6773f873685a1d448c89ed2dbf7" gracePeriod=10 Nov 25 08:13:56 crc kubenswrapper[5043]: I1125 08:13:56.354217 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c" podUID="01b420eb-b6d7-4534-9bdb-967c7a7c163f" containerName="kube-rbac-proxy" containerID="cri-o://6a872f9088ab0c0b62fe27c8852a4d1e384ec136e4731ba6f7bec4bfd6b67a29" gracePeriod=10 Nov 25 08:13:57 crc kubenswrapper[5043]: I1125 08:13:57.367635 5043 generic.go:334] "Generic (PLEG): container finished" podID="01b420eb-b6d7-4534-9bdb-967c7a7c163f" containerID="6a872f9088ab0c0b62fe27c8852a4d1e384ec136e4731ba6f7bec4bfd6b67a29" exitCode=0 Nov 25 08:13:57 crc kubenswrapper[5043]: I1125 08:13:57.368832 5043 generic.go:334] "Generic (PLEG): container finished" podID="01b420eb-b6d7-4534-9bdb-967c7a7c163f" containerID="3ac19bcf572e7516f131d1f6e2edbd9ef287f6773f873685a1d448c89ed2dbf7" exitCode=0 Nov 25 08:13:57 crc kubenswrapper[5043]: I1125 08:13:57.367688 5043 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c" event={"ID":"01b420eb-b6d7-4534-9bdb-967c7a7c163f","Type":"ContainerDied","Data":"6a872f9088ab0c0b62fe27c8852a4d1e384ec136e4731ba6f7bec4bfd6b67a29"} Nov 25 08:13:57 crc kubenswrapper[5043]: I1125 08:13:57.369061 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c" event={"ID":"01b420eb-b6d7-4534-9bdb-967c7a7c163f","Type":"ContainerDied","Data":"3ac19bcf572e7516f131d1f6e2edbd9ef287f6773f873685a1d448c89ed2dbf7"} Nov 25 08:13:58 crc kubenswrapper[5043]: I1125 08:13:58.751757 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c" Nov 25 08:13:58 crc kubenswrapper[5043]: I1125 08:13:58.885208 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d97q2\" (UniqueName: \"kubernetes.io/projected/01b420eb-b6d7-4534-9bdb-967c7a7c163f-kube-api-access-d97q2\") pod \"01b420eb-b6d7-4534-9bdb-967c7a7c163f\" (UID: \"01b420eb-b6d7-4534-9bdb-967c7a7c163f\") " Nov 25 08:13:58 crc kubenswrapper[5043]: I1125 08:13:58.890781 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01b420eb-b6d7-4534-9bdb-967c7a7c163f-kube-api-access-d97q2" (OuterVolumeSpecName: "kube-api-access-d97q2") pod "01b420eb-b6d7-4534-9bdb-967c7a7c163f" (UID: "01b420eb-b6d7-4534-9bdb-967c7a7c163f"). InnerVolumeSpecName "kube-api-access-d97q2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:13:58 crc kubenswrapper[5043]: I1125 08:13:58.988136 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d97q2\" (UniqueName: \"kubernetes.io/projected/01b420eb-b6d7-4534-9bdb-967c7a7c163f-kube-api-access-d97q2\") on node \"crc\" DevicePath \"\"" Nov 25 08:13:59 crc kubenswrapper[5043]: I1125 08:13:59.401686 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c" event={"ID":"01b420eb-b6d7-4534-9bdb-967c7a7c163f","Type":"ContainerDied","Data":"1a900e4f6ad94fe0990da84d61f76437c553e6c6adbe90db138038663731ff36"} Nov 25 08:13:59 crc kubenswrapper[5043]: I1125 08:13:59.401738 5043 scope.go:117] "RemoveContainer" containerID="6a872f9088ab0c0b62fe27c8852a4d1e384ec136e4731ba6f7bec4bfd6b67a29" Nov 25 08:13:59 crc kubenswrapper[5043]: I1125 08:13:59.401878 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c" Nov 25 08:13:59 crc kubenswrapper[5043]: I1125 08:13:59.427030 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c"] Nov 25 08:13:59 crc kubenswrapper[5043]: I1125 08:13:59.457528 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-vqd9c"] Nov 25 08:13:59 crc kubenswrapper[5043]: I1125 08:13:59.459744 5043 scope.go:117] "RemoveContainer" containerID="3ac19bcf572e7516f131d1f6e2edbd9ef287f6773f873685a1d448c89ed2dbf7" Nov 25 08:14:00 crc kubenswrapper[5043]: I1125 08:14:00.973976 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01b420eb-b6d7-4534-9bdb-967c7a7c163f" path="/var/lib/kubelet/pods/01b420eb-b6d7-4534-9bdb-967c7a7c163f/volumes" Nov 25 08:14:47 crc kubenswrapper[5043]: I1125 08:14:47.276532 5043 patch_prober.go:28] interesting 
pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:14:47 crc kubenswrapper[5043]: I1125 08:14:47.277588 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:15:00 crc kubenswrapper[5043]: I1125 08:15:00.170119 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400975-g6rws"] Nov 25 08:15:00 crc kubenswrapper[5043]: E1125 08:15:00.171097 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01b420eb-b6d7-4534-9bdb-967c7a7c163f" containerName="manager" Nov 25 08:15:00 crc kubenswrapper[5043]: I1125 08:15:00.171112 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="01b420eb-b6d7-4534-9bdb-967c7a7c163f" containerName="manager" Nov 25 08:15:00 crc kubenswrapper[5043]: E1125 08:15:00.171130 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01b420eb-b6d7-4534-9bdb-967c7a7c163f" containerName="kube-rbac-proxy" Nov 25 08:15:00 crc kubenswrapper[5043]: I1125 08:15:00.171141 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="01b420eb-b6d7-4534-9bdb-967c7a7c163f" containerName="kube-rbac-proxy" Nov 25 08:15:00 crc kubenswrapper[5043]: I1125 08:15:00.171380 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="01b420eb-b6d7-4534-9bdb-967c7a7c163f" containerName="manager" Nov 25 08:15:00 crc kubenswrapper[5043]: I1125 08:15:00.171403 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="01b420eb-b6d7-4534-9bdb-967c7a7c163f" 
containerName="kube-rbac-proxy" Nov 25 08:15:00 crc kubenswrapper[5043]: I1125 08:15:00.172177 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-g6rws" Nov 25 08:15:00 crc kubenswrapper[5043]: I1125 08:15:00.180208 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 08:15:00 crc kubenswrapper[5043]: I1125 08:15:00.180380 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 08:15:00 crc kubenswrapper[5043]: I1125 08:15:00.183255 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400975-g6rws"] Nov 25 08:15:00 crc kubenswrapper[5043]: I1125 08:15:00.229959 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eecb0dbf-d941-45ee-8783-ba3ce9b2e32e-secret-volume\") pod \"collect-profiles-29400975-g6rws\" (UID: \"eecb0dbf-d941-45ee-8783-ba3ce9b2e32e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-g6rws" Nov 25 08:15:00 crc kubenswrapper[5043]: I1125 08:15:00.230028 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7kvt\" (UniqueName: \"kubernetes.io/projected/eecb0dbf-d941-45ee-8783-ba3ce9b2e32e-kube-api-access-r7kvt\") pod \"collect-profiles-29400975-g6rws\" (UID: \"eecb0dbf-d941-45ee-8783-ba3ce9b2e32e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-g6rws" Nov 25 08:15:00 crc kubenswrapper[5043]: I1125 08:15:00.230228 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/eecb0dbf-d941-45ee-8783-ba3ce9b2e32e-config-volume\") pod \"collect-profiles-29400975-g6rws\" (UID: \"eecb0dbf-d941-45ee-8783-ba3ce9b2e32e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-g6rws" Nov 25 08:15:00 crc kubenswrapper[5043]: I1125 08:15:00.331821 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eecb0dbf-d941-45ee-8783-ba3ce9b2e32e-config-volume\") pod \"collect-profiles-29400975-g6rws\" (UID: \"eecb0dbf-d941-45ee-8783-ba3ce9b2e32e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-g6rws" Nov 25 08:15:00 crc kubenswrapper[5043]: I1125 08:15:00.332179 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eecb0dbf-d941-45ee-8783-ba3ce9b2e32e-secret-volume\") pod \"collect-profiles-29400975-g6rws\" (UID: \"eecb0dbf-d941-45ee-8783-ba3ce9b2e32e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-g6rws" Nov 25 08:15:00 crc kubenswrapper[5043]: I1125 08:15:00.332258 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7kvt\" (UniqueName: \"kubernetes.io/projected/eecb0dbf-d941-45ee-8783-ba3ce9b2e32e-kube-api-access-r7kvt\") pod \"collect-profiles-29400975-g6rws\" (UID: \"eecb0dbf-d941-45ee-8783-ba3ce9b2e32e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-g6rws" Nov 25 08:15:00 crc kubenswrapper[5043]: I1125 08:15:00.333171 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eecb0dbf-d941-45ee-8783-ba3ce9b2e32e-config-volume\") pod \"collect-profiles-29400975-g6rws\" (UID: \"eecb0dbf-d941-45ee-8783-ba3ce9b2e32e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-g6rws" Nov 25 08:15:00 crc kubenswrapper[5043]: I1125 08:15:00.343233 5043 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eecb0dbf-d941-45ee-8783-ba3ce9b2e32e-secret-volume\") pod \"collect-profiles-29400975-g6rws\" (UID: \"eecb0dbf-d941-45ee-8783-ba3ce9b2e32e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-g6rws" Nov 25 08:15:00 crc kubenswrapper[5043]: I1125 08:15:00.347450 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7kvt\" (UniqueName: \"kubernetes.io/projected/eecb0dbf-d941-45ee-8783-ba3ce9b2e32e-kube-api-access-r7kvt\") pod \"collect-profiles-29400975-g6rws\" (UID: \"eecb0dbf-d941-45ee-8783-ba3ce9b2e32e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-g6rws" Nov 25 08:15:00 crc kubenswrapper[5043]: I1125 08:15:00.496784 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-g6rws" Nov 25 08:15:00 crc kubenswrapper[5043]: I1125 08:15:00.941398 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400975-g6rws"] Nov 25 08:15:01 crc kubenswrapper[5043]: I1125 08:15:01.010526 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-g6rws" event={"ID":"eecb0dbf-d941-45ee-8783-ba3ce9b2e32e","Type":"ContainerStarted","Data":"340c0024ea4dd6898d1e2bb98a3346218791d274c6a9de037eca04e8c39f84b2"} Nov 25 08:15:02 crc kubenswrapper[5043]: I1125 08:15:02.020963 5043 generic.go:334] "Generic (PLEG): container finished" podID="eecb0dbf-d941-45ee-8783-ba3ce9b2e32e" containerID="64f0a6c083cd5115f9121daab497cb058bcc4deddd3b71a6459c15004d14517d" exitCode=0 Nov 25 08:15:02 crc kubenswrapper[5043]: I1125 08:15:02.021058 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-g6rws" 
event={"ID":"eecb0dbf-d941-45ee-8783-ba3ce9b2e32e","Type":"ContainerDied","Data":"64f0a6c083cd5115f9121daab497cb058bcc4deddd3b71a6459c15004d14517d"} Nov 25 08:15:03 crc kubenswrapper[5043]: I1125 08:15:03.365442 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-g6rws" Nov 25 08:15:03 crc kubenswrapper[5043]: I1125 08:15:03.391545 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eecb0dbf-d941-45ee-8783-ba3ce9b2e32e-config-volume\") pod \"eecb0dbf-d941-45ee-8783-ba3ce9b2e32e\" (UID: \"eecb0dbf-d941-45ee-8783-ba3ce9b2e32e\") " Nov 25 08:15:03 crc kubenswrapper[5043]: I1125 08:15:03.391622 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7kvt\" (UniqueName: \"kubernetes.io/projected/eecb0dbf-d941-45ee-8783-ba3ce9b2e32e-kube-api-access-r7kvt\") pod \"eecb0dbf-d941-45ee-8783-ba3ce9b2e32e\" (UID: \"eecb0dbf-d941-45ee-8783-ba3ce9b2e32e\") " Nov 25 08:15:03 crc kubenswrapper[5043]: I1125 08:15:03.391643 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eecb0dbf-d941-45ee-8783-ba3ce9b2e32e-secret-volume\") pod \"eecb0dbf-d941-45ee-8783-ba3ce9b2e32e\" (UID: \"eecb0dbf-d941-45ee-8783-ba3ce9b2e32e\") " Nov 25 08:15:03 crc kubenswrapper[5043]: I1125 08:15:03.392359 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eecb0dbf-d941-45ee-8783-ba3ce9b2e32e-config-volume" (OuterVolumeSpecName: "config-volume") pod "eecb0dbf-d941-45ee-8783-ba3ce9b2e32e" (UID: "eecb0dbf-d941-45ee-8783-ba3ce9b2e32e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:15:03 crc kubenswrapper[5043]: I1125 08:15:03.397727 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eecb0dbf-d941-45ee-8783-ba3ce9b2e32e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eecb0dbf-d941-45ee-8783-ba3ce9b2e32e" (UID: "eecb0dbf-d941-45ee-8783-ba3ce9b2e32e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:15:03 crc kubenswrapper[5043]: I1125 08:15:03.400394 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eecb0dbf-d941-45ee-8783-ba3ce9b2e32e-kube-api-access-r7kvt" (OuterVolumeSpecName: "kube-api-access-r7kvt") pod "eecb0dbf-d941-45ee-8783-ba3ce9b2e32e" (UID: "eecb0dbf-d941-45ee-8783-ba3ce9b2e32e"). InnerVolumeSpecName "kube-api-access-r7kvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:15:03 crc kubenswrapper[5043]: I1125 08:15:03.493651 5043 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eecb0dbf-d941-45ee-8783-ba3ce9b2e32e-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 08:15:03 crc kubenswrapper[5043]: I1125 08:15:03.493686 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7kvt\" (UniqueName: \"kubernetes.io/projected/eecb0dbf-d941-45ee-8783-ba3ce9b2e32e-kube-api-access-r7kvt\") on node \"crc\" DevicePath \"\"" Nov 25 08:15:03 crc kubenswrapper[5043]: I1125 08:15:03.493697 5043 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eecb0dbf-d941-45ee-8783-ba3ce9b2e32e-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 08:15:04 crc kubenswrapper[5043]: I1125 08:15:04.040126 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-g6rws" 
event={"ID":"eecb0dbf-d941-45ee-8783-ba3ce9b2e32e","Type":"ContainerDied","Data":"340c0024ea4dd6898d1e2bb98a3346218791d274c6a9de037eca04e8c39f84b2"} Nov 25 08:15:04 crc kubenswrapper[5043]: I1125 08:15:04.040504 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="340c0024ea4dd6898d1e2bb98a3346218791d274c6a9de037eca04e8c39f84b2" Nov 25 08:15:04 crc kubenswrapper[5043]: I1125 08:15:04.040183 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400975-g6rws" Nov 25 08:15:04 crc kubenswrapper[5043]: I1125 08:15:04.452231 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400930-9plc7"] Nov 25 08:15:04 crc kubenswrapper[5043]: I1125 08:15:04.462512 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400930-9plc7"] Nov 25 08:15:04 crc kubenswrapper[5043]: I1125 08:15:04.974427 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31bedd18-64d9-4295-a081-c458536855b0" path="/var/lib/kubelet/pods/31bedd18-64d9-4295-a081-c458536855b0/volumes" Nov 25 08:15:17 crc kubenswrapper[5043]: I1125 08:15:17.276275 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:15:17 crc kubenswrapper[5043]: I1125 08:15:17.276857 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:15:47 crc 
kubenswrapper[5043]: I1125 08:15:47.276370 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:15:47 crc kubenswrapper[5043]: I1125 08:15:47.277368 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:15:47 crc kubenswrapper[5043]: I1125 08:15:47.277434 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 08:15:47 crc kubenswrapper[5043]: I1125 08:15:47.278375 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf"} pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:15:47 crc kubenswrapper[5043]: I1125 08:15:47.278448 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" gracePeriod=600 Nov 25 08:15:47 crc kubenswrapper[5043]: E1125 08:15:47.405511 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:15:47 crc kubenswrapper[5043]: I1125 08:15:47.449058 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" exitCode=0 Nov 25 08:15:47 crc kubenswrapper[5043]: I1125 08:15:47.449116 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf"} Nov 25 08:15:47 crc kubenswrapper[5043]: I1125 08:15:47.449162 5043 scope.go:117] "RemoveContainer" containerID="18343daeda377d7ada554d8d6723fd48b9d69c9bb0127c1bc82ab8875fb2058e" Nov 25 08:15:47 crc kubenswrapper[5043]: I1125 08:15:47.450123 5043 scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:15:47 crc kubenswrapper[5043]: E1125 08:15:47.450468 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:15:52 crc kubenswrapper[5043]: I1125 08:15:52.821959 5043 scope.go:117] "RemoveContainer" containerID="8d1a52a9331fbe4ac0b618c502eef7712bebf7db0c8ee7edd606f98ea6047545" Nov 25 08:16:00 crc kubenswrapper[5043]: I1125 08:16:00.962737 5043 scope.go:117] "RemoveContainer" 
containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:16:00 crc kubenswrapper[5043]: E1125 08:16:00.963497 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.284422 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s00-full"] Nov 25 08:16:06 crc kubenswrapper[5043]: E1125 08:16:06.285381 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eecb0dbf-d941-45ee-8783-ba3ce9b2e32e" containerName="collect-profiles" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.285394 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="eecb0dbf-d941-45ee-8783-ba3ce9b2e32e" containerName="collect-profiles" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.285569 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="eecb0dbf-d941-45ee-8783-ba3ce9b2e32e" containerName="collect-profiles" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.286266 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.289450 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.289679 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.289825 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.289968 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-2tk7t" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.300634 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-full"] Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.356106 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6515f5fe-fd1f-4786-8374-8af7b394831b-openstack-config\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.356158 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6515f5fe-fd1f-4786-8374-8af7b394831b-config-data\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.356253 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/6515f5fe-fd1f-4786-8374-8af7b394831b-openstack-config-secret\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.458327 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6515f5fe-fd1f-4786-8374-8af7b394831b-openstack-config-secret\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.458365 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6515f5fe-fd1f-4786-8374-8af7b394831b-ssh-key\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.458411 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6515f5fe-fd1f-4786-8374-8af7b394831b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.458464 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6515f5fe-fd1f-4786-8374-8af7b394831b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.458489 5043 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6515f5fe-fd1f-4786-8374-8af7b394831b-ca-certs\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.458646 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.458678 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxwrw\" (UniqueName: \"kubernetes.io/projected/6515f5fe-fd1f-4786-8374-8af7b394831b-kube-api-access-nxwrw\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.458735 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6515f5fe-fd1f-4786-8374-8af7b394831b-openstack-config\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.458762 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6515f5fe-fd1f-4786-8374-8af7b394831b-config-data\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.458804 5043 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6515f5fe-fd1f-4786-8374-8af7b394831b-ceph\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.459540 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6515f5fe-fd1f-4786-8374-8af7b394831b-openstack-config\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.459878 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6515f5fe-fd1f-4786-8374-8af7b394831b-config-data\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.467157 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6515f5fe-fd1f-4786-8374-8af7b394831b-openstack-config-secret\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.560598 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6515f5fe-fd1f-4786-8374-8af7b394831b-ssh-key\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.560688 5043 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6515f5fe-fd1f-4786-8374-8af7b394831b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.560745 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6515f5fe-fd1f-4786-8374-8af7b394831b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.560772 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6515f5fe-fd1f-4786-8374-8af7b394831b-ca-certs\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.560841 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.560861 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxwrw\" (UniqueName: \"kubernetes.io/projected/6515f5fe-fd1f-4786-8374-8af7b394831b-kube-api-access-nxwrw\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.560929 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6515f5fe-fd1f-4786-8374-8af7b394831b-ceph\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.561479 5043 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.562098 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6515f5fe-fd1f-4786-8374-8af7b394831b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.565264 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6515f5fe-fd1f-4786-8374-8af7b394831b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.566263 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6515f5fe-fd1f-4786-8374-8af7b394831b-ceph\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.582549 5043 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6515f5fe-fd1f-4786-8374-8af7b394831b-ssh-key\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.582841 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6515f5fe-fd1f-4786-8374-8af7b394831b-ca-certs\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.593479 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxwrw\" (UniqueName: \"kubernetes.io/projected/6515f5fe-fd1f-4786-8374-8af7b394831b-kube-api-access-nxwrw\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.599429 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:06 crc kubenswrapper[5043]: I1125 08:16:06.606628 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Nov 25 08:16:07 crc kubenswrapper[5043]: I1125 08:16:07.148473 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-full"] Nov 25 08:16:07 crc kubenswrapper[5043]: I1125 08:16:07.640044 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"6515f5fe-fd1f-4786-8374-8af7b394831b","Type":"ContainerStarted","Data":"f1882dafe9f985ed07a09d6a7656e259f7269b73b1fb09e38994b7277a5d93a4"} Nov 25 08:16:13 crc kubenswrapper[5043]: I1125 08:16:13.963431 5043 scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:16:13 crc kubenswrapper[5043]: E1125 08:16:13.964286 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:16:26 crc kubenswrapper[5043]: I1125 08:16:26.969102 5043 scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:16:26 crc kubenswrapper[5043]: E1125 08:16:26.969866 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:16:38 crc kubenswrapper[5043]: E1125 08:16:38.320345 5043 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 25 08:16:38 crc kubenswrapper[5043]: E1125 08:16:38.321166 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tem
pest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceph,ReadOnly:true,MountPath:/etc/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nxwrw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest-s00-full_openstack(6515f5fe-fd1f-4786-8374-8af7b394831b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 08:16:38 crc kubenswrapper[5043]: E1125 08:16:38.322421 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest-s00-full" podUID="6515f5fe-fd1f-4786-8374-8af7b394831b" 
Nov 25 08:16:38 crc kubenswrapper[5043]: E1125 08:16:38.947303 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest-s00-full" podUID="6515f5fe-fd1f-4786-8374-8af7b394831b" Nov 25 08:16:38 crc kubenswrapper[5043]: I1125 08:16:38.963730 5043 scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:16:38 crc kubenswrapper[5043]: E1125 08:16:38.964038 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:16:50 crc kubenswrapper[5043]: I1125 08:16:50.964434 5043 scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:16:50 crc kubenswrapper[5043]: E1125 08:16:50.965405 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:16:50 crc kubenswrapper[5043]: I1125 08:16:50.965779 5043 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:16:51 crc kubenswrapper[5043]: I1125 08:16:51.818546 5043 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 25 08:16:53 crc kubenswrapper[5043]: I1125 08:16:53.083681 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"6515f5fe-fd1f-4786-8374-8af7b394831b","Type":"ContainerStarted","Data":"efa09f80e8127de221ce9786abd45bde5663f0d3cebb88b5d8f391910882cc7e"} Nov 25 08:16:53 crc kubenswrapper[5043]: I1125 08:16:53.106174 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s00-full" podStartSLOduration=3.44361786 podStartE2EDuration="48.106134665s" podCreationTimestamp="2025-11-25 08:16:05 +0000 UTC" firstStartedPulling="2025-11-25 08:16:07.152806115 +0000 UTC m=+3631.321001836" lastFinishedPulling="2025-11-25 08:16:51.81532288 +0000 UTC m=+3675.983518641" observedRunningTime="2025-11-25 08:16:53.105117507 +0000 UTC m=+3677.273313238" watchObservedRunningTime="2025-11-25 08:16:53.106134665 +0000 UTC m=+3677.274330386" Nov 25 08:17:01 crc kubenswrapper[5043]: I1125 08:17:01.964133 5043 scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:17:01 crc kubenswrapper[5043]: E1125 08:17:01.966729 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:17:12 crc kubenswrapper[5043]: I1125 08:17:12.963422 5043 scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:17:12 crc kubenswrapper[5043]: E1125 08:17:12.964703 5043 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:17:25 crc kubenswrapper[5043]: I1125 08:17:25.963350 5043 scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:17:25 crc kubenswrapper[5043]: E1125 08:17:25.964176 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:17:37 crc kubenswrapper[5043]: I1125 08:17:37.963189 5043 scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:17:37 crc kubenswrapper[5043]: E1125 08:17:37.964062 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:17:49 crc kubenswrapper[5043]: I1125 08:17:49.963856 5043 scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:17:49 crc kubenswrapper[5043]: E1125 08:17:49.964970 5043 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:17:52 crc kubenswrapper[5043]: I1125 08:17:52.898810 5043 scope.go:117] "RemoveContainer" containerID="4617dea3d893382e60d87f26af135fc92cce510d637c65acd3fb14b0e75afe7b" Nov 25 08:17:52 crc kubenswrapper[5043]: I1125 08:17:52.927543 5043 scope.go:117] "RemoveContainer" containerID="5b8209863989722abcaaf44b83afe4b1a73908404bf5f3d43ed9e09d3f3e263b" Nov 25 08:17:52 crc kubenswrapper[5043]: I1125 08:17:52.982870 5043 scope.go:117] "RemoveContainer" containerID="848247863c9c1144f746b74cace5644ccdb51fe0f8de505b713c20f990de52a7" Nov 25 08:18:01 crc kubenswrapper[5043]: I1125 08:18:01.963335 5043 scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:18:01 crc kubenswrapper[5043]: E1125 08:18:01.965657 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:18:14 crc kubenswrapper[5043]: I1125 08:18:14.963998 5043 scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:18:14 crc kubenswrapper[5043]: E1125 08:18:14.964781 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:18:19 crc kubenswrapper[5043]: I1125 08:18:19.771164 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="a22a0679-f2ea-46b8-88f5-d010717699d1" containerName="galera" probeResult="failure" output="command timed out" Nov 25 08:18:19 crc kubenswrapper[5043]: I1125 08:18:19.772491 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="a22a0679-f2ea-46b8-88f5-d010717699d1" containerName="galera" probeResult="failure" output="command timed out" Nov 25 08:18:26 crc kubenswrapper[5043]: I1125 08:18:26.973989 5043 scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:18:26 crc kubenswrapper[5043]: E1125 08:18:26.975788 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:18:37 crc kubenswrapper[5043]: I1125 08:18:37.964068 5043 scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:18:37 crc kubenswrapper[5043]: E1125 08:18:37.964923 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:18:52 crc kubenswrapper[5043]: I1125 08:18:52.969073 5043 scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:18:52 crc kubenswrapper[5043]: E1125 08:18:52.970164 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:19:03 crc kubenswrapper[5043]: I1125 08:19:03.962799 5043 scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:19:03 crc kubenswrapper[5043]: E1125 08:19:03.963548 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:19:18 crc kubenswrapper[5043]: I1125 08:19:18.962541 5043 scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:19:18 crc kubenswrapper[5043]: E1125 08:19:18.963334 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:19:31 crc kubenswrapper[5043]: I1125 08:19:31.963426 5043 scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:19:31 crc kubenswrapper[5043]: E1125 08:19:31.964274 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:19:37 crc kubenswrapper[5043]: I1125 08:19:37.038389 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r2glw"] Nov 25 08:19:37 crc kubenswrapper[5043]: I1125 08:19:37.042302 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r2glw" Nov 25 08:19:37 crc kubenswrapper[5043]: I1125 08:19:37.057867 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r2glw"] Nov 25 08:19:37 crc kubenswrapper[5043]: I1125 08:19:37.132043 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fa808df-d102-4208-a962-8033764f3e82-utilities\") pod \"certified-operators-r2glw\" (UID: \"9fa808df-d102-4208-a962-8033764f3e82\") " pod="openshift-marketplace/certified-operators-r2glw" Nov 25 08:19:37 crc kubenswrapper[5043]: I1125 08:19:37.132145 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w698\" (UniqueName: \"kubernetes.io/projected/9fa808df-d102-4208-a962-8033764f3e82-kube-api-access-2w698\") pod \"certified-operators-r2glw\" (UID: \"9fa808df-d102-4208-a962-8033764f3e82\") " pod="openshift-marketplace/certified-operators-r2glw" Nov 25 08:19:37 crc kubenswrapper[5043]: I1125 08:19:37.132240 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fa808df-d102-4208-a962-8033764f3e82-catalog-content\") pod \"certified-operators-r2glw\" (UID: \"9fa808df-d102-4208-a962-8033764f3e82\") " pod="openshift-marketplace/certified-operators-r2glw" Nov 25 08:19:37 crc kubenswrapper[5043]: I1125 08:19:37.234644 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fa808df-d102-4208-a962-8033764f3e82-utilities\") pod \"certified-operators-r2glw\" (UID: \"9fa808df-d102-4208-a962-8033764f3e82\") " pod="openshift-marketplace/certified-operators-r2glw" Nov 25 08:19:37 crc kubenswrapper[5043]: I1125 08:19:37.234726 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2w698\" (UniqueName: \"kubernetes.io/projected/9fa808df-d102-4208-a962-8033764f3e82-kube-api-access-2w698\") pod \"certified-operators-r2glw\" (UID: \"9fa808df-d102-4208-a962-8033764f3e82\") " pod="openshift-marketplace/certified-operators-r2glw" Nov 25 08:19:37 crc kubenswrapper[5043]: I1125 08:19:37.234772 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fa808df-d102-4208-a962-8033764f3e82-catalog-content\") pod \"certified-operators-r2glw\" (UID: \"9fa808df-d102-4208-a962-8033764f3e82\") " pod="openshift-marketplace/certified-operators-r2glw" Nov 25 08:19:37 crc kubenswrapper[5043]: I1125 08:19:37.235205 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fa808df-d102-4208-a962-8033764f3e82-utilities\") pod \"certified-operators-r2glw\" (UID: \"9fa808df-d102-4208-a962-8033764f3e82\") " pod="openshift-marketplace/certified-operators-r2glw" Nov 25 08:19:37 crc kubenswrapper[5043]: I1125 08:19:37.235377 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fa808df-d102-4208-a962-8033764f3e82-catalog-content\") pod \"certified-operators-r2glw\" (UID: \"9fa808df-d102-4208-a962-8033764f3e82\") " pod="openshift-marketplace/certified-operators-r2glw" Nov 25 08:19:37 crc kubenswrapper[5043]: I1125 08:19:37.253739 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w698\" (UniqueName: \"kubernetes.io/projected/9fa808df-d102-4208-a962-8033764f3e82-kube-api-access-2w698\") pod \"certified-operators-r2glw\" (UID: \"9fa808df-d102-4208-a962-8033764f3e82\") " pod="openshift-marketplace/certified-operators-r2glw" Nov 25 08:19:37 crc kubenswrapper[5043]: I1125 08:19:37.373956 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r2glw" Nov 25 08:19:37 crc kubenswrapper[5043]: I1125 08:19:37.936187 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r2glw"] Nov 25 08:19:38 crc kubenswrapper[5043]: I1125 08:19:38.622550 5043 generic.go:334] "Generic (PLEG): container finished" podID="9fa808df-d102-4208-a962-8033764f3e82" containerID="133d2a1a1a9adc7261a2cc0255a4c44222f0a9297ec194684caa3cda36d93519" exitCode=0 Nov 25 08:19:38 crc kubenswrapper[5043]: I1125 08:19:38.622632 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2glw" event={"ID":"9fa808df-d102-4208-a962-8033764f3e82","Type":"ContainerDied","Data":"133d2a1a1a9adc7261a2cc0255a4c44222f0a9297ec194684caa3cda36d93519"} Nov 25 08:19:38 crc kubenswrapper[5043]: I1125 08:19:38.622909 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2glw" event={"ID":"9fa808df-d102-4208-a962-8033764f3e82","Type":"ContainerStarted","Data":"c89e5061527b0e565aee88c307fb32dab2ebebeb25fa33e51be5fd75acb74511"} Nov 25 08:19:41 crc kubenswrapper[5043]: I1125 08:19:41.654682 5043 generic.go:334] "Generic (PLEG): container finished" podID="9fa808df-d102-4208-a962-8033764f3e82" containerID="7c7c1f468429c24b09b0f4648ce6e455893d706ab3341c76d4ea3c3c742f19f2" exitCode=0 Nov 25 08:19:41 crc kubenswrapper[5043]: I1125 08:19:41.654943 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2glw" event={"ID":"9fa808df-d102-4208-a962-8033764f3e82","Type":"ContainerDied","Data":"7c7c1f468429c24b09b0f4648ce6e455893d706ab3341c76d4ea3c3c742f19f2"} Nov 25 08:19:43 crc kubenswrapper[5043]: I1125 08:19:43.962976 5043 scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:19:43 crc kubenswrapper[5043]: E1125 08:19:43.963974 5043 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:19:44 crc kubenswrapper[5043]: I1125 08:19:44.680730 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2glw" event={"ID":"9fa808df-d102-4208-a962-8033764f3e82","Type":"ContainerStarted","Data":"63b56b22c91d40eee2cf36daa39391bae8727fe1502c83e4f12a0807afd74f92"} Nov 25 08:19:44 crc kubenswrapper[5043]: I1125 08:19:44.708002 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r2glw" podStartSLOduration=2.766123263 podStartE2EDuration="7.70798154s" podCreationTimestamp="2025-11-25 08:19:37 +0000 UTC" firstStartedPulling="2025-11-25 08:19:38.624282061 +0000 UTC m=+3842.792477792" lastFinishedPulling="2025-11-25 08:19:43.566140348 +0000 UTC m=+3847.734336069" observedRunningTime="2025-11-25 08:19:44.696595224 +0000 UTC m=+3848.864790955" watchObservedRunningTime="2025-11-25 08:19:44.70798154 +0000 UTC m=+3848.876177261" Nov 25 08:19:47 crc kubenswrapper[5043]: I1125 08:19:47.374823 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r2glw" Nov 25 08:19:47 crc kubenswrapper[5043]: I1125 08:19:47.375414 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r2glw" Nov 25 08:19:47 crc kubenswrapper[5043]: I1125 08:19:47.623535 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r2glw" Nov 25 08:19:54 crc kubenswrapper[5043]: I1125 08:19:54.962887 5043 
scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:19:54 crc kubenswrapper[5043]: E1125 08:19:54.963762 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:19:57 crc kubenswrapper[5043]: I1125 08:19:57.426816 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r2glw" Nov 25 08:19:57 crc kubenswrapper[5043]: I1125 08:19:57.469303 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r2glw"] Nov 25 08:19:57 crc kubenswrapper[5043]: I1125 08:19:57.824813 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r2glw" podUID="9fa808df-d102-4208-a962-8033764f3e82" containerName="registry-server" containerID="cri-o://63b56b22c91d40eee2cf36daa39391bae8727fe1502c83e4f12a0807afd74f92" gracePeriod=2 Nov 25 08:19:58 crc kubenswrapper[5043]: I1125 08:19:58.833488 5043 generic.go:334] "Generic (PLEG): container finished" podID="9fa808df-d102-4208-a962-8033764f3e82" containerID="63b56b22c91d40eee2cf36daa39391bae8727fe1502c83e4f12a0807afd74f92" exitCode=0 Nov 25 08:19:58 crc kubenswrapper[5043]: I1125 08:19:58.833683 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2glw" event={"ID":"9fa808df-d102-4208-a962-8033764f3e82","Type":"ContainerDied","Data":"63b56b22c91d40eee2cf36daa39391bae8727fe1502c83e4f12a0807afd74f92"} Nov 25 08:19:58 crc kubenswrapper[5043]: I1125 08:19:58.833878 5043 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2glw" event={"ID":"9fa808df-d102-4208-a962-8033764f3e82","Type":"ContainerDied","Data":"c89e5061527b0e565aee88c307fb32dab2ebebeb25fa33e51be5fd75acb74511"} Nov 25 08:19:58 crc kubenswrapper[5043]: I1125 08:19:58.833913 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c89e5061527b0e565aee88c307fb32dab2ebebeb25fa33e51be5fd75acb74511" Nov 25 08:19:58 crc kubenswrapper[5043]: I1125 08:19:58.884812 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r2glw" Nov 25 08:19:58 crc kubenswrapper[5043]: I1125 08:19:58.976044 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fa808df-d102-4208-a962-8033764f3e82-catalog-content\") pod \"9fa808df-d102-4208-a962-8033764f3e82\" (UID: \"9fa808df-d102-4208-a962-8033764f3e82\") " Nov 25 08:19:58 crc kubenswrapper[5043]: I1125 08:19:58.976331 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fa808df-d102-4208-a962-8033764f3e82-utilities\") pod \"9fa808df-d102-4208-a962-8033764f3e82\" (UID: \"9fa808df-d102-4208-a962-8033764f3e82\") " Nov 25 08:19:58 crc kubenswrapper[5043]: I1125 08:19:58.976405 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w698\" (UniqueName: \"kubernetes.io/projected/9fa808df-d102-4208-a962-8033764f3e82-kube-api-access-2w698\") pod \"9fa808df-d102-4208-a962-8033764f3e82\" (UID: \"9fa808df-d102-4208-a962-8033764f3e82\") " Nov 25 08:19:58 crc kubenswrapper[5043]: I1125 08:19:58.977474 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fa808df-d102-4208-a962-8033764f3e82-utilities" (OuterVolumeSpecName: "utilities") pod 
"9fa808df-d102-4208-a962-8033764f3e82" (UID: "9fa808df-d102-4208-a962-8033764f3e82"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:19:58 crc kubenswrapper[5043]: I1125 08:19:58.986919 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa808df-d102-4208-a962-8033764f3e82-kube-api-access-2w698" (OuterVolumeSpecName: "kube-api-access-2w698") pod "9fa808df-d102-4208-a962-8033764f3e82" (UID: "9fa808df-d102-4208-a962-8033764f3e82"). InnerVolumeSpecName "kube-api-access-2w698". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:19:59 crc kubenswrapper[5043]: I1125 08:19:59.037101 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fa808df-d102-4208-a962-8033764f3e82-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9fa808df-d102-4208-a962-8033764f3e82" (UID: "9fa808df-d102-4208-a962-8033764f3e82"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:19:59 crc kubenswrapper[5043]: I1125 08:19:59.078355 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fa808df-d102-4208-a962-8033764f3e82-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:19:59 crc kubenswrapper[5043]: I1125 08:19:59.078390 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w698\" (UniqueName: \"kubernetes.io/projected/9fa808df-d102-4208-a962-8033764f3e82-kube-api-access-2w698\") on node \"crc\" DevicePath \"\"" Nov 25 08:19:59 crc kubenswrapper[5043]: I1125 08:19:59.078403 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fa808df-d102-4208-a962-8033764f3e82-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:19:59 crc kubenswrapper[5043]: I1125 08:19:59.844286 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r2glw" Nov 25 08:19:59 crc kubenswrapper[5043]: I1125 08:19:59.891549 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r2glw"] Nov 25 08:19:59 crc kubenswrapper[5043]: I1125 08:19:59.903211 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r2glw"] Nov 25 08:20:00 crc kubenswrapper[5043]: I1125 08:20:00.975668 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fa808df-d102-4208-a962-8033764f3e82" path="/var/lib/kubelet/pods/9fa808df-d102-4208-a962-8033764f3e82/volumes" Nov 25 08:20:09 crc kubenswrapper[5043]: I1125 08:20:09.963515 5043 scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:20:09 crc kubenswrapper[5043]: E1125 08:20:09.964443 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:20:19 crc kubenswrapper[5043]: I1125 08:20:19.046961 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-3f80-account-create-rmltq"] Nov 25 08:20:19 crc kubenswrapper[5043]: I1125 08:20:19.065060 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-lbw85"] Nov 25 08:20:19 crc kubenswrapper[5043]: I1125 08:20:19.072967 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-3f80-account-create-rmltq"] Nov 25 08:20:19 crc kubenswrapper[5043]: I1125 08:20:19.081357 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-lbw85"] Nov 25 08:20:20 crc kubenswrapper[5043]: I1125 08:20:20.972131 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b9196a7-33c6-4492-957a-ed5aa71eceb8" path="/var/lib/kubelet/pods/3b9196a7-33c6-4492-957a-ed5aa71eceb8/volumes" Nov 25 08:20:20 crc kubenswrapper[5043]: I1125 08:20:20.973071 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e062dd66-19a1-44b9-834b-ff85c094ab5f" path="/var/lib/kubelet/pods/e062dd66-19a1-44b9-834b-ff85c094ab5f/volumes" Nov 25 08:20:21 crc kubenswrapper[5043]: I1125 08:20:21.962702 5043 scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:20:21 crc kubenswrapper[5043]: E1125 08:20:21.963582 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:20:32 crc kubenswrapper[5043]: I1125 08:20:32.963321 5043 scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:20:32 crc kubenswrapper[5043]: E1125 08:20:32.964112 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:20:44 crc kubenswrapper[5043]: I1125 08:20:44.963218 5043 scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:20:44 crc kubenswrapper[5043]: E1125 08:20:44.964556 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:20:53 crc kubenswrapper[5043]: I1125 08:20:53.070859 5043 scope.go:117] "RemoveContainer" containerID="01f1f894a7c0c68d47415b4c0373e55cb49380fe759c4b79a83d1268394425b8" Nov 25 08:20:53 crc kubenswrapper[5043]: I1125 08:20:53.101446 5043 scope.go:117] "RemoveContainer" containerID="7326d1a74a041b5e7812f8e35b51c1b790172b5d8faa18205d6f6ab485fa11d6" Nov 25 08:20:59 crc kubenswrapper[5043]: I1125 08:20:59.962914 5043 
scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:21:00 crc kubenswrapper[5043]: I1125 08:21:00.405508 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"d5b83ad8dad799d66ce8f61512185c4db8a13cb3f9a3988fdc71acc001aeca18"} Nov 25 08:21:13 crc kubenswrapper[5043]: I1125 08:21:13.058732 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-mrbzh"] Nov 25 08:21:13 crc kubenswrapper[5043]: I1125 08:21:13.068379 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-mrbzh"] Nov 25 08:21:14 crc kubenswrapper[5043]: I1125 08:21:14.976830 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8971f2a4-499c-4de4-b3f8-aebadd052ef7" path="/var/lib/kubelet/pods/8971f2a4-499c-4de4-b3f8-aebadd052ef7/volumes" Nov 25 08:21:53 crc kubenswrapper[5043]: I1125 08:21:53.191101 5043 scope.go:117] "RemoveContainer" containerID="4b6eb96901903644313e5644df2c78bdf81680fb3c9a0e1282481d35c1223d64" Nov 25 08:21:55 crc kubenswrapper[5043]: I1125 08:21:55.417954 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p2dcm"] Nov 25 08:21:55 crc kubenswrapper[5043]: E1125 08:21:55.420676 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa808df-d102-4208-a962-8033764f3e82" containerName="extract-content" Nov 25 08:21:55 crc kubenswrapper[5043]: I1125 08:21:55.420700 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa808df-d102-4208-a962-8033764f3e82" containerName="extract-content" Nov 25 08:21:55 crc kubenswrapper[5043]: E1125 08:21:55.420717 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa808df-d102-4208-a962-8033764f3e82" containerName="extract-utilities" Nov 25 08:21:55 crc 
kubenswrapper[5043]: I1125 08:21:55.420723 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa808df-d102-4208-a962-8033764f3e82" containerName="extract-utilities" Nov 25 08:21:55 crc kubenswrapper[5043]: E1125 08:21:55.420731 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa808df-d102-4208-a962-8033764f3e82" containerName="registry-server" Nov 25 08:21:55 crc kubenswrapper[5043]: I1125 08:21:55.420739 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa808df-d102-4208-a962-8033764f3e82" containerName="registry-server" Nov 25 08:21:55 crc kubenswrapper[5043]: I1125 08:21:55.420987 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa808df-d102-4208-a962-8033764f3e82" containerName="registry-server" Nov 25 08:21:55 crc kubenswrapper[5043]: I1125 08:21:55.422295 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p2dcm" Nov 25 08:21:55 crc kubenswrapper[5043]: I1125 08:21:55.459510 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p2dcm"] Nov 25 08:21:55 crc kubenswrapper[5043]: I1125 08:21:55.557840 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4169c67e-ab73-44ea-8c71-7f3bf8834fa0-catalog-content\") pod \"redhat-operators-p2dcm\" (UID: \"4169c67e-ab73-44ea-8c71-7f3bf8834fa0\") " pod="openshift-marketplace/redhat-operators-p2dcm" Nov 25 08:21:55 crc kubenswrapper[5043]: I1125 08:21:55.558308 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4169c67e-ab73-44ea-8c71-7f3bf8834fa0-utilities\") pod \"redhat-operators-p2dcm\" (UID: \"4169c67e-ab73-44ea-8c71-7f3bf8834fa0\") " pod="openshift-marketplace/redhat-operators-p2dcm" Nov 25 08:21:55 crc kubenswrapper[5043]: I1125 
08:21:55.558783 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdl42\" (UniqueName: \"kubernetes.io/projected/4169c67e-ab73-44ea-8c71-7f3bf8834fa0-kube-api-access-qdl42\") pod \"redhat-operators-p2dcm\" (UID: \"4169c67e-ab73-44ea-8c71-7f3bf8834fa0\") " pod="openshift-marketplace/redhat-operators-p2dcm" Nov 25 08:21:55 crc kubenswrapper[5043]: I1125 08:21:55.660935 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4169c67e-ab73-44ea-8c71-7f3bf8834fa0-catalog-content\") pod \"redhat-operators-p2dcm\" (UID: \"4169c67e-ab73-44ea-8c71-7f3bf8834fa0\") " pod="openshift-marketplace/redhat-operators-p2dcm" Nov 25 08:21:55 crc kubenswrapper[5043]: I1125 08:21:55.660977 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4169c67e-ab73-44ea-8c71-7f3bf8834fa0-utilities\") pod \"redhat-operators-p2dcm\" (UID: \"4169c67e-ab73-44ea-8c71-7f3bf8834fa0\") " pod="openshift-marketplace/redhat-operators-p2dcm" Nov 25 08:21:55 crc kubenswrapper[5043]: I1125 08:21:55.661054 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdl42\" (UniqueName: \"kubernetes.io/projected/4169c67e-ab73-44ea-8c71-7f3bf8834fa0-kube-api-access-qdl42\") pod \"redhat-operators-p2dcm\" (UID: \"4169c67e-ab73-44ea-8c71-7f3bf8834fa0\") " pod="openshift-marketplace/redhat-operators-p2dcm" Nov 25 08:21:55 crc kubenswrapper[5043]: I1125 08:21:55.661390 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4169c67e-ab73-44ea-8c71-7f3bf8834fa0-catalog-content\") pod \"redhat-operators-p2dcm\" (UID: \"4169c67e-ab73-44ea-8c71-7f3bf8834fa0\") " pod="openshift-marketplace/redhat-operators-p2dcm" Nov 25 08:21:55 crc kubenswrapper[5043]: I1125 08:21:55.661797 5043 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4169c67e-ab73-44ea-8c71-7f3bf8834fa0-utilities\") pod \"redhat-operators-p2dcm\" (UID: \"4169c67e-ab73-44ea-8c71-7f3bf8834fa0\") " pod="openshift-marketplace/redhat-operators-p2dcm" Nov 25 08:21:55 crc kubenswrapper[5043]: I1125 08:21:55.689410 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdl42\" (UniqueName: \"kubernetes.io/projected/4169c67e-ab73-44ea-8c71-7f3bf8834fa0-kube-api-access-qdl42\") pod \"redhat-operators-p2dcm\" (UID: \"4169c67e-ab73-44ea-8c71-7f3bf8834fa0\") " pod="openshift-marketplace/redhat-operators-p2dcm" Nov 25 08:21:55 crc kubenswrapper[5043]: I1125 08:21:55.752004 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p2dcm" Nov 25 08:21:56 crc kubenswrapper[5043]: W1125 08:21:56.310928 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4169c67e_ab73_44ea_8c71_7f3bf8834fa0.slice/crio-e4fd899069384231a05447e9bd177fbd84458185a59a929b86aefc2d3b43f0b0 WatchSource:0}: Error finding container e4fd899069384231a05447e9bd177fbd84458185a59a929b86aefc2d3b43f0b0: Status 404 returned error can't find the container with id e4fd899069384231a05447e9bd177fbd84458185a59a929b86aefc2d3b43f0b0 Nov 25 08:21:56 crc kubenswrapper[5043]: I1125 08:21:56.317684 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p2dcm"] Nov 25 08:21:56 crc kubenswrapper[5043]: I1125 08:21:56.418411 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qvg2c"] Nov 25 08:21:56 crc kubenswrapper[5043]: I1125 08:21:56.420619 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qvg2c" Nov 25 08:21:56 crc kubenswrapper[5043]: I1125 08:21:56.448116 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qvg2c"] Nov 25 08:21:56 crc kubenswrapper[5043]: I1125 08:21:56.495066 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca237b7-ff78-4c72-a551-23add94f7132-catalog-content\") pod \"community-operators-qvg2c\" (UID: \"6ca237b7-ff78-4c72-a551-23add94f7132\") " pod="openshift-marketplace/community-operators-qvg2c" Nov 25 08:21:56 crc kubenswrapper[5043]: I1125 08:21:56.495149 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca237b7-ff78-4c72-a551-23add94f7132-utilities\") pod \"community-operators-qvg2c\" (UID: \"6ca237b7-ff78-4c72-a551-23add94f7132\") " pod="openshift-marketplace/community-operators-qvg2c" Nov 25 08:21:56 crc kubenswrapper[5043]: I1125 08:21:56.495209 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r2qc\" (UniqueName: \"kubernetes.io/projected/6ca237b7-ff78-4c72-a551-23add94f7132-kube-api-access-7r2qc\") pod \"community-operators-qvg2c\" (UID: \"6ca237b7-ff78-4c72-a551-23add94f7132\") " pod="openshift-marketplace/community-operators-qvg2c" Nov 25 08:21:56 crc kubenswrapper[5043]: I1125 08:21:56.596728 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca237b7-ff78-4c72-a551-23add94f7132-utilities\") pod \"community-operators-qvg2c\" (UID: \"6ca237b7-ff78-4c72-a551-23add94f7132\") " pod="openshift-marketplace/community-operators-qvg2c" Nov 25 08:21:56 crc kubenswrapper[5043]: I1125 08:21:56.597014 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7r2qc\" (UniqueName: \"kubernetes.io/projected/6ca237b7-ff78-4c72-a551-23add94f7132-kube-api-access-7r2qc\") pod \"community-operators-qvg2c\" (UID: \"6ca237b7-ff78-4c72-a551-23add94f7132\") " pod="openshift-marketplace/community-operators-qvg2c" Nov 25 08:21:56 crc kubenswrapper[5043]: I1125 08:21:56.597256 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca237b7-ff78-4c72-a551-23add94f7132-catalog-content\") pod \"community-operators-qvg2c\" (UID: \"6ca237b7-ff78-4c72-a551-23add94f7132\") " pod="openshift-marketplace/community-operators-qvg2c" Nov 25 08:21:56 crc kubenswrapper[5043]: I1125 08:21:56.597280 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca237b7-ff78-4c72-a551-23add94f7132-utilities\") pod \"community-operators-qvg2c\" (UID: \"6ca237b7-ff78-4c72-a551-23add94f7132\") " pod="openshift-marketplace/community-operators-qvg2c" Nov 25 08:21:56 crc kubenswrapper[5043]: I1125 08:21:56.597735 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca237b7-ff78-4c72-a551-23add94f7132-catalog-content\") pod \"community-operators-qvg2c\" (UID: \"6ca237b7-ff78-4c72-a551-23add94f7132\") " pod="openshift-marketplace/community-operators-qvg2c" Nov 25 08:21:56 crc kubenswrapper[5043]: I1125 08:21:56.617386 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r2qc\" (UniqueName: \"kubernetes.io/projected/6ca237b7-ff78-4c72-a551-23add94f7132-kube-api-access-7r2qc\") pod \"community-operators-qvg2c\" (UID: \"6ca237b7-ff78-4c72-a551-23add94f7132\") " pod="openshift-marketplace/community-operators-qvg2c" Nov 25 08:21:56 crc kubenswrapper[5043]: I1125 08:21:56.738388 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qvg2c" Nov 25 08:21:57 crc kubenswrapper[5043]: I1125 08:21:57.052709 5043 generic.go:334] "Generic (PLEG): container finished" podID="4169c67e-ab73-44ea-8c71-7f3bf8834fa0" containerID="c618cbbd821286f3569b85c34b5c0b4c85b308f304f9ded24e68ee5890813cda" exitCode=0 Nov 25 08:21:57 crc kubenswrapper[5043]: I1125 08:21:57.052934 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2dcm" event={"ID":"4169c67e-ab73-44ea-8c71-7f3bf8834fa0","Type":"ContainerDied","Data":"c618cbbd821286f3569b85c34b5c0b4c85b308f304f9ded24e68ee5890813cda"} Nov 25 08:21:57 crc kubenswrapper[5043]: I1125 08:21:57.052958 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2dcm" event={"ID":"4169c67e-ab73-44ea-8c71-7f3bf8834fa0","Type":"ContainerStarted","Data":"e4fd899069384231a05447e9bd177fbd84458185a59a929b86aefc2d3b43f0b0"} Nov 25 08:21:57 crc kubenswrapper[5043]: I1125 08:21:57.060097 5043 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:21:57 crc kubenswrapper[5043]: I1125 08:21:57.303050 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qvg2c"] Nov 25 08:21:58 crc kubenswrapper[5043]: I1125 08:21:58.074405 5043 generic.go:334] "Generic (PLEG): container finished" podID="6ca237b7-ff78-4c72-a551-23add94f7132" containerID="0122177ffe418b8ea77498440a46fff7cabe814c8291a487e769ec93593a3301" exitCode=0 Nov 25 08:21:58 crc kubenswrapper[5043]: I1125 08:21:58.074690 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvg2c" event={"ID":"6ca237b7-ff78-4c72-a551-23add94f7132","Type":"ContainerDied","Data":"0122177ffe418b8ea77498440a46fff7cabe814c8291a487e769ec93593a3301"} Nov 25 08:21:58 crc kubenswrapper[5043]: I1125 08:21:58.074726 5043 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-qvg2c" event={"ID":"6ca237b7-ff78-4c72-a551-23add94f7132","Type":"ContainerStarted","Data":"9314055eb949cb1e631e4ba517220a3d5fcfaed1d83414ee10e3cfe69a205587"} Nov 25 08:21:59 crc kubenswrapper[5043]: I1125 08:21:59.091896 5043 generic.go:334] "Generic (PLEG): container finished" podID="4169c67e-ab73-44ea-8c71-7f3bf8834fa0" containerID="c3fe535d58157ae41223178b1de522b9bc78f4379812a86e1177f12114319a1b" exitCode=0 Nov 25 08:21:59 crc kubenswrapper[5043]: I1125 08:21:59.092001 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2dcm" event={"ID":"4169c67e-ab73-44ea-8c71-7f3bf8834fa0","Type":"ContainerDied","Data":"c3fe535d58157ae41223178b1de522b9bc78f4379812a86e1177f12114319a1b"} Nov 25 08:22:00 crc kubenswrapper[5043]: I1125 08:22:00.106753 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2dcm" event={"ID":"4169c67e-ab73-44ea-8c71-7f3bf8834fa0","Type":"ContainerStarted","Data":"53b00dce396a9de0938a493e88f66135b60996be95f8b1858eef9285cde658f6"} Nov 25 08:22:00 crc kubenswrapper[5043]: I1125 08:22:00.109994 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvg2c" event={"ID":"6ca237b7-ff78-4c72-a551-23add94f7132","Type":"ContainerStarted","Data":"ac90080eef97d3d98d90e65e6c640d3802144e1db202cf2a3bf85266e7151d1f"} Nov 25 08:22:00 crc kubenswrapper[5043]: I1125 08:22:00.162145 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p2dcm" podStartSLOduration=2.587016086 podStartE2EDuration="5.162115727s" podCreationTimestamp="2025-11-25 08:21:55 +0000 UTC" firstStartedPulling="2025-11-25 08:21:57.059907235 +0000 UTC m=+3981.228102956" lastFinishedPulling="2025-11-25 08:21:59.635006876 +0000 UTC m=+3983.803202597" observedRunningTime="2025-11-25 08:22:00.136755806 +0000 UTC m=+3984.304951547" 
watchObservedRunningTime="2025-11-25 08:22:00.162115727 +0000 UTC m=+3984.330311438" Nov 25 08:22:04 crc kubenswrapper[5043]: I1125 08:22:04.158437 5043 generic.go:334] "Generic (PLEG): container finished" podID="6ca237b7-ff78-4c72-a551-23add94f7132" containerID="ac90080eef97d3d98d90e65e6c640d3802144e1db202cf2a3bf85266e7151d1f" exitCode=0 Nov 25 08:22:04 crc kubenswrapper[5043]: I1125 08:22:04.158660 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvg2c" event={"ID":"6ca237b7-ff78-4c72-a551-23add94f7132","Type":"ContainerDied","Data":"ac90080eef97d3d98d90e65e6c640d3802144e1db202cf2a3bf85266e7151d1f"} Nov 25 08:22:05 crc kubenswrapper[5043]: I1125 08:22:05.171023 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvg2c" event={"ID":"6ca237b7-ff78-4c72-a551-23add94f7132","Type":"ContainerStarted","Data":"00b1eb8ad6b9bc230298af88ba7e97b30fb9b42817efa021a5c97c849a510aeb"} Nov 25 08:22:05 crc kubenswrapper[5043]: I1125 08:22:05.192316 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qvg2c" podStartSLOduration=2.645450281 podStartE2EDuration="9.192294716s" podCreationTimestamp="2025-11-25 08:21:56 +0000 UTC" firstStartedPulling="2025-11-25 08:21:58.07777006 +0000 UTC m=+3982.245965781" lastFinishedPulling="2025-11-25 08:22:04.624614495 +0000 UTC m=+3988.792810216" observedRunningTime="2025-11-25 08:22:05.18944706 +0000 UTC m=+3989.357642781" watchObservedRunningTime="2025-11-25 08:22:05.192294716 +0000 UTC m=+3989.360490437" Nov 25 08:22:05 crc kubenswrapper[5043]: I1125 08:22:05.753161 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p2dcm" Nov 25 08:22:05 crc kubenswrapper[5043]: I1125 08:22:05.753231 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p2dcm" Nov 25 
08:22:06 crc kubenswrapper[5043]: I1125 08:22:06.741155 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qvg2c" Nov 25 08:22:06 crc kubenswrapper[5043]: I1125 08:22:06.741580 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qvg2c" Nov 25 08:22:06 crc kubenswrapper[5043]: I1125 08:22:06.803531 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p2dcm" podUID="4169c67e-ab73-44ea-8c71-7f3bf8834fa0" containerName="registry-server" probeResult="failure" output=< Nov 25 08:22:06 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s Nov 25 08:22:06 crc kubenswrapper[5043]: > Nov 25 08:22:07 crc kubenswrapper[5043]: I1125 08:22:07.790127 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-qvg2c" podUID="6ca237b7-ff78-4c72-a551-23add94f7132" containerName="registry-server" probeResult="failure" output=< Nov 25 08:22:07 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s Nov 25 08:22:07 crc kubenswrapper[5043]: > Nov 25 08:22:16 crc kubenswrapper[5043]: I1125 08:22:16.786097 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qvg2c" Nov 25 08:22:16 crc kubenswrapper[5043]: I1125 08:22:16.810067 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p2dcm" podUID="4169c67e-ab73-44ea-8c71-7f3bf8834fa0" containerName="registry-server" probeResult="failure" output=< Nov 25 08:22:16 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s Nov 25 08:22:16 crc kubenswrapper[5043]: > Nov 25 08:22:16 crc kubenswrapper[5043]: I1125 08:22:16.840443 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-qvg2c" Nov 25 08:22:17 crc kubenswrapper[5043]: I1125 08:22:17.026874 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qvg2c"] Nov 25 08:22:18 crc kubenswrapper[5043]: I1125 08:22:18.286060 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qvg2c" podUID="6ca237b7-ff78-4c72-a551-23add94f7132" containerName="registry-server" containerID="cri-o://00b1eb8ad6b9bc230298af88ba7e97b30fb9b42817efa021a5c97c849a510aeb" gracePeriod=2 Nov 25 08:22:18 crc kubenswrapper[5043]: I1125 08:22:18.982342 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qvg2c" Nov 25 08:22:19 crc kubenswrapper[5043]: I1125 08:22:19.068974 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca237b7-ff78-4c72-a551-23add94f7132-catalog-content\") pod \"6ca237b7-ff78-4c72-a551-23add94f7132\" (UID: \"6ca237b7-ff78-4c72-a551-23add94f7132\") " Nov 25 08:22:19 crc kubenswrapper[5043]: I1125 08:22:19.069153 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7r2qc\" (UniqueName: \"kubernetes.io/projected/6ca237b7-ff78-4c72-a551-23add94f7132-kube-api-access-7r2qc\") pod \"6ca237b7-ff78-4c72-a551-23add94f7132\" (UID: \"6ca237b7-ff78-4c72-a551-23add94f7132\") " Nov 25 08:22:19 crc kubenswrapper[5043]: I1125 08:22:19.069326 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca237b7-ff78-4c72-a551-23add94f7132-utilities\") pod \"6ca237b7-ff78-4c72-a551-23add94f7132\" (UID: \"6ca237b7-ff78-4c72-a551-23add94f7132\") " Nov 25 08:22:19 crc kubenswrapper[5043]: I1125 08:22:19.301861 5043 generic.go:334] "Generic (PLEG): container finished" 
podID="6ca237b7-ff78-4c72-a551-23add94f7132" containerID="00b1eb8ad6b9bc230298af88ba7e97b30fb9b42817efa021a5c97c849a510aeb" exitCode=0 Nov 25 08:22:19 crc kubenswrapper[5043]: I1125 08:22:19.301909 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvg2c" event={"ID":"6ca237b7-ff78-4c72-a551-23add94f7132","Type":"ContainerDied","Data":"00b1eb8ad6b9bc230298af88ba7e97b30fb9b42817efa021a5c97c849a510aeb"} Nov 25 08:22:19 crc kubenswrapper[5043]: I1125 08:22:19.301938 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvg2c" event={"ID":"6ca237b7-ff78-4c72-a551-23add94f7132","Type":"ContainerDied","Data":"9314055eb949cb1e631e4ba517220a3d5fcfaed1d83414ee10e3cfe69a205587"} Nov 25 08:22:19 crc kubenswrapper[5043]: I1125 08:22:19.301957 5043 scope.go:117] "RemoveContainer" containerID="00b1eb8ad6b9bc230298af88ba7e97b30fb9b42817efa021a5c97c849a510aeb" Nov 25 08:22:19 crc kubenswrapper[5043]: I1125 08:22:19.301955 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qvg2c" Nov 25 08:22:19 crc kubenswrapper[5043]: I1125 08:22:19.311103 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ca237b7-ff78-4c72-a551-23add94f7132-utilities" (OuterVolumeSpecName: "utilities") pod "6ca237b7-ff78-4c72-a551-23add94f7132" (UID: "6ca237b7-ff78-4c72-a551-23add94f7132"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:22:19 crc kubenswrapper[5043]: I1125 08:22:19.313100 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ca237b7-ff78-4c72-a551-23add94f7132-kube-api-access-7r2qc" (OuterVolumeSpecName: "kube-api-access-7r2qc") pod "6ca237b7-ff78-4c72-a551-23add94f7132" (UID: "6ca237b7-ff78-4c72-a551-23add94f7132"). InnerVolumeSpecName "kube-api-access-7r2qc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:22:19 crc kubenswrapper[5043]: I1125 08:22:19.376463 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7r2qc\" (UniqueName: \"kubernetes.io/projected/6ca237b7-ff78-4c72-a551-23add94f7132-kube-api-access-7r2qc\") on node \"crc\" DevicePath \"\"" Nov 25 08:22:19 crc kubenswrapper[5043]: I1125 08:22:19.376828 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ca237b7-ff78-4c72-a551-23add94f7132-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:22:19 crc kubenswrapper[5043]: I1125 08:22:19.385778 5043 scope.go:117] "RemoveContainer" containerID="ac90080eef97d3d98d90e65e6c640d3802144e1db202cf2a3bf85266e7151d1f" Nov 25 08:22:19 crc kubenswrapper[5043]: I1125 08:22:19.412771 5043 scope.go:117] "RemoveContainer" containerID="0122177ffe418b8ea77498440a46fff7cabe814c8291a487e769ec93593a3301" Nov 25 08:22:19 crc kubenswrapper[5043]: I1125 08:22:19.458574 5043 scope.go:117] "RemoveContainer" containerID="00b1eb8ad6b9bc230298af88ba7e97b30fb9b42817efa021a5c97c849a510aeb" Nov 25 08:22:19 crc kubenswrapper[5043]: E1125 08:22:19.459238 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00b1eb8ad6b9bc230298af88ba7e97b30fb9b42817efa021a5c97c849a510aeb\": container with ID starting with 00b1eb8ad6b9bc230298af88ba7e97b30fb9b42817efa021a5c97c849a510aeb not found: ID does not exist" containerID="00b1eb8ad6b9bc230298af88ba7e97b30fb9b42817efa021a5c97c849a510aeb" Nov 25 08:22:19 crc kubenswrapper[5043]: I1125 08:22:19.459280 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00b1eb8ad6b9bc230298af88ba7e97b30fb9b42817efa021a5c97c849a510aeb"} err="failed to get container status \"00b1eb8ad6b9bc230298af88ba7e97b30fb9b42817efa021a5c97c849a510aeb\": rpc error: code = NotFound desc = could not find container 
\"00b1eb8ad6b9bc230298af88ba7e97b30fb9b42817efa021a5c97c849a510aeb\": container with ID starting with 00b1eb8ad6b9bc230298af88ba7e97b30fb9b42817efa021a5c97c849a510aeb not found: ID does not exist" Nov 25 08:22:19 crc kubenswrapper[5043]: I1125 08:22:19.459310 5043 scope.go:117] "RemoveContainer" containerID="ac90080eef97d3d98d90e65e6c640d3802144e1db202cf2a3bf85266e7151d1f" Nov 25 08:22:19 crc kubenswrapper[5043]: E1125 08:22:19.459683 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac90080eef97d3d98d90e65e6c640d3802144e1db202cf2a3bf85266e7151d1f\": container with ID starting with ac90080eef97d3d98d90e65e6c640d3802144e1db202cf2a3bf85266e7151d1f not found: ID does not exist" containerID="ac90080eef97d3d98d90e65e6c640d3802144e1db202cf2a3bf85266e7151d1f" Nov 25 08:22:19 crc kubenswrapper[5043]: I1125 08:22:19.459731 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac90080eef97d3d98d90e65e6c640d3802144e1db202cf2a3bf85266e7151d1f"} err="failed to get container status \"ac90080eef97d3d98d90e65e6c640d3802144e1db202cf2a3bf85266e7151d1f\": rpc error: code = NotFound desc = could not find container \"ac90080eef97d3d98d90e65e6c640d3802144e1db202cf2a3bf85266e7151d1f\": container with ID starting with ac90080eef97d3d98d90e65e6c640d3802144e1db202cf2a3bf85266e7151d1f not found: ID does not exist" Nov 25 08:22:19 crc kubenswrapper[5043]: I1125 08:22:19.459759 5043 scope.go:117] "RemoveContainer" containerID="0122177ffe418b8ea77498440a46fff7cabe814c8291a487e769ec93593a3301" Nov 25 08:22:19 crc kubenswrapper[5043]: E1125 08:22:19.460238 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0122177ffe418b8ea77498440a46fff7cabe814c8291a487e769ec93593a3301\": container with ID starting with 0122177ffe418b8ea77498440a46fff7cabe814c8291a487e769ec93593a3301 not found: ID does not exist" 
containerID="0122177ffe418b8ea77498440a46fff7cabe814c8291a487e769ec93593a3301" Nov 25 08:22:19 crc kubenswrapper[5043]: I1125 08:22:19.460262 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0122177ffe418b8ea77498440a46fff7cabe814c8291a487e769ec93593a3301"} err="failed to get container status \"0122177ffe418b8ea77498440a46fff7cabe814c8291a487e769ec93593a3301\": rpc error: code = NotFound desc = could not find container \"0122177ffe418b8ea77498440a46fff7cabe814c8291a487e769ec93593a3301\": container with ID starting with 0122177ffe418b8ea77498440a46fff7cabe814c8291a487e769ec93593a3301 not found: ID does not exist" Nov 25 08:22:19 crc kubenswrapper[5043]: I1125 08:22:19.472531 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ca237b7-ff78-4c72-a551-23add94f7132-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ca237b7-ff78-4c72-a551-23add94f7132" (UID: "6ca237b7-ff78-4c72-a551-23add94f7132"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:22:19 crc kubenswrapper[5043]: I1125 08:22:19.479146 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ca237b7-ff78-4c72-a551-23add94f7132-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:22:19 crc kubenswrapper[5043]: I1125 08:22:19.635225 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qvg2c"] Nov 25 08:22:19 crc kubenswrapper[5043]: I1125 08:22:19.645021 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qvg2c"] Nov 25 08:22:20 crc kubenswrapper[5043]: I1125 08:22:20.976018 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ca237b7-ff78-4c72-a551-23add94f7132" path="/var/lib/kubelet/pods/6ca237b7-ff78-4c72-a551-23add94f7132/volumes" Nov 25 08:22:26 crc kubenswrapper[5043]: I1125 08:22:26.802995 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p2dcm" podUID="4169c67e-ab73-44ea-8c71-7f3bf8834fa0" containerName="registry-server" probeResult="failure" output=< Nov 25 08:22:26 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s Nov 25 08:22:26 crc kubenswrapper[5043]: > Nov 25 08:22:36 crc kubenswrapper[5043]: I1125 08:22:36.812205 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p2dcm" podUID="4169c67e-ab73-44ea-8c71-7f3bf8834fa0" containerName="registry-server" probeResult="failure" output=< Nov 25 08:22:36 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s Nov 25 08:22:36 crc kubenswrapper[5043]: > Nov 25 08:22:46 crc kubenswrapper[5043]: I1125 08:22:46.806018 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p2dcm" podUID="4169c67e-ab73-44ea-8c71-7f3bf8834fa0" 
containerName="registry-server" probeResult="failure" output=< Nov 25 08:22:46 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s Nov 25 08:22:46 crc kubenswrapper[5043]: > Nov 25 08:22:56 crc kubenswrapper[5043]: I1125 08:22:56.798085 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p2dcm" podUID="4169c67e-ab73-44ea-8c71-7f3bf8834fa0" containerName="registry-server" probeResult="failure" output=< Nov 25 08:22:56 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s Nov 25 08:22:56 crc kubenswrapper[5043]: > Nov 25 08:23:05 crc kubenswrapper[5043]: I1125 08:23:05.814553 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p2dcm" Nov 25 08:23:05 crc kubenswrapper[5043]: I1125 08:23:05.875483 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p2dcm" Nov 25 08:23:06 crc kubenswrapper[5043]: I1125 08:23:06.051903 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p2dcm"] Nov 25 08:23:07 crc kubenswrapper[5043]: I1125 08:23:07.764984 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p2dcm" podUID="4169c67e-ab73-44ea-8c71-7f3bf8834fa0" containerName="registry-server" containerID="cri-o://53b00dce396a9de0938a493e88f66135b60996be95f8b1858eef9285cde658f6" gracePeriod=2 Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.356949 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p2dcm" Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.506416 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4169c67e-ab73-44ea-8c71-7f3bf8834fa0-catalog-content\") pod \"4169c67e-ab73-44ea-8c71-7f3bf8834fa0\" (UID: \"4169c67e-ab73-44ea-8c71-7f3bf8834fa0\") " Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.506546 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4169c67e-ab73-44ea-8c71-7f3bf8834fa0-utilities\") pod \"4169c67e-ab73-44ea-8c71-7f3bf8834fa0\" (UID: \"4169c67e-ab73-44ea-8c71-7f3bf8834fa0\") " Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.506730 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdl42\" (UniqueName: \"kubernetes.io/projected/4169c67e-ab73-44ea-8c71-7f3bf8834fa0-kube-api-access-qdl42\") pod \"4169c67e-ab73-44ea-8c71-7f3bf8834fa0\" (UID: \"4169c67e-ab73-44ea-8c71-7f3bf8834fa0\") " Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.507043 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4169c67e-ab73-44ea-8c71-7f3bf8834fa0-utilities" (OuterVolumeSpecName: "utilities") pod "4169c67e-ab73-44ea-8c71-7f3bf8834fa0" (UID: "4169c67e-ab73-44ea-8c71-7f3bf8834fa0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.507386 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4169c67e-ab73-44ea-8c71-7f3bf8834fa0-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.512931 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4169c67e-ab73-44ea-8c71-7f3bf8834fa0-kube-api-access-qdl42" (OuterVolumeSpecName: "kube-api-access-qdl42") pod "4169c67e-ab73-44ea-8c71-7f3bf8834fa0" (UID: "4169c67e-ab73-44ea-8c71-7f3bf8834fa0"). InnerVolumeSpecName "kube-api-access-qdl42". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.601649 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4169c67e-ab73-44ea-8c71-7f3bf8834fa0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4169c67e-ab73-44ea-8c71-7f3bf8834fa0" (UID: "4169c67e-ab73-44ea-8c71-7f3bf8834fa0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.609536 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdl42\" (UniqueName: \"kubernetes.io/projected/4169c67e-ab73-44ea-8c71-7f3bf8834fa0-kube-api-access-qdl42\") on node \"crc\" DevicePath \"\"" Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.609566 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4169c67e-ab73-44ea-8c71-7f3bf8834fa0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.776618 5043 generic.go:334] "Generic (PLEG): container finished" podID="4169c67e-ab73-44ea-8c71-7f3bf8834fa0" containerID="53b00dce396a9de0938a493e88f66135b60996be95f8b1858eef9285cde658f6" exitCode=0 Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.776691 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2dcm" event={"ID":"4169c67e-ab73-44ea-8c71-7f3bf8834fa0","Type":"ContainerDied","Data":"53b00dce396a9de0938a493e88f66135b60996be95f8b1858eef9285cde658f6"} Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.776739 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2dcm" event={"ID":"4169c67e-ab73-44ea-8c71-7f3bf8834fa0","Type":"ContainerDied","Data":"e4fd899069384231a05447e9bd177fbd84458185a59a929b86aefc2d3b43f0b0"} Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.776741 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p2dcm" Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.776762 5043 scope.go:117] "RemoveContainer" containerID="53b00dce396a9de0938a493e88f66135b60996be95f8b1858eef9285cde658f6" Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.815931 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p2dcm"] Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.819643 5043 scope.go:117] "RemoveContainer" containerID="c3fe535d58157ae41223178b1de522b9bc78f4379812a86e1177f12114319a1b" Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.825663 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p2dcm"] Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.860405 5043 scope.go:117] "RemoveContainer" containerID="c618cbbd821286f3569b85c34b5c0b4c85b308f304f9ded24e68ee5890813cda" Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.892939 5043 scope.go:117] "RemoveContainer" containerID="53b00dce396a9de0938a493e88f66135b60996be95f8b1858eef9285cde658f6" Nov 25 08:23:08 crc kubenswrapper[5043]: E1125 08:23:08.896097 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53b00dce396a9de0938a493e88f66135b60996be95f8b1858eef9285cde658f6\": container with ID starting with 53b00dce396a9de0938a493e88f66135b60996be95f8b1858eef9285cde658f6 not found: ID does not exist" containerID="53b00dce396a9de0938a493e88f66135b60996be95f8b1858eef9285cde658f6" Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.896141 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53b00dce396a9de0938a493e88f66135b60996be95f8b1858eef9285cde658f6"} err="failed to get container status \"53b00dce396a9de0938a493e88f66135b60996be95f8b1858eef9285cde658f6\": rpc error: code = NotFound desc = could not find container 
\"53b00dce396a9de0938a493e88f66135b60996be95f8b1858eef9285cde658f6\": container with ID starting with 53b00dce396a9de0938a493e88f66135b60996be95f8b1858eef9285cde658f6 not found: ID does not exist" Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.896169 5043 scope.go:117] "RemoveContainer" containerID="c3fe535d58157ae41223178b1de522b9bc78f4379812a86e1177f12114319a1b" Nov 25 08:23:08 crc kubenswrapper[5043]: E1125 08:23:08.897209 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3fe535d58157ae41223178b1de522b9bc78f4379812a86e1177f12114319a1b\": container with ID starting with c3fe535d58157ae41223178b1de522b9bc78f4379812a86e1177f12114319a1b not found: ID does not exist" containerID="c3fe535d58157ae41223178b1de522b9bc78f4379812a86e1177f12114319a1b" Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.897538 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3fe535d58157ae41223178b1de522b9bc78f4379812a86e1177f12114319a1b"} err="failed to get container status \"c3fe535d58157ae41223178b1de522b9bc78f4379812a86e1177f12114319a1b\": rpc error: code = NotFound desc = could not find container \"c3fe535d58157ae41223178b1de522b9bc78f4379812a86e1177f12114319a1b\": container with ID starting with c3fe535d58157ae41223178b1de522b9bc78f4379812a86e1177f12114319a1b not found: ID does not exist" Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.897587 5043 scope.go:117] "RemoveContainer" containerID="c618cbbd821286f3569b85c34b5c0b4c85b308f304f9ded24e68ee5890813cda" Nov 25 08:23:08 crc kubenswrapper[5043]: E1125 08:23:08.898515 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c618cbbd821286f3569b85c34b5c0b4c85b308f304f9ded24e68ee5890813cda\": container with ID starting with c618cbbd821286f3569b85c34b5c0b4c85b308f304f9ded24e68ee5890813cda not found: ID does not exist" 
containerID="c618cbbd821286f3569b85c34b5c0b4c85b308f304f9ded24e68ee5890813cda" Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.898555 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c618cbbd821286f3569b85c34b5c0b4c85b308f304f9ded24e68ee5890813cda"} err="failed to get container status \"c618cbbd821286f3569b85c34b5c0b4c85b308f304f9ded24e68ee5890813cda\": rpc error: code = NotFound desc = could not find container \"c618cbbd821286f3569b85c34b5c0b4c85b308f304f9ded24e68ee5890813cda\": container with ID starting with c618cbbd821286f3569b85c34b5c0b4c85b308f304f9ded24e68ee5890813cda not found: ID does not exist" Nov 25 08:23:08 crc kubenswrapper[5043]: I1125 08:23:08.974924 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4169c67e-ab73-44ea-8c71-7f3bf8834fa0" path="/var/lib/kubelet/pods/4169c67e-ab73-44ea-8c71-7f3bf8834fa0/volumes" Nov 25 08:23:17 crc kubenswrapper[5043]: I1125 08:23:17.276650 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:23:17 crc kubenswrapper[5043]: I1125 08:23:17.277164 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:23:47 crc kubenswrapper[5043]: I1125 08:23:47.276372 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Nov 25 08:23:47 crc kubenswrapper[5043]: I1125 08:23:47.276838 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:24:17 crc kubenswrapper[5043]: I1125 08:24:17.276973 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:24:17 crc kubenswrapper[5043]: I1125 08:24:17.277624 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:24:17 crc kubenswrapper[5043]: I1125 08:24:17.277683 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 08:24:17 crc kubenswrapper[5043]: I1125 08:24:17.278563 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d5b83ad8dad799d66ce8f61512185c4db8a13cb3f9a3988fdc71acc001aeca18"} pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:24:17 crc kubenswrapper[5043]: I1125 08:24:17.278640 5043 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://d5b83ad8dad799d66ce8f61512185c4db8a13cb3f9a3988fdc71acc001aeca18" gracePeriod=600 Nov 25 08:24:18 crc kubenswrapper[5043]: I1125 08:24:18.404572 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="d5b83ad8dad799d66ce8f61512185c4db8a13cb3f9a3988fdc71acc001aeca18" exitCode=0 Nov 25 08:24:18 crc kubenswrapper[5043]: I1125 08:24:18.404630 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"d5b83ad8dad799d66ce8f61512185c4db8a13cb3f9a3988fdc71acc001aeca18"} Nov 25 08:24:18 crc kubenswrapper[5043]: I1125 08:24:18.405092 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548"} Nov 25 08:24:18 crc kubenswrapper[5043]: I1125 08:24:18.405112 5043 scope.go:117] "RemoveContainer" containerID="c360e5720ce1afa14b3dae380302e22917623fa3003f3445642b19b5f24c16cf" Nov 25 08:25:53 crc kubenswrapper[5043]: I1125 08:25:53.356465 5043 scope.go:117] "RemoveContainer" containerID="7c7c1f468429c24b09b0f4648ce6e455893d706ab3341c76d4ea3c3c742f19f2" Nov 25 08:25:53 crc kubenswrapper[5043]: I1125 08:25:53.388053 5043 scope.go:117] "RemoveContainer" containerID="63b56b22c91d40eee2cf36daa39391bae8727fe1502c83e4f12a0807afd74f92" Nov 25 08:25:53 crc kubenswrapper[5043]: I1125 08:25:53.446931 5043 scope.go:117] "RemoveContainer" containerID="133d2a1a1a9adc7261a2cc0255a4c44222f0a9297ec194684caa3cda36d93519" Nov 25 08:26:17 crc kubenswrapper[5043]: I1125 08:26:17.276273 5043 patch_prober.go:28] interesting 
pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:26:17 crc kubenswrapper[5043]: I1125 08:26:17.276830 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:26:47 crc kubenswrapper[5043]: I1125 08:26:47.276711 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:26:47 crc kubenswrapper[5043]: I1125 08:26:47.277568 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:27:17 crc kubenswrapper[5043]: I1125 08:27:17.276281 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:27:17 crc kubenswrapper[5043]: I1125 08:27:17.276959 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:27:17 crc kubenswrapper[5043]: I1125 08:27:17.277022 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 08:27:17 crc kubenswrapper[5043]: I1125 08:27:17.277890 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548"} pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:27:17 crc kubenswrapper[5043]: I1125 08:27:17.277948 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" gracePeriod=600 Nov 25 08:27:17 crc kubenswrapper[5043]: E1125 08:27:17.412481 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 08:27:18.131936 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" exitCode=0 Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 
08:27:18.131980 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548"} Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 08:27:18.132012 5043 scope.go:117] "RemoveContainer" containerID="d5b83ad8dad799d66ce8f61512185c4db8a13cb3f9a3988fdc71acc001aeca18" Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 08:27:18.132765 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:27:18 crc kubenswrapper[5043]: E1125 08:27:18.133037 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 08:27:18.289995 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p7cgf"] Nov 25 08:27:18 crc kubenswrapper[5043]: E1125 08:27:18.290761 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4169c67e-ab73-44ea-8c71-7f3bf8834fa0" containerName="registry-server" Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 08:27:18.290775 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4169c67e-ab73-44ea-8c71-7f3bf8834fa0" containerName="registry-server" Nov 25 08:27:18 crc kubenswrapper[5043]: E1125 08:27:18.290788 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca237b7-ff78-4c72-a551-23add94f7132" containerName="extract-utilities" Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 08:27:18.290794 5043 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="6ca237b7-ff78-4c72-a551-23add94f7132" containerName="extract-utilities" Nov 25 08:27:18 crc kubenswrapper[5043]: E1125 08:27:18.290812 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4169c67e-ab73-44ea-8c71-7f3bf8834fa0" containerName="extract-content" Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 08:27:18.290818 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4169c67e-ab73-44ea-8c71-7f3bf8834fa0" containerName="extract-content" Nov 25 08:27:18 crc kubenswrapper[5043]: E1125 08:27:18.290828 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca237b7-ff78-4c72-a551-23add94f7132" containerName="registry-server" Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 08:27:18.290834 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca237b7-ff78-4c72-a551-23add94f7132" containerName="registry-server" Nov 25 08:27:18 crc kubenswrapper[5043]: E1125 08:27:18.290845 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4169c67e-ab73-44ea-8c71-7f3bf8834fa0" containerName="extract-utilities" Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 08:27:18.290851 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4169c67e-ab73-44ea-8c71-7f3bf8834fa0" containerName="extract-utilities" Nov 25 08:27:18 crc kubenswrapper[5043]: E1125 08:27:18.290860 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca237b7-ff78-4c72-a551-23add94f7132" containerName="extract-content" Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 08:27:18.290866 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca237b7-ff78-4c72-a551-23add94f7132" containerName="extract-content" Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 08:27:18.291061 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="4169c67e-ab73-44ea-8c71-7f3bf8834fa0" containerName="registry-server" Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 08:27:18.291075 5043 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6ca237b7-ff78-4c72-a551-23add94f7132" containerName="registry-server" Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 08:27:18.293693 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p7cgf" Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 08:27:18.305994 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p7cgf"] Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 08:27:18.448306 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjmwg\" (UniqueName: \"kubernetes.io/projected/3600a66d-7208-43bd-9f0a-4323d879d994-kube-api-access-mjmwg\") pod \"redhat-marketplace-p7cgf\" (UID: \"3600a66d-7208-43bd-9f0a-4323d879d994\") " pod="openshift-marketplace/redhat-marketplace-p7cgf" Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 08:27:18.448591 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3600a66d-7208-43bd-9f0a-4323d879d994-utilities\") pod \"redhat-marketplace-p7cgf\" (UID: \"3600a66d-7208-43bd-9f0a-4323d879d994\") " pod="openshift-marketplace/redhat-marketplace-p7cgf" Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 08:27:18.448849 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3600a66d-7208-43bd-9f0a-4323d879d994-catalog-content\") pod \"redhat-marketplace-p7cgf\" (UID: \"3600a66d-7208-43bd-9f0a-4323d879d994\") " pod="openshift-marketplace/redhat-marketplace-p7cgf" Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 08:27:18.550749 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3600a66d-7208-43bd-9f0a-4323d879d994-utilities\") pod \"redhat-marketplace-p7cgf\" 
(UID: \"3600a66d-7208-43bd-9f0a-4323d879d994\") " pod="openshift-marketplace/redhat-marketplace-p7cgf" Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 08:27:18.550829 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3600a66d-7208-43bd-9f0a-4323d879d994-catalog-content\") pod \"redhat-marketplace-p7cgf\" (UID: \"3600a66d-7208-43bd-9f0a-4323d879d994\") " pod="openshift-marketplace/redhat-marketplace-p7cgf" Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 08:27:18.550921 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjmwg\" (UniqueName: \"kubernetes.io/projected/3600a66d-7208-43bd-9f0a-4323d879d994-kube-api-access-mjmwg\") pod \"redhat-marketplace-p7cgf\" (UID: \"3600a66d-7208-43bd-9f0a-4323d879d994\") " pod="openshift-marketplace/redhat-marketplace-p7cgf" Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 08:27:18.551273 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3600a66d-7208-43bd-9f0a-4323d879d994-utilities\") pod \"redhat-marketplace-p7cgf\" (UID: \"3600a66d-7208-43bd-9f0a-4323d879d994\") " pod="openshift-marketplace/redhat-marketplace-p7cgf" Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 08:27:18.551316 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3600a66d-7208-43bd-9f0a-4323d879d994-catalog-content\") pod \"redhat-marketplace-p7cgf\" (UID: \"3600a66d-7208-43bd-9f0a-4323d879d994\") " pod="openshift-marketplace/redhat-marketplace-p7cgf" Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 08:27:18.571539 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjmwg\" (UniqueName: \"kubernetes.io/projected/3600a66d-7208-43bd-9f0a-4323d879d994-kube-api-access-mjmwg\") pod \"redhat-marketplace-p7cgf\" (UID: 
\"3600a66d-7208-43bd-9f0a-4323d879d994\") " pod="openshift-marketplace/redhat-marketplace-p7cgf" Nov 25 08:27:18 crc kubenswrapper[5043]: I1125 08:27:18.632893 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p7cgf" Nov 25 08:27:19 crc kubenswrapper[5043]: I1125 08:27:19.188513 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p7cgf"] Nov 25 08:27:20 crc kubenswrapper[5043]: I1125 08:27:20.150039 5043 generic.go:334] "Generic (PLEG): container finished" podID="3600a66d-7208-43bd-9f0a-4323d879d994" containerID="91c20ae2c7c5c2150aed2b6aeb1fb3ab38b8a7a255d51aa16464a735515027c1" exitCode=0 Nov 25 08:27:20 crc kubenswrapper[5043]: I1125 08:27:20.150190 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p7cgf" event={"ID":"3600a66d-7208-43bd-9f0a-4323d879d994","Type":"ContainerDied","Data":"91c20ae2c7c5c2150aed2b6aeb1fb3ab38b8a7a255d51aa16464a735515027c1"} Nov 25 08:27:20 crc kubenswrapper[5043]: I1125 08:27:20.150457 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p7cgf" event={"ID":"3600a66d-7208-43bd-9f0a-4323d879d994","Type":"ContainerStarted","Data":"4ee0a0fcb969fb619b5d0da46bd672c4387f3f9630c31c3a087a117e5767fef3"} Nov 25 08:27:20 crc kubenswrapper[5043]: I1125 08:27:20.151888 5043 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:27:22 crc kubenswrapper[5043]: I1125 08:27:22.175778 5043 generic.go:334] "Generic (PLEG): container finished" podID="3600a66d-7208-43bd-9f0a-4323d879d994" containerID="59e44a37adddb2d7827550e1f15981e88aa38fa15774a56bcdf2f8ff3fe2472d" exitCode=0 Nov 25 08:27:22 crc kubenswrapper[5043]: I1125 08:27:22.175857 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p7cgf" 
event={"ID":"3600a66d-7208-43bd-9f0a-4323d879d994","Type":"ContainerDied","Data":"59e44a37adddb2d7827550e1f15981e88aa38fa15774a56bcdf2f8ff3fe2472d"} Nov 25 08:27:24 crc kubenswrapper[5043]: I1125 08:27:24.194835 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p7cgf" event={"ID":"3600a66d-7208-43bd-9f0a-4323d879d994","Type":"ContainerStarted","Data":"2fd5de35cb1937aeb9f5e5007da606fcad2207ec9149dab44448420e07c9e21e"} Nov 25 08:27:24 crc kubenswrapper[5043]: I1125 08:27:24.222402 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p7cgf" podStartSLOduration=3.5382426970000003 podStartE2EDuration="6.222373697s" podCreationTimestamp="2025-11-25 08:27:18 +0000 UTC" firstStartedPulling="2025-11-25 08:27:20.151655515 +0000 UTC m=+4304.319851236" lastFinishedPulling="2025-11-25 08:27:22.835786515 +0000 UTC m=+4307.003982236" observedRunningTime="2025-11-25 08:27:24.216052316 +0000 UTC m=+4308.384248037" watchObservedRunningTime="2025-11-25 08:27:24.222373697 +0000 UTC m=+4308.390569418" Nov 25 08:27:28 crc kubenswrapper[5043]: I1125 08:27:28.633321 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p7cgf" Nov 25 08:27:28 crc kubenswrapper[5043]: I1125 08:27:28.633954 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p7cgf" Nov 25 08:27:28 crc kubenswrapper[5043]: I1125 08:27:28.695250 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p7cgf" Nov 25 08:27:29 crc kubenswrapper[5043]: I1125 08:27:29.298327 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p7cgf" Nov 25 08:27:30 crc kubenswrapper[5043]: I1125 08:27:30.278200 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-p7cgf"] Nov 25 08:27:31 crc kubenswrapper[5043]: I1125 08:27:31.258375 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p7cgf" podUID="3600a66d-7208-43bd-9f0a-4323d879d994" containerName="registry-server" containerID="cri-o://2fd5de35cb1937aeb9f5e5007da606fcad2207ec9149dab44448420e07c9e21e" gracePeriod=2 Nov 25 08:27:32 crc kubenswrapper[5043]: I1125 08:27:32.271257 5043 generic.go:334] "Generic (PLEG): container finished" podID="3600a66d-7208-43bd-9f0a-4323d879d994" containerID="2fd5de35cb1937aeb9f5e5007da606fcad2207ec9149dab44448420e07c9e21e" exitCode=0 Nov 25 08:27:32 crc kubenswrapper[5043]: I1125 08:27:32.271308 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p7cgf" event={"ID":"3600a66d-7208-43bd-9f0a-4323d879d994","Type":"ContainerDied","Data":"2fd5de35cb1937aeb9f5e5007da606fcad2207ec9149dab44448420e07c9e21e"} Nov 25 08:27:32 crc kubenswrapper[5043]: I1125 08:27:32.588510 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p7cgf" Nov 25 08:27:32 crc kubenswrapper[5043]: I1125 08:27:32.737689 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjmwg\" (UniqueName: \"kubernetes.io/projected/3600a66d-7208-43bd-9f0a-4323d879d994-kube-api-access-mjmwg\") pod \"3600a66d-7208-43bd-9f0a-4323d879d994\" (UID: \"3600a66d-7208-43bd-9f0a-4323d879d994\") " Nov 25 08:27:32 crc kubenswrapper[5043]: I1125 08:27:32.737859 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3600a66d-7208-43bd-9f0a-4323d879d994-utilities\") pod \"3600a66d-7208-43bd-9f0a-4323d879d994\" (UID: \"3600a66d-7208-43bd-9f0a-4323d879d994\") " Nov 25 08:27:32 crc kubenswrapper[5043]: I1125 08:27:32.737898 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3600a66d-7208-43bd-9f0a-4323d879d994-catalog-content\") pod \"3600a66d-7208-43bd-9f0a-4323d879d994\" (UID: \"3600a66d-7208-43bd-9f0a-4323d879d994\") " Nov 25 08:27:32 crc kubenswrapper[5043]: I1125 08:27:32.739545 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3600a66d-7208-43bd-9f0a-4323d879d994-utilities" (OuterVolumeSpecName: "utilities") pod "3600a66d-7208-43bd-9f0a-4323d879d994" (UID: "3600a66d-7208-43bd-9f0a-4323d879d994"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:27:32 crc kubenswrapper[5043]: I1125 08:27:32.757037 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3600a66d-7208-43bd-9f0a-4323d879d994-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3600a66d-7208-43bd-9f0a-4323d879d994" (UID: "3600a66d-7208-43bd-9f0a-4323d879d994"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:27:32 crc kubenswrapper[5043]: I1125 08:27:32.841680 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3600a66d-7208-43bd-9f0a-4323d879d994-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:32 crc kubenswrapper[5043]: I1125 08:27:32.841735 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3600a66d-7208-43bd-9f0a-4323d879d994-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:32 crc kubenswrapper[5043]: I1125 08:27:32.962975 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:27:32 crc kubenswrapper[5043]: E1125 08:27:32.963508 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:27:33 crc kubenswrapper[5043]: I1125 08:27:33.167423 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3600a66d-7208-43bd-9f0a-4323d879d994-kube-api-access-mjmwg" (OuterVolumeSpecName: "kube-api-access-mjmwg") pod "3600a66d-7208-43bd-9f0a-4323d879d994" (UID: "3600a66d-7208-43bd-9f0a-4323d879d994"). InnerVolumeSpecName "kube-api-access-mjmwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:27:33 crc kubenswrapper[5043]: I1125 08:27:33.250488 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjmwg\" (UniqueName: \"kubernetes.io/projected/3600a66d-7208-43bd-9f0a-4323d879d994-kube-api-access-mjmwg\") on node \"crc\" DevicePath \"\"" Nov 25 08:27:33 crc kubenswrapper[5043]: I1125 08:27:33.289883 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p7cgf" event={"ID":"3600a66d-7208-43bd-9f0a-4323d879d994","Type":"ContainerDied","Data":"4ee0a0fcb969fb619b5d0da46bd672c4387f3f9630c31c3a087a117e5767fef3"} Nov 25 08:27:33 crc kubenswrapper[5043]: I1125 08:27:33.289934 5043 scope.go:117] "RemoveContainer" containerID="2fd5de35cb1937aeb9f5e5007da606fcad2207ec9149dab44448420e07c9e21e" Nov 25 08:27:33 crc kubenswrapper[5043]: I1125 08:27:33.289979 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p7cgf" Nov 25 08:27:33 crc kubenswrapper[5043]: I1125 08:27:33.338051 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p7cgf"] Nov 25 08:27:33 crc kubenswrapper[5043]: I1125 08:27:33.344980 5043 scope.go:117] "RemoveContainer" containerID="59e44a37adddb2d7827550e1f15981e88aa38fa15774a56bcdf2f8ff3fe2472d" Nov 25 08:27:33 crc kubenswrapper[5043]: I1125 08:27:33.348362 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p7cgf"] Nov 25 08:27:33 crc kubenswrapper[5043]: I1125 08:27:33.367776 5043 scope.go:117] "RemoveContainer" containerID="91c20ae2c7c5c2150aed2b6aeb1fb3ab38b8a7a255d51aa16464a735515027c1" Nov 25 08:27:34 crc kubenswrapper[5043]: I1125 08:27:34.974260 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3600a66d-7208-43bd-9f0a-4323d879d994" path="/var/lib/kubelet/pods/3600a66d-7208-43bd-9f0a-4323d879d994/volumes" Nov 25 
08:27:46 crc kubenswrapper[5043]: I1125 08:27:46.970083 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:27:46 crc kubenswrapper[5043]: E1125 08:27:46.970985 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:28:00 crc kubenswrapper[5043]: I1125 08:28:00.964767 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:28:00 crc kubenswrapper[5043]: E1125 08:28:00.965595 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:28:13 crc kubenswrapper[5043]: I1125 08:28:13.963452 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:28:13 crc kubenswrapper[5043]: E1125 08:28:13.964297 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" 
podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:28:25 crc kubenswrapper[5043]: I1125 08:28:25.963000 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:28:25 crc kubenswrapper[5043]: E1125 08:28:25.963687 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:28:39 crc kubenswrapper[5043]: I1125 08:28:39.963552 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:28:39 crc kubenswrapper[5043]: E1125 08:28:39.964576 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:28:53 crc kubenswrapper[5043]: I1125 08:28:53.963241 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:28:53 crc kubenswrapper[5043]: E1125 08:28:53.963995 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:29:04 crc kubenswrapper[5043]: I1125 08:29:04.962513 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:29:04 crc kubenswrapper[5043]: E1125 08:29:04.964420 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:29:18 crc kubenswrapper[5043]: I1125 08:29:18.968499 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:29:18 crc kubenswrapper[5043]: E1125 08:29:18.970841 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:29:33 crc kubenswrapper[5043]: I1125 08:29:33.963284 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:29:33 crc kubenswrapper[5043]: E1125 08:29:33.964853 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:29:48 crc kubenswrapper[5043]: I1125 08:29:48.965978 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:29:48 crc kubenswrapper[5043]: E1125 08:29:48.966918 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:30:00 crc kubenswrapper[5043]: I1125 08:30:00.149644 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400990-r8rgv"] Nov 25 08:30:00 crc kubenswrapper[5043]: E1125 08:30:00.150697 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3600a66d-7208-43bd-9f0a-4323d879d994" containerName="extract-content" Nov 25 08:30:00 crc kubenswrapper[5043]: I1125 08:30:00.150717 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="3600a66d-7208-43bd-9f0a-4323d879d994" containerName="extract-content" Nov 25 08:30:00 crc kubenswrapper[5043]: E1125 08:30:00.150750 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3600a66d-7208-43bd-9f0a-4323d879d994" containerName="registry-server" Nov 25 08:30:00 crc kubenswrapper[5043]: I1125 08:30:00.150758 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="3600a66d-7208-43bd-9f0a-4323d879d994" containerName="registry-server" Nov 25 08:30:00 crc kubenswrapper[5043]: E1125 08:30:00.150782 5043 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="3600a66d-7208-43bd-9f0a-4323d879d994" containerName="extract-utilities" Nov 25 08:30:00 crc kubenswrapper[5043]: I1125 08:30:00.150790 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="3600a66d-7208-43bd-9f0a-4323d879d994" containerName="extract-utilities" Nov 25 08:30:00 crc kubenswrapper[5043]: I1125 08:30:00.150992 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="3600a66d-7208-43bd-9f0a-4323d879d994" containerName="registry-server" Nov 25 08:30:00 crc kubenswrapper[5043]: I1125 08:30:00.151709 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-r8rgv" Nov 25 08:30:00 crc kubenswrapper[5043]: I1125 08:30:00.154172 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 08:30:00 crc kubenswrapper[5043]: I1125 08:30:00.154655 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 08:30:00 crc kubenswrapper[5043]: I1125 08:30:00.158928 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400990-r8rgv"] Nov 25 08:30:00 crc kubenswrapper[5043]: I1125 08:30:00.276377 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f959cf7-6cf4-490b-aa23-68a958edd787-secret-volume\") pod \"collect-profiles-29400990-r8rgv\" (UID: \"8f959cf7-6cf4-490b-aa23-68a958edd787\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-r8rgv" Nov 25 08:30:00 crc kubenswrapper[5043]: I1125 08:30:00.276416 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/8f959cf7-6cf4-490b-aa23-68a958edd787-config-volume\") pod \"collect-profiles-29400990-r8rgv\" (UID: \"8f959cf7-6cf4-490b-aa23-68a958edd787\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-r8rgv" Nov 25 08:30:00 crc kubenswrapper[5043]: I1125 08:30:00.276828 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcxn9\" (UniqueName: \"kubernetes.io/projected/8f959cf7-6cf4-490b-aa23-68a958edd787-kube-api-access-bcxn9\") pod \"collect-profiles-29400990-r8rgv\" (UID: \"8f959cf7-6cf4-490b-aa23-68a958edd787\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-r8rgv" Nov 25 08:30:00 crc kubenswrapper[5043]: I1125 08:30:00.378720 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f959cf7-6cf4-490b-aa23-68a958edd787-secret-volume\") pod \"collect-profiles-29400990-r8rgv\" (UID: \"8f959cf7-6cf4-490b-aa23-68a958edd787\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-r8rgv" Nov 25 08:30:00 crc kubenswrapper[5043]: I1125 08:30:00.378773 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f959cf7-6cf4-490b-aa23-68a958edd787-config-volume\") pod \"collect-profiles-29400990-r8rgv\" (UID: \"8f959cf7-6cf4-490b-aa23-68a958edd787\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-r8rgv" Nov 25 08:30:00 crc kubenswrapper[5043]: I1125 08:30:00.378882 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcxn9\" (UniqueName: \"kubernetes.io/projected/8f959cf7-6cf4-490b-aa23-68a958edd787-kube-api-access-bcxn9\") pod \"collect-profiles-29400990-r8rgv\" (UID: \"8f959cf7-6cf4-490b-aa23-68a958edd787\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-r8rgv" Nov 25 08:30:00 crc 
kubenswrapper[5043]: I1125 08:30:00.379977 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f959cf7-6cf4-490b-aa23-68a958edd787-config-volume\") pod \"collect-profiles-29400990-r8rgv\" (UID: \"8f959cf7-6cf4-490b-aa23-68a958edd787\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-r8rgv" Nov 25 08:30:00 crc kubenswrapper[5043]: I1125 08:30:00.663369 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f959cf7-6cf4-490b-aa23-68a958edd787-secret-volume\") pod \"collect-profiles-29400990-r8rgv\" (UID: \"8f959cf7-6cf4-490b-aa23-68a958edd787\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-r8rgv" Nov 25 08:30:00 crc kubenswrapper[5043]: I1125 08:30:00.664272 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcxn9\" (UniqueName: \"kubernetes.io/projected/8f959cf7-6cf4-490b-aa23-68a958edd787-kube-api-access-bcxn9\") pod \"collect-profiles-29400990-r8rgv\" (UID: \"8f959cf7-6cf4-490b-aa23-68a958edd787\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-r8rgv" Nov 25 08:30:00 crc kubenswrapper[5043]: I1125 08:30:00.788020 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-r8rgv" Nov 25 08:30:01 crc kubenswrapper[5043]: I1125 08:30:01.267072 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400990-r8rgv"] Nov 25 08:30:01 crc kubenswrapper[5043]: W1125 08:30:01.290394 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f959cf7_6cf4_490b_aa23_68a958edd787.slice/crio-cc69542efa83341514e43a1d8cd9fae8608c21c7c4b3d3e5147cfcbf6cf071e8 WatchSource:0}: Error finding container cc69542efa83341514e43a1d8cd9fae8608c21c7c4b3d3e5147cfcbf6cf071e8: Status 404 returned error can't find the container with id cc69542efa83341514e43a1d8cd9fae8608c21c7c4b3d3e5147cfcbf6cf071e8 Nov 25 08:30:01 crc kubenswrapper[5043]: I1125 08:30:01.792008 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-r8rgv" event={"ID":"8f959cf7-6cf4-490b-aa23-68a958edd787","Type":"ContainerStarted","Data":"e310f168050c717dcec285e9c0ce1d9c43a570faf19f739c4813efb7dc1e255c"} Nov 25 08:30:01 crc kubenswrapper[5043]: I1125 08:30:01.792751 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-r8rgv" event={"ID":"8f959cf7-6cf4-490b-aa23-68a958edd787","Type":"ContainerStarted","Data":"cc69542efa83341514e43a1d8cd9fae8608c21c7c4b3d3e5147cfcbf6cf071e8"} Nov 25 08:30:01 crc kubenswrapper[5043]: I1125 08:30:01.821068 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-r8rgv" podStartSLOduration=1.821048685 podStartE2EDuration="1.821048685s" podCreationTimestamp="2025-11-25 08:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 
08:30:01.809239447 +0000 UTC m=+4465.977435158" watchObservedRunningTime="2025-11-25 08:30:01.821048685 +0000 UTC m=+4465.989244406" Nov 25 08:30:02 crc kubenswrapper[5043]: I1125 08:30:02.803456 5043 generic.go:334] "Generic (PLEG): container finished" podID="8f959cf7-6cf4-490b-aa23-68a958edd787" containerID="e310f168050c717dcec285e9c0ce1d9c43a570faf19f739c4813efb7dc1e255c" exitCode=0 Nov 25 08:30:02 crc kubenswrapper[5043]: I1125 08:30:02.803501 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-r8rgv" event={"ID":"8f959cf7-6cf4-490b-aa23-68a958edd787","Type":"ContainerDied","Data":"e310f168050c717dcec285e9c0ce1d9c43a570faf19f739c4813efb7dc1e255c"} Nov 25 08:30:02 crc kubenswrapper[5043]: I1125 08:30:02.963111 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:30:02 crc kubenswrapper[5043]: E1125 08:30:02.963849 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:30:04 crc kubenswrapper[5043]: I1125 08:30:04.420486 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-r8rgv" Nov 25 08:30:04 crc kubenswrapper[5043]: I1125 08:30:04.468836 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f959cf7-6cf4-490b-aa23-68a958edd787-config-volume\") pod \"8f959cf7-6cf4-490b-aa23-68a958edd787\" (UID: \"8f959cf7-6cf4-490b-aa23-68a958edd787\") " Nov 25 08:30:04 crc kubenswrapper[5043]: I1125 08:30:04.469017 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f959cf7-6cf4-490b-aa23-68a958edd787-secret-volume\") pod \"8f959cf7-6cf4-490b-aa23-68a958edd787\" (UID: \"8f959cf7-6cf4-490b-aa23-68a958edd787\") " Nov 25 08:30:04 crc kubenswrapper[5043]: I1125 08:30:04.469139 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcxn9\" (UniqueName: \"kubernetes.io/projected/8f959cf7-6cf4-490b-aa23-68a958edd787-kube-api-access-bcxn9\") pod \"8f959cf7-6cf4-490b-aa23-68a958edd787\" (UID: \"8f959cf7-6cf4-490b-aa23-68a958edd787\") " Nov 25 08:30:04 crc kubenswrapper[5043]: I1125 08:30:04.471287 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f959cf7-6cf4-490b-aa23-68a958edd787-config-volume" (OuterVolumeSpecName: "config-volume") pod "8f959cf7-6cf4-490b-aa23-68a958edd787" (UID: "8f959cf7-6cf4-490b-aa23-68a958edd787"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:30:04 crc kubenswrapper[5043]: I1125 08:30:04.475654 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f959cf7-6cf4-490b-aa23-68a958edd787-kube-api-access-bcxn9" (OuterVolumeSpecName: "kube-api-access-bcxn9") pod "8f959cf7-6cf4-490b-aa23-68a958edd787" (UID: "8f959cf7-6cf4-490b-aa23-68a958edd787"). 
InnerVolumeSpecName "kube-api-access-bcxn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:30:04 crc kubenswrapper[5043]: I1125 08:30:04.481714 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f959cf7-6cf4-490b-aa23-68a958edd787-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8f959cf7-6cf4-490b-aa23-68a958edd787" (UID: "8f959cf7-6cf4-490b-aa23-68a958edd787"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:30:04 crc kubenswrapper[5043]: I1125 08:30:04.571463 5043 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f959cf7-6cf4-490b-aa23-68a958edd787-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:04 crc kubenswrapper[5043]: I1125 08:30:04.571509 5043 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8f959cf7-6cf4-490b-aa23-68a958edd787-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:04 crc kubenswrapper[5043]: I1125 08:30:04.571523 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcxn9\" (UniqueName: \"kubernetes.io/projected/8f959cf7-6cf4-490b-aa23-68a958edd787-kube-api-access-bcxn9\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:04 crc kubenswrapper[5043]: I1125 08:30:04.823793 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-r8rgv" event={"ID":"8f959cf7-6cf4-490b-aa23-68a958edd787","Type":"ContainerDied","Data":"cc69542efa83341514e43a1d8cd9fae8608c21c7c4b3d3e5147cfcbf6cf071e8"} Nov 25 08:30:04 crc kubenswrapper[5043]: I1125 08:30:04.823832 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc69542efa83341514e43a1d8cd9fae8608c21c7c4b3d3e5147cfcbf6cf071e8" Nov 25 08:30:04 crc kubenswrapper[5043]: I1125 08:30:04.823971 5043 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400990-r8rgv" Nov 25 08:30:05 crc kubenswrapper[5043]: I1125 08:30:05.483665 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400945-vj2rk"] Nov 25 08:30:05 crc kubenswrapper[5043]: I1125 08:30:05.491690 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400945-vj2rk"] Nov 25 08:30:06 crc kubenswrapper[5043]: I1125 08:30:06.974149 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06731370-602e-4cbf-acaa-ea2b0f758443" path="/var/lib/kubelet/pods/06731370-602e-4cbf-acaa-ea2b0f758443/volumes" Nov 25 08:30:15 crc kubenswrapper[5043]: I1125 08:30:15.705558 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8g2jn"] Nov 25 08:30:15 crc kubenswrapper[5043]: E1125 08:30:15.707368 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f959cf7-6cf4-490b-aa23-68a958edd787" containerName="collect-profiles" Nov 25 08:30:15 crc kubenswrapper[5043]: I1125 08:30:15.707740 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f959cf7-6cf4-490b-aa23-68a958edd787" containerName="collect-profiles" Nov 25 08:30:15 crc kubenswrapper[5043]: I1125 08:30:15.708049 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f959cf7-6cf4-490b-aa23-68a958edd787" containerName="collect-profiles" Nov 25 08:30:15 crc kubenswrapper[5043]: I1125 08:30:15.709702 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8g2jn" Nov 25 08:30:15 crc kubenswrapper[5043]: I1125 08:30:15.720727 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8g2jn"] Nov 25 08:30:15 crc kubenswrapper[5043]: I1125 08:30:15.885687 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eca4ec55-572f-4b80-95a6-8a3a55322ad7-catalog-content\") pod \"certified-operators-8g2jn\" (UID: \"eca4ec55-572f-4b80-95a6-8a3a55322ad7\") " pod="openshift-marketplace/certified-operators-8g2jn" Nov 25 08:30:15 crc kubenswrapper[5043]: I1125 08:30:15.885787 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eca4ec55-572f-4b80-95a6-8a3a55322ad7-utilities\") pod \"certified-operators-8g2jn\" (UID: \"eca4ec55-572f-4b80-95a6-8a3a55322ad7\") " pod="openshift-marketplace/certified-operators-8g2jn" Nov 25 08:30:15 crc kubenswrapper[5043]: I1125 08:30:15.885838 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhkzt\" (UniqueName: \"kubernetes.io/projected/eca4ec55-572f-4b80-95a6-8a3a55322ad7-kube-api-access-jhkzt\") pod \"certified-operators-8g2jn\" (UID: \"eca4ec55-572f-4b80-95a6-8a3a55322ad7\") " pod="openshift-marketplace/certified-operators-8g2jn" Nov 25 08:30:15 crc kubenswrapper[5043]: I1125 08:30:15.987204 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eca4ec55-572f-4b80-95a6-8a3a55322ad7-catalog-content\") pod \"certified-operators-8g2jn\" (UID: \"eca4ec55-572f-4b80-95a6-8a3a55322ad7\") " pod="openshift-marketplace/certified-operators-8g2jn" Nov 25 08:30:15 crc kubenswrapper[5043]: I1125 08:30:15.987306 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eca4ec55-572f-4b80-95a6-8a3a55322ad7-utilities\") pod \"certified-operators-8g2jn\" (UID: \"eca4ec55-572f-4b80-95a6-8a3a55322ad7\") " pod="openshift-marketplace/certified-operators-8g2jn" Nov 25 08:30:15 crc kubenswrapper[5043]: I1125 08:30:15.987354 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhkzt\" (UniqueName: \"kubernetes.io/projected/eca4ec55-572f-4b80-95a6-8a3a55322ad7-kube-api-access-jhkzt\") pod \"certified-operators-8g2jn\" (UID: \"eca4ec55-572f-4b80-95a6-8a3a55322ad7\") " pod="openshift-marketplace/certified-operators-8g2jn" Nov 25 08:30:15 crc kubenswrapper[5043]: I1125 08:30:15.988113 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eca4ec55-572f-4b80-95a6-8a3a55322ad7-catalog-content\") pod \"certified-operators-8g2jn\" (UID: \"eca4ec55-572f-4b80-95a6-8a3a55322ad7\") " pod="openshift-marketplace/certified-operators-8g2jn" Nov 25 08:30:15 crc kubenswrapper[5043]: I1125 08:30:15.988141 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eca4ec55-572f-4b80-95a6-8a3a55322ad7-utilities\") pod \"certified-operators-8g2jn\" (UID: \"eca4ec55-572f-4b80-95a6-8a3a55322ad7\") " pod="openshift-marketplace/certified-operators-8g2jn" Nov 25 08:30:16 crc kubenswrapper[5043]: I1125 08:30:16.012641 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhkzt\" (UniqueName: \"kubernetes.io/projected/eca4ec55-572f-4b80-95a6-8a3a55322ad7-kube-api-access-jhkzt\") pod \"certified-operators-8g2jn\" (UID: \"eca4ec55-572f-4b80-95a6-8a3a55322ad7\") " pod="openshift-marketplace/certified-operators-8g2jn" Nov 25 08:30:16 crc kubenswrapper[5043]: I1125 08:30:16.028793 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8g2jn" Nov 25 08:30:16 crc kubenswrapper[5043]: I1125 08:30:16.625063 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8g2jn"] Nov 25 08:30:16 crc kubenswrapper[5043]: I1125 08:30:16.919881 5043 generic.go:334] "Generic (PLEG): container finished" podID="eca4ec55-572f-4b80-95a6-8a3a55322ad7" containerID="0c803ced144cf9237638a17a7ea220d0f13c39b03c94d1439cdc8afd5e6418b3" exitCode=0 Nov 25 08:30:16 crc kubenswrapper[5043]: I1125 08:30:16.919964 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g2jn" event={"ID":"eca4ec55-572f-4b80-95a6-8a3a55322ad7","Type":"ContainerDied","Data":"0c803ced144cf9237638a17a7ea220d0f13c39b03c94d1439cdc8afd5e6418b3"} Nov 25 08:30:16 crc kubenswrapper[5043]: I1125 08:30:16.920484 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g2jn" event={"ID":"eca4ec55-572f-4b80-95a6-8a3a55322ad7","Type":"ContainerStarted","Data":"258a09e033e72d55f18378aac6733e29a6de3e63fa1e1147e8521338867e43c3"} Nov 25 08:30:17 crc kubenswrapper[5043]: I1125 08:30:17.933097 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g2jn" event={"ID":"eca4ec55-572f-4b80-95a6-8a3a55322ad7","Type":"ContainerStarted","Data":"26722dc2d6ff345e80b09ef30498aff03c84adb78995d5d5a08b2f97a4bda7c7"} Nov 25 08:30:17 crc kubenswrapper[5043]: I1125 08:30:17.962665 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:30:17 crc kubenswrapper[5043]: E1125 08:30:17.963015 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:30:18 crc kubenswrapper[5043]: I1125 08:30:18.946856 5043 generic.go:334] "Generic (PLEG): container finished" podID="eca4ec55-572f-4b80-95a6-8a3a55322ad7" containerID="26722dc2d6ff345e80b09ef30498aff03c84adb78995d5d5a08b2f97a4bda7c7" exitCode=0 Nov 25 08:30:18 crc kubenswrapper[5043]: I1125 08:30:18.947006 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g2jn" event={"ID":"eca4ec55-572f-4b80-95a6-8a3a55322ad7","Type":"ContainerDied","Data":"26722dc2d6ff345e80b09ef30498aff03c84adb78995d5d5a08b2f97a4bda7c7"} Nov 25 08:30:19 crc kubenswrapper[5043]: I1125 08:30:19.956426 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g2jn" event={"ID":"eca4ec55-572f-4b80-95a6-8a3a55322ad7","Type":"ContainerStarted","Data":"8e4ac9b999d631405a63a5187294b771ea92a3fd20e7043429becbb87d3502fe"} Nov 25 08:30:19 crc kubenswrapper[5043]: I1125 08:30:19.979450 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8g2jn" podStartSLOduration=2.54702357 podStartE2EDuration="4.97942957s" podCreationTimestamp="2025-11-25 08:30:15 +0000 UTC" firstStartedPulling="2025-11-25 08:30:16.921690929 +0000 UTC m=+4481.089886650" lastFinishedPulling="2025-11-25 08:30:19.354096909 +0000 UTC m=+4483.522292650" observedRunningTime="2025-11-25 08:30:19.971818795 +0000 UTC m=+4484.140014546" watchObservedRunningTime="2025-11-25 08:30:19.97942957 +0000 UTC m=+4484.147625281" Nov 25 08:30:26 crc kubenswrapper[5043]: I1125 08:30:26.029542 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8g2jn" Nov 25 08:30:26 crc kubenswrapper[5043]: I1125 
08:30:26.032795 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8g2jn" Nov 25 08:30:26 crc kubenswrapper[5043]: I1125 08:30:26.085207 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8g2jn" Nov 25 08:30:27 crc kubenswrapper[5043]: I1125 08:30:27.088203 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8g2jn" Nov 25 08:30:27 crc kubenswrapper[5043]: I1125 08:30:27.152758 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8g2jn"] Nov 25 08:30:29 crc kubenswrapper[5043]: I1125 08:30:29.039233 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8g2jn" podUID="eca4ec55-572f-4b80-95a6-8a3a55322ad7" containerName="registry-server" containerID="cri-o://8e4ac9b999d631405a63a5187294b771ea92a3fd20e7043429becbb87d3502fe" gracePeriod=2 Nov 25 08:30:29 crc kubenswrapper[5043]: I1125 08:30:29.725172 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8g2jn" Nov 25 08:30:29 crc kubenswrapper[5043]: I1125 08:30:29.910708 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eca4ec55-572f-4b80-95a6-8a3a55322ad7-catalog-content\") pod \"eca4ec55-572f-4b80-95a6-8a3a55322ad7\" (UID: \"eca4ec55-572f-4b80-95a6-8a3a55322ad7\") " Nov 25 08:30:29 crc kubenswrapper[5043]: I1125 08:30:29.911338 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhkzt\" (UniqueName: \"kubernetes.io/projected/eca4ec55-572f-4b80-95a6-8a3a55322ad7-kube-api-access-jhkzt\") pod \"eca4ec55-572f-4b80-95a6-8a3a55322ad7\" (UID: \"eca4ec55-572f-4b80-95a6-8a3a55322ad7\") " Nov 25 08:30:29 crc kubenswrapper[5043]: I1125 08:30:29.911392 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eca4ec55-572f-4b80-95a6-8a3a55322ad7-utilities\") pod \"eca4ec55-572f-4b80-95a6-8a3a55322ad7\" (UID: \"eca4ec55-572f-4b80-95a6-8a3a55322ad7\") " Nov 25 08:30:29 crc kubenswrapper[5043]: I1125 08:30:29.914262 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eca4ec55-572f-4b80-95a6-8a3a55322ad7-utilities" (OuterVolumeSpecName: "utilities") pod "eca4ec55-572f-4b80-95a6-8a3a55322ad7" (UID: "eca4ec55-572f-4b80-95a6-8a3a55322ad7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:30:29 crc kubenswrapper[5043]: I1125 08:30:29.920340 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eca4ec55-572f-4b80-95a6-8a3a55322ad7-kube-api-access-jhkzt" (OuterVolumeSpecName: "kube-api-access-jhkzt") pod "eca4ec55-572f-4b80-95a6-8a3a55322ad7" (UID: "eca4ec55-572f-4b80-95a6-8a3a55322ad7"). InnerVolumeSpecName "kube-api-access-jhkzt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:30:29 crc kubenswrapper[5043]: I1125 08:30:29.960395 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eca4ec55-572f-4b80-95a6-8a3a55322ad7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eca4ec55-572f-4b80-95a6-8a3a55322ad7" (UID: "eca4ec55-572f-4b80-95a6-8a3a55322ad7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:30:29 crc kubenswrapper[5043]: I1125 08:30:29.963391 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:30:29 crc kubenswrapper[5043]: E1125 08:30:29.963633 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:30:30 crc kubenswrapper[5043]: I1125 08:30:30.014428 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eca4ec55-572f-4b80-95a6-8a3a55322ad7-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:30 crc kubenswrapper[5043]: I1125 08:30:30.014464 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhkzt\" (UniqueName: \"kubernetes.io/projected/eca4ec55-572f-4b80-95a6-8a3a55322ad7-kube-api-access-jhkzt\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:30 crc kubenswrapper[5043]: I1125 08:30:30.014473 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eca4ec55-572f-4b80-95a6-8a3a55322ad7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:30:30 
crc kubenswrapper[5043]: I1125 08:30:30.049735 5043 generic.go:334] "Generic (PLEG): container finished" podID="eca4ec55-572f-4b80-95a6-8a3a55322ad7" containerID="8e4ac9b999d631405a63a5187294b771ea92a3fd20e7043429becbb87d3502fe" exitCode=0 Nov 25 08:30:30 crc kubenswrapper[5043]: I1125 08:30:30.049787 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8g2jn" Nov 25 08:30:30 crc kubenswrapper[5043]: I1125 08:30:30.049784 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g2jn" event={"ID":"eca4ec55-572f-4b80-95a6-8a3a55322ad7","Type":"ContainerDied","Data":"8e4ac9b999d631405a63a5187294b771ea92a3fd20e7043429becbb87d3502fe"} Nov 25 08:30:30 crc kubenswrapper[5043]: I1125 08:30:30.056158 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8g2jn" event={"ID":"eca4ec55-572f-4b80-95a6-8a3a55322ad7","Type":"ContainerDied","Data":"258a09e033e72d55f18378aac6733e29a6de3e63fa1e1147e8521338867e43c3"} Nov 25 08:30:30 crc kubenswrapper[5043]: I1125 08:30:30.056197 5043 scope.go:117] "RemoveContainer" containerID="8e4ac9b999d631405a63a5187294b771ea92a3fd20e7043429becbb87d3502fe" Nov 25 08:30:30 crc kubenswrapper[5043]: I1125 08:30:30.082025 5043 scope.go:117] "RemoveContainer" containerID="26722dc2d6ff345e80b09ef30498aff03c84adb78995d5d5a08b2f97a4bda7c7" Nov 25 08:30:30 crc kubenswrapper[5043]: I1125 08:30:30.097150 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8g2jn"] Nov 25 08:30:30 crc kubenswrapper[5043]: I1125 08:30:30.108505 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8g2jn"] Nov 25 08:30:30 crc kubenswrapper[5043]: I1125 08:30:30.123332 5043 scope.go:117] "RemoveContainer" containerID="0c803ced144cf9237638a17a7ea220d0f13c39b03c94d1439cdc8afd5e6418b3" Nov 25 08:30:30 crc 
kubenswrapper[5043]: I1125 08:30:30.161219 5043 scope.go:117] "RemoveContainer" containerID="8e4ac9b999d631405a63a5187294b771ea92a3fd20e7043429becbb87d3502fe" Nov 25 08:30:30 crc kubenswrapper[5043]: E1125 08:30:30.162350 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e4ac9b999d631405a63a5187294b771ea92a3fd20e7043429becbb87d3502fe\": container with ID starting with 8e4ac9b999d631405a63a5187294b771ea92a3fd20e7043429becbb87d3502fe not found: ID does not exist" containerID="8e4ac9b999d631405a63a5187294b771ea92a3fd20e7043429becbb87d3502fe" Nov 25 08:30:30 crc kubenswrapper[5043]: I1125 08:30:30.162389 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e4ac9b999d631405a63a5187294b771ea92a3fd20e7043429becbb87d3502fe"} err="failed to get container status \"8e4ac9b999d631405a63a5187294b771ea92a3fd20e7043429becbb87d3502fe\": rpc error: code = NotFound desc = could not find container \"8e4ac9b999d631405a63a5187294b771ea92a3fd20e7043429becbb87d3502fe\": container with ID starting with 8e4ac9b999d631405a63a5187294b771ea92a3fd20e7043429becbb87d3502fe not found: ID does not exist" Nov 25 08:30:30 crc kubenswrapper[5043]: I1125 08:30:30.162413 5043 scope.go:117] "RemoveContainer" containerID="26722dc2d6ff345e80b09ef30498aff03c84adb78995d5d5a08b2f97a4bda7c7" Nov 25 08:30:30 crc kubenswrapper[5043]: E1125 08:30:30.163029 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26722dc2d6ff345e80b09ef30498aff03c84adb78995d5d5a08b2f97a4bda7c7\": container with ID starting with 26722dc2d6ff345e80b09ef30498aff03c84adb78995d5d5a08b2f97a4bda7c7 not found: ID does not exist" containerID="26722dc2d6ff345e80b09ef30498aff03c84adb78995d5d5a08b2f97a4bda7c7" Nov 25 08:30:30 crc kubenswrapper[5043]: I1125 08:30:30.163047 5043 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"26722dc2d6ff345e80b09ef30498aff03c84adb78995d5d5a08b2f97a4bda7c7"} err="failed to get container status \"26722dc2d6ff345e80b09ef30498aff03c84adb78995d5d5a08b2f97a4bda7c7\": rpc error: code = NotFound desc = could not find container \"26722dc2d6ff345e80b09ef30498aff03c84adb78995d5d5a08b2f97a4bda7c7\": container with ID starting with 26722dc2d6ff345e80b09ef30498aff03c84adb78995d5d5a08b2f97a4bda7c7 not found: ID does not exist" Nov 25 08:30:30 crc kubenswrapper[5043]: I1125 08:30:30.163060 5043 scope.go:117] "RemoveContainer" containerID="0c803ced144cf9237638a17a7ea220d0f13c39b03c94d1439cdc8afd5e6418b3" Nov 25 08:30:30 crc kubenswrapper[5043]: E1125 08:30:30.163527 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c803ced144cf9237638a17a7ea220d0f13c39b03c94d1439cdc8afd5e6418b3\": container with ID starting with 0c803ced144cf9237638a17a7ea220d0f13c39b03c94d1439cdc8afd5e6418b3 not found: ID does not exist" containerID="0c803ced144cf9237638a17a7ea220d0f13c39b03c94d1439cdc8afd5e6418b3" Nov 25 08:30:30 crc kubenswrapper[5043]: I1125 08:30:30.163581 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c803ced144cf9237638a17a7ea220d0f13c39b03c94d1439cdc8afd5e6418b3"} err="failed to get container status \"0c803ced144cf9237638a17a7ea220d0f13c39b03c94d1439cdc8afd5e6418b3\": rpc error: code = NotFound desc = could not find container \"0c803ced144cf9237638a17a7ea220d0f13c39b03c94d1439cdc8afd5e6418b3\": container with ID starting with 0c803ced144cf9237638a17a7ea220d0f13c39b03c94d1439cdc8afd5e6418b3 not found: ID does not exist" Nov 25 08:30:30 crc kubenswrapper[5043]: I1125 08:30:30.991894 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eca4ec55-572f-4b80-95a6-8a3a55322ad7" path="/var/lib/kubelet/pods/eca4ec55-572f-4b80-95a6-8a3a55322ad7/volumes" Nov 25 08:30:40 crc kubenswrapper[5043]: I1125 
08:30:40.966666 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:30:40 crc kubenswrapper[5043]: E1125 08:30:40.967751 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:30:53 crc kubenswrapper[5043]: I1125 08:30:53.602298 5043 scope.go:117] "RemoveContainer" containerID="ffc30b9cde6292de7dda31d8ad43cf4345dbb445b433617c12fb76c451f28a56" Nov 25 08:30:53 crc kubenswrapper[5043]: I1125 08:30:53.963510 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:30:53 crc kubenswrapper[5043]: E1125 08:30:53.963867 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:31:05 crc kubenswrapper[5043]: I1125 08:31:05.963722 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:31:05 crc kubenswrapper[5043]: E1125 08:31:05.964803 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:31:19 crc kubenswrapper[5043]: I1125 08:31:19.963862 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:31:19 crc kubenswrapper[5043]: E1125 08:31:19.964870 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:31:31 crc kubenswrapper[5043]: I1125 08:31:31.963121 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:31:31 crc kubenswrapper[5043]: E1125 08:31:31.964786 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:31:46 crc kubenswrapper[5043]: I1125 08:31:46.969957 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:31:46 crc kubenswrapper[5043]: E1125 08:31:46.971416 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:32:00 crc kubenswrapper[5043]: I1125 08:32:00.963948 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:32:00 crc kubenswrapper[5043]: E1125 08:32:00.964735 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:32:11 crc kubenswrapper[5043]: I1125 08:32:11.962525 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:32:11 crc kubenswrapper[5043]: E1125 08:32:11.963132 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:32:20 crc kubenswrapper[5043]: I1125 08:32:20.454230 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t9gtv"] Nov 25 08:32:20 crc kubenswrapper[5043]: E1125 08:32:20.455243 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eca4ec55-572f-4b80-95a6-8a3a55322ad7" containerName="extract-utilities" Nov 25 08:32:20 crc 
kubenswrapper[5043]: I1125 08:32:20.455260 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="eca4ec55-572f-4b80-95a6-8a3a55322ad7" containerName="extract-utilities" Nov 25 08:32:20 crc kubenswrapper[5043]: E1125 08:32:20.455276 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eca4ec55-572f-4b80-95a6-8a3a55322ad7" containerName="extract-content" Nov 25 08:32:20 crc kubenswrapper[5043]: I1125 08:32:20.455285 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="eca4ec55-572f-4b80-95a6-8a3a55322ad7" containerName="extract-content" Nov 25 08:32:20 crc kubenswrapper[5043]: E1125 08:32:20.455297 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eca4ec55-572f-4b80-95a6-8a3a55322ad7" containerName="registry-server" Nov 25 08:32:20 crc kubenswrapper[5043]: I1125 08:32:20.455305 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="eca4ec55-572f-4b80-95a6-8a3a55322ad7" containerName="registry-server" Nov 25 08:32:20 crc kubenswrapper[5043]: I1125 08:32:20.455537 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="eca4ec55-572f-4b80-95a6-8a3a55322ad7" containerName="registry-server" Nov 25 08:32:20 crc kubenswrapper[5043]: I1125 08:32:20.457315 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t9gtv" Nov 25 08:32:20 crc kubenswrapper[5043]: I1125 08:32:20.482779 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t9gtv"] Nov 25 08:32:20 crc kubenswrapper[5043]: I1125 08:32:20.556151 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/527ddee2-86b1-4314-b48b-ae437a62887e-catalog-content\") pod \"redhat-operators-t9gtv\" (UID: \"527ddee2-86b1-4314-b48b-ae437a62887e\") " pod="openshift-marketplace/redhat-operators-t9gtv" Nov 25 08:32:20 crc kubenswrapper[5043]: I1125 08:32:20.556236 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqh8p\" (UniqueName: \"kubernetes.io/projected/527ddee2-86b1-4314-b48b-ae437a62887e-kube-api-access-cqh8p\") pod \"redhat-operators-t9gtv\" (UID: \"527ddee2-86b1-4314-b48b-ae437a62887e\") " pod="openshift-marketplace/redhat-operators-t9gtv" Nov 25 08:32:20 crc kubenswrapper[5043]: I1125 08:32:20.556416 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/527ddee2-86b1-4314-b48b-ae437a62887e-utilities\") pod \"redhat-operators-t9gtv\" (UID: \"527ddee2-86b1-4314-b48b-ae437a62887e\") " pod="openshift-marketplace/redhat-operators-t9gtv" Nov 25 08:32:20 crc kubenswrapper[5043]: I1125 08:32:20.669191 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/527ddee2-86b1-4314-b48b-ae437a62887e-catalog-content\") pod \"redhat-operators-t9gtv\" (UID: \"527ddee2-86b1-4314-b48b-ae437a62887e\") " pod="openshift-marketplace/redhat-operators-t9gtv" Nov 25 08:32:20 crc kubenswrapper[5043]: I1125 08:32:20.669274 5043 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-cqh8p\" (UniqueName: \"kubernetes.io/projected/527ddee2-86b1-4314-b48b-ae437a62887e-kube-api-access-cqh8p\") pod \"redhat-operators-t9gtv\" (UID: \"527ddee2-86b1-4314-b48b-ae437a62887e\") " pod="openshift-marketplace/redhat-operators-t9gtv" Nov 25 08:32:20 crc kubenswrapper[5043]: I1125 08:32:20.669361 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/527ddee2-86b1-4314-b48b-ae437a62887e-utilities\") pod \"redhat-operators-t9gtv\" (UID: \"527ddee2-86b1-4314-b48b-ae437a62887e\") " pod="openshift-marketplace/redhat-operators-t9gtv" Nov 25 08:32:20 crc kubenswrapper[5043]: I1125 08:32:20.669768 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/527ddee2-86b1-4314-b48b-ae437a62887e-utilities\") pod \"redhat-operators-t9gtv\" (UID: \"527ddee2-86b1-4314-b48b-ae437a62887e\") " pod="openshift-marketplace/redhat-operators-t9gtv" Nov 25 08:32:20 crc kubenswrapper[5043]: I1125 08:32:20.670029 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/527ddee2-86b1-4314-b48b-ae437a62887e-catalog-content\") pod \"redhat-operators-t9gtv\" (UID: \"527ddee2-86b1-4314-b48b-ae437a62887e\") " pod="openshift-marketplace/redhat-operators-t9gtv" Nov 25 08:32:20 crc kubenswrapper[5043]: I1125 08:32:20.692593 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqh8p\" (UniqueName: \"kubernetes.io/projected/527ddee2-86b1-4314-b48b-ae437a62887e-kube-api-access-cqh8p\") pod \"redhat-operators-t9gtv\" (UID: \"527ddee2-86b1-4314-b48b-ae437a62887e\") " pod="openshift-marketplace/redhat-operators-t9gtv" Nov 25 08:32:20 crc kubenswrapper[5043]: I1125 08:32:20.788555 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t9gtv" Nov 25 08:32:21 crc kubenswrapper[5043]: I1125 08:32:21.300628 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t9gtv"] Nov 25 08:32:22 crc kubenswrapper[5043]: I1125 08:32:22.057101 5043 generic.go:334] "Generic (PLEG): container finished" podID="527ddee2-86b1-4314-b48b-ae437a62887e" containerID="4996579688d5934d281a614bebe6d0f30fe4c756fb41543e80d072a922041971" exitCode=0 Nov 25 08:32:22 crc kubenswrapper[5043]: I1125 08:32:22.057156 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9gtv" event={"ID":"527ddee2-86b1-4314-b48b-ae437a62887e","Type":"ContainerDied","Data":"4996579688d5934d281a614bebe6d0f30fe4c756fb41543e80d072a922041971"} Nov 25 08:32:22 crc kubenswrapper[5043]: I1125 08:32:22.057404 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9gtv" event={"ID":"527ddee2-86b1-4314-b48b-ae437a62887e","Type":"ContainerStarted","Data":"b8ed54cdd32d5741186ecb1a3f64b889ef9cefc5a0b736e44f3c60b11f378dec"} Nov 25 08:32:22 crc kubenswrapper[5043]: I1125 08:32:22.059660 5043 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:32:23 crc kubenswrapper[5043]: I1125 08:32:23.962757 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:32:24 crc kubenswrapper[5043]: I1125 08:32:24.083458 5043 generic.go:334] "Generic (PLEG): container finished" podID="527ddee2-86b1-4314-b48b-ae437a62887e" containerID="9e262b49259e42ae7bd6382ccb729541d55565c8c621ff06988f5c1dcf4bd257" exitCode=0 Nov 25 08:32:24 crc kubenswrapper[5043]: I1125 08:32:24.083571 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9gtv" 
event={"ID":"527ddee2-86b1-4314-b48b-ae437a62887e","Type":"ContainerDied","Data":"9e262b49259e42ae7bd6382ccb729541d55565c8c621ff06988f5c1dcf4bd257"} Nov 25 08:32:26 crc kubenswrapper[5043]: I1125 08:32:26.105139 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"7debc521cf0d1b4377e91cdb36a8856875226c6572dfb0a6d374f4af3231143d"} Nov 25 08:32:28 crc kubenswrapper[5043]: I1125 08:32:28.125813 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9gtv" event={"ID":"527ddee2-86b1-4314-b48b-ae437a62887e","Type":"ContainerStarted","Data":"802b3e30c189aec8427c3649b02507a4e7c02c61e8af840570e84ce35851682d"} Nov 25 08:32:28 crc kubenswrapper[5043]: I1125 08:32:28.152022 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t9gtv" podStartSLOduration=3.160572968 podStartE2EDuration="8.151998227s" podCreationTimestamp="2025-11-25 08:32:20 +0000 UTC" firstStartedPulling="2025-11-25 08:32:22.059350289 +0000 UTC m=+4606.227546010" lastFinishedPulling="2025-11-25 08:32:27.050775548 +0000 UTC m=+4611.218971269" observedRunningTime="2025-11-25 08:32:28.149053588 +0000 UTC m=+4612.317249329" watchObservedRunningTime="2025-11-25 08:32:28.151998227 +0000 UTC m=+4612.320193958" Nov 25 08:32:30 crc kubenswrapper[5043]: I1125 08:32:30.789666 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t9gtv" Nov 25 08:32:30 crc kubenswrapper[5043]: I1125 08:32:30.790206 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t9gtv" Nov 25 08:32:31 crc kubenswrapper[5043]: I1125 08:32:31.866272 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t9gtv" 
podUID="527ddee2-86b1-4314-b48b-ae437a62887e" containerName="registry-server" probeResult="failure" output=< Nov 25 08:32:31 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s Nov 25 08:32:31 crc kubenswrapper[5043]: > Nov 25 08:32:40 crc kubenswrapper[5043]: I1125 08:32:40.838457 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t9gtv" Nov 25 08:32:40 crc kubenswrapper[5043]: I1125 08:32:40.901443 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t9gtv" Nov 25 08:32:41 crc kubenswrapper[5043]: I1125 08:32:41.072899 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t9gtv"] Nov 25 08:32:42 crc kubenswrapper[5043]: I1125 08:32:42.246545 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t9gtv" podUID="527ddee2-86b1-4314-b48b-ae437a62887e" containerName="registry-server" containerID="cri-o://802b3e30c189aec8427c3649b02507a4e7c02c61e8af840570e84ce35851682d" gracePeriod=2 Nov 25 08:32:42 crc kubenswrapper[5043]: I1125 08:32:42.946327 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t9gtv" Nov 25 08:32:43 crc kubenswrapper[5043]: I1125 08:32:43.109005 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/527ddee2-86b1-4314-b48b-ae437a62887e-catalog-content\") pod \"527ddee2-86b1-4314-b48b-ae437a62887e\" (UID: \"527ddee2-86b1-4314-b48b-ae437a62887e\") " Nov 25 08:32:43 crc kubenswrapper[5043]: I1125 08:32:43.109078 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/527ddee2-86b1-4314-b48b-ae437a62887e-utilities\") pod \"527ddee2-86b1-4314-b48b-ae437a62887e\" (UID: \"527ddee2-86b1-4314-b48b-ae437a62887e\") " Nov 25 08:32:43 crc kubenswrapper[5043]: I1125 08:32:43.109230 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqh8p\" (UniqueName: \"kubernetes.io/projected/527ddee2-86b1-4314-b48b-ae437a62887e-kube-api-access-cqh8p\") pod \"527ddee2-86b1-4314-b48b-ae437a62887e\" (UID: \"527ddee2-86b1-4314-b48b-ae437a62887e\") " Nov 25 08:32:43 crc kubenswrapper[5043]: I1125 08:32:43.110998 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/527ddee2-86b1-4314-b48b-ae437a62887e-utilities" (OuterVolumeSpecName: "utilities") pod "527ddee2-86b1-4314-b48b-ae437a62887e" (UID: "527ddee2-86b1-4314-b48b-ae437a62887e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:32:43 crc kubenswrapper[5043]: I1125 08:32:43.117454 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/527ddee2-86b1-4314-b48b-ae437a62887e-kube-api-access-cqh8p" (OuterVolumeSpecName: "kube-api-access-cqh8p") pod "527ddee2-86b1-4314-b48b-ae437a62887e" (UID: "527ddee2-86b1-4314-b48b-ae437a62887e"). InnerVolumeSpecName "kube-api-access-cqh8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:32:43 crc kubenswrapper[5043]: I1125 08:32:43.201021 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/527ddee2-86b1-4314-b48b-ae437a62887e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "527ddee2-86b1-4314-b48b-ae437a62887e" (UID: "527ddee2-86b1-4314-b48b-ae437a62887e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:32:43 crc kubenswrapper[5043]: I1125 08:32:43.211386 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/527ddee2-86b1-4314-b48b-ae437a62887e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:43 crc kubenswrapper[5043]: I1125 08:32:43.211423 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/527ddee2-86b1-4314-b48b-ae437a62887e-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:43 crc kubenswrapper[5043]: I1125 08:32:43.211433 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqh8p\" (UniqueName: \"kubernetes.io/projected/527ddee2-86b1-4314-b48b-ae437a62887e-kube-api-access-cqh8p\") on node \"crc\" DevicePath \"\"" Nov 25 08:32:43 crc kubenswrapper[5043]: I1125 08:32:43.257616 5043 generic.go:334] "Generic (PLEG): container finished" podID="527ddee2-86b1-4314-b48b-ae437a62887e" containerID="802b3e30c189aec8427c3649b02507a4e7c02c61e8af840570e84ce35851682d" exitCode=0 Nov 25 08:32:43 crc kubenswrapper[5043]: I1125 08:32:43.257666 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9gtv" event={"ID":"527ddee2-86b1-4314-b48b-ae437a62887e","Type":"ContainerDied","Data":"802b3e30c189aec8427c3649b02507a4e7c02c61e8af840570e84ce35851682d"} Nov 25 08:32:43 crc kubenswrapper[5043]: I1125 08:32:43.257698 5043 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-t9gtv" event={"ID":"527ddee2-86b1-4314-b48b-ae437a62887e","Type":"ContainerDied","Data":"b8ed54cdd32d5741186ecb1a3f64b889ef9cefc5a0b736e44f3c60b11f378dec"} Nov 25 08:32:43 crc kubenswrapper[5043]: I1125 08:32:43.257718 5043 scope.go:117] "RemoveContainer" containerID="802b3e30c189aec8427c3649b02507a4e7c02c61e8af840570e84ce35851682d" Nov 25 08:32:43 crc kubenswrapper[5043]: I1125 08:32:43.257724 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t9gtv" Nov 25 08:32:43 crc kubenswrapper[5043]: I1125 08:32:43.277356 5043 scope.go:117] "RemoveContainer" containerID="9e262b49259e42ae7bd6382ccb729541d55565c8c621ff06988f5c1dcf4bd257" Nov 25 08:32:43 crc kubenswrapper[5043]: I1125 08:32:43.291248 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t9gtv"] Nov 25 08:32:43 crc kubenswrapper[5043]: I1125 08:32:43.300811 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t9gtv"] Nov 25 08:32:43 crc kubenswrapper[5043]: I1125 08:32:43.312099 5043 scope.go:117] "RemoveContainer" containerID="4996579688d5934d281a614bebe6d0f30fe4c756fb41543e80d072a922041971" Nov 25 08:32:43 crc kubenswrapper[5043]: I1125 08:32:43.343423 5043 scope.go:117] "RemoveContainer" containerID="802b3e30c189aec8427c3649b02507a4e7c02c61e8af840570e84ce35851682d" Nov 25 08:32:43 crc kubenswrapper[5043]: E1125 08:32:43.343802 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"802b3e30c189aec8427c3649b02507a4e7c02c61e8af840570e84ce35851682d\": container with ID starting with 802b3e30c189aec8427c3649b02507a4e7c02c61e8af840570e84ce35851682d not found: ID does not exist" containerID="802b3e30c189aec8427c3649b02507a4e7c02c61e8af840570e84ce35851682d" Nov 25 08:32:43 crc kubenswrapper[5043]: I1125 08:32:43.343867 5043 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"802b3e30c189aec8427c3649b02507a4e7c02c61e8af840570e84ce35851682d"} err="failed to get container status \"802b3e30c189aec8427c3649b02507a4e7c02c61e8af840570e84ce35851682d\": rpc error: code = NotFound desc = could not find container \"802b3e30c189aec8427c3649b02507a4e7c02c61e8af840570e84ce35851682d\": container with ID starting with 802b3e30c189aec8427c3649b02507a4e7c02c61e8af840570e84ce35851682d not found: ID does not exist" Nov 25 08:32:43 crc kubenswrapper[5043]: I1125 08:32:43.343896 5043 scope.go:117] "RemoveContainer" containerID="9e262b49259e42ae7bd6382ccb729541d55565c8c621ff06988f5c1dcf4bd257" Nov 25 08:32:43 crc kubenswrapper[5043]: E1125 08:32:43.344319 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e262b49259e42ae7bd6382ccb729541d55565c8c621ff06988f5c1dcf4bd257\": container with ID starting with 9e262b49259e42ae7bd6382ccb729541d55565c8c621ff06988f5c1dcf4bd257 not found: ID does not exist" containerID="9e262b49259e42ae7bd6382ccb729541d55565c8c621ff06988f5c1dcf4bd257" Nov 25 08:32:43 crc kubenswrapper[5043]: I1125 08:32:43.344364 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e262b49259e42ae7bd6382ccb729541d55565c8c621ff06988f5c1dcf4bd257"} err="failed to get container status \"9e262b49259e42ae7bd6382ccb729541d55565c8c621ff06988f5c1dcf4bd257\": rpc error: code = NotFound desc = could not find container \"9e262b49259e42ae7bd6382ccb729541d55565c8c621ff06988f5c1dcf4bd257\": container with ID starting with 9e262b49259e42ae7bd6382ccb729541d55565c8c621ff06988f5c1dcf4bd257 not found: ID does not exist" Nov 25 08:32:43 crc kubenswrapper[5043]: I1125 08:32:43.344390 5043 scope.go:117] "RemoveContainer" containerID="4996579688d5934d281a614bebe6d0f30fe4c756fb41543e80d072a922041971" Nov 25 08:32:43 crc kubenswrapper[5043]: E1125 
08:32:43.344749 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4996579688d5934d281a614bebe6d0f30fe4c756fb41543e80d072a922041971\": container with ID starting with 4996579688d5934d281a614bebe6d0f30fe4c756fb41543e80d072a922041971 not found: ID does not exist" containerID="4996579688d5934d281a614bebe6d0f30fe4c756fb41543e80d072a922041971" Nov 25 08:32:43 crc kubenswrapper[5043]: I1125 08:32:43.344775 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4996579688d5934d281a614bebe6d0f30fe4c756fb41543e80d072a922041971"} err="failed to get container status \"4996579688d5934d281a614bebe6d0f30fe4c756fb41543e80d072a922041971\": rpc error: code = NotFound desc = could not find container \"4996579688d5934d281a614bebe6d0f30fe4c756fb41543e80d072a922041971\": container with ID starting with 4996579688d5934d281a614bebe6d0f30fe4c756fb41543e80d072a922041971 not found: ID does not exist" Nov 25 08:32:44 crc kubenswrapper[5043]: I1125 08:32:44.975238 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="527ddee2-86b1-4314-b48b-ae437a62887e" path="/var/lib/kubelet/pods/527ddee2-86b1-4314-b48b-ae437a62887e/volumes" Nov 25 08:33:03 crc kubenswrapper[5043]: I1125 08:33:03.055033 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vw2w2"] Nov 25 08:33:03 crc kubenswrapper[5043]: E1125 08:33:03.056323 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="527ddee2-86b1-4314-b48b-ae437a62887e" containerName="extract-utilities" Nov 25 08:33:03 crc kubenswrapper[5043]: I1125 08:33:03.056339 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="527ddee2-86b1-4314-b48b-ae437a62887e" containerName="extract-utilities" Nov 25 08:33:03 crc kubenswrapper[5043]: E1125 08:33:03.056378 5043 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="527ddee2-86b1-4314-b48b-ae437a62887e" containerName="extract-content" Nov 25 08:33:03 crc kubenswrapper[5043]: I1125 08:33:03.056387 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="527ddee2-86b1-4314-b48b-ae437a62887e" containerName="extract-content" Nov 25 08:33:03 crc kubenswrapper[5043]: E1125 08:33:03.056407 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="527ddee2-86b1-4314-b48b-ae437a62887e" containerName="registry-server" Nov 25 08:33:03 crc kubenswrapper[5043]: I1125 08:33:03.056414 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="527ddee2-86b1-4314-b48b-ae437a62887e" containerName="registry-server" Nov 25 08:33:03 crc kubenswrapper[5043]: I1125 08:33:03.056650 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="527ddee2-86b1-4314-b48b-ae437a62887e" containerName="registry-server" Nov 25 08:33:03 crc kubenswrapper[5043]: I1125 08:33:03.058447 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vw2w2" Nov 25 08:33:03 crc kubenswrapper[5043]: I1125 08:33:03.069518 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vw2w2"] Nov 25 08:33:03 crc kubenswrapper[5043]: I1125 08:33:03.154029 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccfc32e-7fea-4970-aef8-c20a84a64512-utilities\") pod \"community-operators-vw2w2\" (UID: \"bccfc32e-7fea-4970-aef8-c20a84a64512\") " pod="openshift-marketplace/community-operators-vw2w2" Nov 25 08:33:03 crc kubenswrapper[5043]: I1125 08:33:03.154105 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb2zl\" (UniqueName: \"kubernetes.io/projected/bccfc32e-7fea-4970-aef8-c20a84a64512-kube-api-access-pb2zl\") pod \"community-operators-vw2w2\" (UID: 
\"bccfc32e-7fea-4970-aef8-c20a84a64512\") " pod="openshift-marketplace/community-operators-vw2w2" Nov 25 08:33:03 crc kubenswrapper[5043]: I1125 08:33:03.154180 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccfc32e-7fea-4970-aef8-c20a84a64512-catalog-content\") pod \"community-operators-vw2w2\" (UID: \"bccfc32e-7fea-4970-aef8-c20a84a64512\") " pod="openshift-marketplace/community-operators-vw2w2" Nov 25 08:33:03 crc kubenswrapper[5043]: I1125 08:33:03.255407 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccfc32e-7fea-4970-aef8-c20a84a64512-utilities\") pod \"community-operators-vw2w2\" (UID: \"bccfc32e-7fea-4970-aef8-c20a84a64512\") " pod="openshift-marketplace/community-operators-vw2w2" Nov 25 08:33:03 crc kubenswrapper[5043]: I1125 08:33:03.255471 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb2zl\" (UniqueName: \"kubernetes.io/projected/bccfc32e-7fea-4970-aef8-c20a84a64512-kube-api-access-pb2zl\") pod \"community-operators-vw2w2\" (UID: \"bccfc32e-7fea-4970-aef8-c20a84a64512\") " pod="openshift-marketplace/community-operators-vw2w2" Nov 25 08:33:03 crc kubenswrapper[5043]: I1125 08:33:03.255515 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccfc32e-7fea-4970-aef8-c20a84a64512-catalog-content\") pod \"community-operators-vw2w2\" (UID: \"bccfc32e-7fea-4970-aef8-c20a84a64512\") " pod="openshift-marketplace/community-operators-vw2w2" Nov 25 08:33:03 crc kubenswrapper[5043]: I1125 08:33:03.256117 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccfc32e-7fea-4970-aef8-c20a84a64512-catalog-content\") pod \"community-operators-vw2w2\" (UID: 
\"bccfc32e-7fea-4970-aef8-c20a84a64512\") " pod="openshift-marketplace/community-operators-vw2w2" Nov 25 08:33:03 crc kubenswrapper[5043]: I1125 08:33:03.256318 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccfc32e-7fea-4970-aef8-c20a84a64512-utilities\") pod \"community-operators-vw2w2\" (UID: \"bccfc32e-7fea-4970-aef8-c20a84a64512\") " pod="openshift-marketplace/community-operators-vw2w2" Nov 25 08:33:03 crc kubenswrapper[5043]: I1125 08:33:03.278825 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb2zl\" (UniqueName: \"kubernetes.io/projected/bccfc32e-7fea-4970-aef8-c20a84a64512-kube-api-access-pb2zl\") pod \"community-operators-vw2w2\" (UID: \"bccfc32e-7fea-4970-aef8-c20a84a64512\") " pod="openshift-marketplace/community-operators-vw2w2" Nov 25 08:33:03 crc kubenswrapper[5043]: I1125 08:33:03.378174 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vw2w2" Nov 25 08:33:03 crc kubenswrapper[5043]: I1125 08:33:03.980803 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vw2w2"] Nov 25 08:33:04 crc kubenswrapper[5043]: I1125 08:33:04.449128 5043 generic.go:334] "Generic (PLEG): container finished" podID="bccfc32e-7fea-4970-aef8-c20a84a64512" containerID="0da9cd570dbd05fbedb811197b53f7a33d0f6052c8321d6dc9385c5f22b5f04a" exitCode=0 Nov 25 08:33:04 crc kubenswrapper[5043]: I1125 08:33:04.449476 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vw2w2" event={"ID":"bccfc32e-7fea-4970-aef8-c20a84a64512","Type":"ContainerDied","Data":"0da9cd570dbd05fbedb811197b53f7a33d0f6052c8321d6dc9385c5f22b5f04a"} Nov 25 08:33:04 crc kubenswrapper[5043]: I1125 08:33:04.449520 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vw2w2" 
event={"ID":"bccfc32e-7fea-4970-aef8-c20a84a64512","Type":"ContainerStarted","Data":"bb99c83392a85e7bce2e6b3e732a768dbc6c501c21b203b270a04796f006ec77"} Nov 25 08:33:06 crc kubenswrapper[5043]: I1125 08:33:06.466891 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vw2w2" event={"ID":"bccfc32e-7fea-4970-aef8-c20a84a64512","Type":"ContainerStarted","Data":"3f09cb35b8c96d8b1b5edb093cc737ca227bf76e6c5dbaa510c97cd34021d0dc"} Nov 25 08:33:07 crc kubenswrapper[5043]: I1125 08:33:07.480481 5043 generic.go:334] "Generic (PLEG): container finished" podID="bccfc32e-7fea-4970-aef8-c20a84a64512" containerID="3f09cb35b8c96d8b1b5edb093cc737ca227bf76e6c5dbaa510c97cd34021d0dc" exitCode=0 Nov 25 08:33:07 crc kubenswrapper[5043]: I1125 08:33:07.480822 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vw2w2" event={"ID":"bccfc32e-7fea-4970-aef8-c20a84a64512","Type":"ContainerDied","Data":"3f09cb35b8c96d8b1b5edb093cc737ca227bf76e6c5dbaa510c97cd34021d0dc"} Nov 25 08:33:08 crc kubenswrapper[5043]: I1125 08:33:08.492619 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vw2w2" event={"ID":"bccfc32e-7fea-4970-aef8-c20a84a64512","Type":"ContainerStarted","Data":"99d9862acf9ebb43a1e1f646a629df9df62399fd2ca35f2aacd2de58324e94af"} Nov 25 08:33:08 crc kubenswrapper[5043]: I1125 08:33:08.517695 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vw2w2" podStartSLOduration=1.9981366249999999 podStartE2EDuration="5.517671894s" podCreationTimestamp="2025-11-25 08:33:03 +0000 UTC" firstStartedPulling="2025-11-25 08:33:04.451273936 +0000 UTC m=+4648.619469657" lastFinishedPulling="2025-11-25 08:33:07.970809175 +0000 UTC m=+4652.139004926" observedRunningTime="2025-11-25 08:33:08.515081124 +0000 UTC m=+4652.683276875" watchObservedRunningTime="2025-11-25 08:33:08.517671894 +0000 UTC 
m=+4652.685867615" Nov 25 08:33:13 crc kubenswrapper[5043]: I1125 08:33:13.379112 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vw2w2" Nov 25 08:33:13 crc kubenswrapper[5043]: I1125 08:33:13.379685 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vw2w2" Nov 25 08:33:13 crc kubenswrapper[5043]: I1125 08:33:13.430777 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vw2w2" Nov 25 08:33:13 crc kubenswrapper[5043]: I1125 08:33:13.589164 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vw2w2" Nov 25 08:33:13 crc kubenswrapper[5043]: I1125 08:33:13.665169 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vw2w2"] Nov 25 08:33:15 crc kubenswrapper[5043]: I1125 08:33:15.556462 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vw2w2" podUID="bccfc32e-7fea-4970-aef8-c20a84a64512" containerName="registry-server" containerID="cri-o://99d9862acf9ebb43a1e1f646a629df9df62399fd2ca35f2aacd2de58324e94af" gracePeriod=2 Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.323317 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vw2w2" Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.422817 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccfc32e-7fea-4970-aef8-c20a84a64512-catalog-content\") pod \"bccfc32e-7fea-4970-aef8-c20a84a64512\" (UID: \"bccfc32e-7fea-4970-aef8-c20a84a64512\") " Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.422865 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccfc32e-7fea-4970-aef8-c20a84a64512-utilities\") pod \"bccfc32e-7fea-4970-aef8-c20a84a64512\" (UID: \"bccfc32e-7fea-4970-aef8-c20a84a64512\") " Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.422921 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb2zl\" (UniqueName: \"kubernetes.io/projected/bccfc32e-7fea-4970-aef8-c20a84a64512-kube-api-access-pb2zl\") pod \"bccfc32e-7fea-4970-aef8-c20a84a64512\" (UID: \"bccfc32e-7fea-4970-aef8-c20a84a64512\") " Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.426846 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bccfc32e-7fea-4970-aef8-c20a84a64512-utilities" (OuterVolumeSpecName: "utilities") pod "bccfc32e-7fea-4970-aef8-c20a84a64512" (UID: "bccfc32e-7fea-4970-aef8-c20a84a64512"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.431970 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bccfc32e-7fea-4970-aef8-c20a84a64512-kube-api-access-pb2zl" (OuterVolumeSpecName: "kube-api-access-pb2zl") pod "bccfc32e-7fea-4970-aef8-c20a84a64512" (UID: "bccfc32e-7fea-4970-aef8-c20a84a64512"). InnerVolumeSpecName "kube-api-access-pb2zl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.509811 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bccfc32e-7fea-4970-aef8-c20a84a64512-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bccfc32e-7fea-4970-aef8-c20a84a64512" (UID: "bccfc32e-7fea-4970-aef8-c20a84a64512"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.525188 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccfc32e-7fea-4970-aef8-c20a84a64512-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.525230 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccfc32e-7fea-4970-aef8-c20a84a64512-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.525244 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb2zl\" (UniqueName: \"kubernetes.io/projected/bccfc32e-7fea-4970-aef8-c20a84a64512-kube-api-access-pb2zl\") on node \"crc\" DevicePath \"\"" Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.568335 5043 generic.go:334] "Generic (PLEG): container finished" podID="bccfc32e-7fea-4970-aef8-c20a84a64512" containerID="99d9862acf9ebb43a1e1f646a629df9df62399fd2ca35f2aacd2de58324e94af" exitCode=0 Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.568377 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vw2w2" event={"ID":"bccfc32e-7fea-4970-aef8-c20a84a64512","Type":"ContainerDied","Data":"99d9862acf9ebb43a1e1f646a629df9df62399fd2ca35f2aacd2de58324e94af"} Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.568390 5043 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-vw2w2" Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.568403 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vw2w2" event={"ID":"bccfc32e-7fea-4970-aef8-c20a84a64512","Type":"ContainerDied","Data":"bb99c83392a85e7bce2e6b3e732a768dbc6c501c21b203b270a04796f006ec77"} Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.568422 5043 scope.go:117] "RemoveContainer" containerID="99d9862acf9ebb43a1e1f646a629df9df62399fd2ca35f2aacd2de58324e94af" Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.600196 5043 scope.go:117] "RemoveContainer" containerID="3f09cb35b8c96d8b1b5edb093cc737ca227bf76e6c5dbaa510c97cd34021d0dc" Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.602596 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vw2w2"] Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.612106 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vw2w2"] Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.627642 5043 scope.go:117] "RemoveContainer" containerID="0da9cd570dbd05fbedb811197b53f7a33d0f6052c8321d6dc9385c5f22b5f04a" Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.684536 5043 scope.go:117] "RemoveContainer" containerID="99d9862acf9ebb43a1e1f646a629df9df62399fd2ca35f2aacd2de58324e94af" Nov 25 08:33:16 crc kubenswrapper[5043]: E1125 08:33:16.689447 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99d9862acf9ebb43a1e1f646a629df9df62399fd2ca35f2aacd2de58324e94af\": container with ID starting with 99d9862acf9ebb43a1e1f646a629df9df62399fd2ca35f2aacd2de58324e94af not found: ID does not exist" containerID="99d9862acf9ebb43a1e1f646a629df9df62399fd2ca35f2aacd2de58324e94af" Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.689506 
5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99d9862acf9ebb43a1e1f646a629df9df62399fd2ca35f2aacd2de58324e94af"} err="failed to get container status \"99d9862acf9ebb43a1e1f646a629df9df62399fd2ca35f2aacd2de58324e94af\": rpc error: code = NotFound desc = could not find container \"99d9862acf9ebb43a1e1f646a629df9df62399fd2ca35f2aacd2de58324e94af\": container with ID starting with 99d9862acf9ebb43a1e1f646a629df9df62399fd2ca35f2aacd2de58324e94af not found: ID does not exist" Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.689540 5043 scope.go:117] "RemoveContainer" containerID="3f09cb35b8c96d8b1b5edb093cc737ca227bf76e6c5dbaa510c97cd34021d0dc" Nov 25 08:33:16 crc kubenswrapper[5043]: E1125 08:33:16.689903 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f09cb35b8c96d8b1b5edb093cc737ca227bf76e6c5dbaa510c97cd34021d0dc\": container with ID starting with 3f09cb35b8c96d8b1b5edb093cc737ca227bf76e6c5dbaa510c97cd34021d0dc not found: ID does not exist" containerID="3f09cb35b8c96d8b1b5edb093cc737ca227bf76e6c5dbaa510c97cd34021d0dc" Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.689932 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f09cb35b8c96d8b1b5edb093cc737ca227bf76e6c5dbaa510c97cd34021d0dc"} err="failed to get container status \"3f09cb35b8c96d8b1b5edb093cc737ca227bf76e6c5dbaa510c97cd34021d0dc\": rpc error: code = NotFound desc = could not find container \"3f09cb35b8c96d8b1b5edb093cc737ca227bf76e6c5dbaa510c97cd34021d0dc\": container with ID starting with 3f09cb35b8c96d8b1b5edb093cc737ca227bf76e6c5dbaa510c97cd34021d0dc not found: ID does not exist" Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.689951 5043 scope.go:117] "RemoveContainer" containerID="0da9cd570dbd05fbedb811197b53f7a33d0f6052c8321d6dc9385c5f22b5f04a" Nov 25 08:33:16 crc kubenswrapper[5043]: E1125 
08:33:16.690403 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0da9cd570dbd05fbedb811197b53f7a33d0f6052c8321d6dc9385c5f22b5f04a\": container with ID starting with 0da9cd570dbd05fbedb811197b53f7a33d0f6052c8321d6dc9385c5f22b5f04a not found: ID does not exist" containerID="0da9cd570dbd05fbedb811197b53f7a33d0f6052c8321d6dc9385c5f22b5f04a" Nov 25 08:33:16 crc kubenswrapper[5043]: I1125 08:33:16.690461 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0da9cd570dbd05fbedb811197b53f7a33d0f6052c8321d6dc9385c5f22b5f04a"} err="failed to get container status \"0da9cd570dbd05fbedb811197b53f7a33d0f6052c8321d6dc9385c5f22b5f04a\": rpc error: code = NotFound desc = could not find container \"0da9cd570dbd05fbedb811197b53f7a33d0f6052c8321d6dc9385c5f22b5f04a\": container with ID starting with 0da9cd570dbd05fbedb811197b53f7a33d0f6052c8321d6dc9385c5f22b5f04a not found: ID does not exist" Nov 25 08:33:18 crc kubenswrapper[5043]: I1125 08:33:18.132762 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bccfc32e-7fea-4970-aef8-c20a84a64512" path="/var/lib/kubelet/pods/bccfc32e-7fea-4970-aef8-c20a84a64512/volumes" Nov 25 08:34:47 crc kubenswrapper[5043]: I1125 08:34:47.275838 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:34:47 crc kubenswrapper[5043]: I1125 08:34:47.276405 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 25 08:35:17 crc kubenswrapper[5043]: I1125 08:35:17.276741 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:35:17 crc kubenswrapper[5043]: I1125 08:35:17.277303 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:35:47 crc kubenswrapper[5043]: I1125 08:35:47.276123 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:35:47 crc kubenswrapper[5043]: I1125 08:35:47.276669 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:35:47 crc kubenswrapper[5043]: I1125 08:35:47.276722 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 08:35:47 crc kubenswrapper[5043]: I1125 08:35:47.277405 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7debc521cf0d1b4377e91cdb36a8856875226c6572dfb0a6d374f4af3231143d"} 
pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:35:47 crc kubenswrapper[5043]: I1125 08:35:47.277448 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://7debc521cf0d1b4377e91cdb36a8856875226c6572dfb0a6d374f4af3231143d" gracePeriod=600 Nov 25 08:35:47 crc kubenswrapper[5043]: I1125 08:35:47.448849 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="7debc521cf0d1b4377e91cdb36a8856875226c6572dfb0a6d374f4af3231143d" exitCode=0 Nov 25 08:35:47 crc kubenswrapper[5043]: I1125 08:35:47.448924 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"7debc521cf0d1b4377e91cdb36a8856875226c6572dfb0a6d374f4af3231143d"} Nov 25 08:35:47 crc kubenswrapper[5043]: I1125 08:35:47.449291 5043 scope.go:117] "RemoveContainer" containerID="13e90a48695d69299caced6d442dac969a4ddd344973bf8cd66c86c7e868e548" Nov 25 08:35:48 crc kubenswrapper[5043]: I1125 08:35:48.459320 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1"} Nov 25 08:37:47 crc kubenswrapper[5043]: I1125 08:37:47.276624 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 25 08:37:47 crc kubenswrapper[5043]: I1125 08:37:47.277153 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:37:50 crc kubenswrapper[5043]: I1125 08:37:50.193705 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tdcqr"] Nov 25 08:37:50 crc kubenswrapper[5043]: E1125 08:37:50.194379 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bccfc32e-7fea-4970-aef8-c20a84a64512" containerName="extract-content" Nov 25 08:37:50 crc kubenswrapper[5043]: I1125 08:37:50.194391 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="bccfc32e-7fea-4970-aef8-c20a84a64512" containerName="extract-content" Nov 25 08:37:50 crc kubenswrapper[5043]: E1125 08:37:50.194413 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bccfc32e-7fea-4970-aef8-c20a84a64512" containerName="extract-utilities" Nov 25 08:37:50 crc kubenswrapper[5043]: I1125 08:37:50.194419 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="bccfc32e-7fea-4970-aef8-c20a84a64512" containerName="extract-utilities" Nov 25 08:37:50 crc kubenswrapper[5043]: E1125 08:37:50.194436 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bccfc32e-7fea-4970-aef8-c20a84a64512" containerName="registry-server" Nov 25 08:37:50 crc kubenswrapper[5043]: I1125 08:37:50.194443 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="bccfc32e-7fea-4970-aef8-c20a84a64512" containerName="registry-server" Nov 25 08:37:50 crc kubenswrapper[5043]: I1125 08:37:50.194642 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="bccfc32e-7fea-4970-aef8-c20a84a64512" containerName="registry-server" Nov 25 08:37:50 crc 
kubenswrapper[5043]: I1125 08:37:50.195897 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tdcqr" Nov 25 08:37:50 crc kubenswrapper[5043]: I1125 08:37:50.242771 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tdcqr"] Nov 25 08:37:50 crc kubenswrapper[5043]: I1125 08:37:50.285970 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a75b8f7-9163-442a-b480-69ec985e541d-utilities\") pod \"redhat-marketplace-tdcqr\" (UID: \"4a75b8f7-9163-442a-b480-69ec985e541d\") " pod="openshift-marketplace/redhat-marketplace-tdcqr" Nov 25 08:37:50 crc kubenswrapper[5043]: I1125 08:37:50.286067 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a75b8f7-9163-442a-b480-69ec985e541d-catalog-content\") pod \"redhat-marketplace-tdcqr\" (UID: \"4a75b8f7-9163-442a-b480-69ec985e541d\") " pod="openshift-marketplace/redhat-marketplace-tdcqr" Nov 25 08:37:50 crc kubenswrapper[5043]: I1125 08:37:50.286116 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scckc\" (UniqueName: \"kubernetes.io/projected/4a75b8f7-9163-442a-b480-69ec985e541d-kube-api-access-scckc\") pod \"redhat-marketplace-tdcqr\" (UID: \"4a75b8f7-9163-442a-b480-69ec985e541d\") " pod="openshift-marketplace/redhat-marketplace-tdcqr" Nov 25 08:37:50 crc kubenswrapper[5043]: I1125 08:37:50.387977 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a75b8f7-9163-442a-b480-69ec985e541d-catalog-content\") pod \"redhat-marketplace-tdcqr\" (UID: \"4a75b8f7-9163-442a-b480-69ec985e541d\") " pod="openshift-marketplace/redhat-marketplace-tdcqr" Nov 25 08:37:50 
crc kubenswrapper[5043]: I1125 08:37:50.388046 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scckc\" (UniqueName: \"kubernetes.io/projected/4a75b8f7-9163-442a-b480-69ec985e541d-kube-api-access-scckc\") pod \"redhat-marketplace-tdcqr\" (UID: \"4a75b8f7-9163-442a-b480-69ec985e541d\") " pod="openshift-marketplace/redhat-marketplace-tdcqr" Nov 25 08:37:50 crc kubenswrapper[5043]: I1125 08:37:50.388225 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a75b8f7-9163-442a-b480-69ec985e541d-utilities\") pod \"redhat-marketplace-tdcqr\" (UID: \"4a75b8f7-9163-442a-b480-69ec985e541d\") " pod="openshift-marketplace/redhat-marketplace-tdcqr" Nov 25 08:37:50 crc kubenswrapper[5043]: I1125 08:37:50.388959 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a75b8f7-9163-442a-b480-69ec985e541d-catalog-content\") pod \"redhat-marketplace-tdcqr\" (UID: \"4a75b8f7-9163-442a-b480-69ec985e541d\") " pod="openshift-marketplace/redhat-marketplace-tdcqr" Nov 25 08:37:50 crc kubenswrapper[5043]: I1125 08:37:50.389071 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a75b8f7-9163-442a-b480-69ec985e541d-utilities\") pod \"redhat-marketplace-tdcqr\" (UID: \"4a75b8f7-9163-442a-b480-69ec985e541d\") " pod="openshift-marketplace/redhat-marketplace-tdcqr" Nov 25 08:37:50 crc kubenswrapper[5043]: I1125 08:37:50.411131 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scckc\" (UniqueName: \"kubernetes.io/projected/4a75b8f7-9163-442a-b480-69ec985e541d-kube-api-access-scckc\") pod \"redhat-marketplace-tdcqr\" (UID: \"4a75b8f7-9163-442a-b480-69ec985e541d\") " pod="openshift-marketplace/redhat-marketplace-tdcqr" Nov 25 08:37:50 crc kubenswrapper[5043]: I1125 08:37:50.533646 
5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tdcqr" Nov 25 08:37:51 crc kubenswrapper[5043]: I1125 08:37:51.015297 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tdcqr"] Nov 25 08:37:51 crc kubenswrapper[5043]: I1125 08:37:51.755316 5043 generic.go:334] "Generic (PLEG): container finished" podID="4a75b8f7-9163-442a-b480-69ec985e541d" containerID="880bf90204bbdb461343a50ded83928639a8b79b818e05043d1ee0a7c02f9f2b" exitCode=0 Nov 25 08:37:51 crc kubenswrapper[5043]: I1125 08:37:51.755364 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tdcqr" event={"ID":"4a75b8f7-9163-442a-b480-69ec985e541d","Type":"ContainerDied","Data":"880bf90204bbdb461343a50ded83928639a8b79b818e05043d1ee0a7c02f9f2b"} Nov 25 08:37:51 crc kubenswrapper[5043]: I1125 08:37:51.755633 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tdcqr" event={"ID":"4a75b8f7-9163-442a-b480-69ec985e541d","Type":"ContainerStarted","Data":"03d1deb48954ed8bdffd72d7c7ad8eac5b17afd3aa3968f4919a299c19d70cc4"} Nov 25 08:37:51 crc kubenswrapper[5043]: I1125 08:37:51.758392 5043 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:37:52 crc kubenswrapper[5043]: I1125 08:37:52.766641 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tdcqr" event={"ID":"4a75b8f7-9163-442a-b480-69ec985e541d","Type":"ContainerStarted","Data":"026913ed5a2f50494c506a81c53d7188a531059147351d71690396a8be978565"} Nov 25 08:37:53 crc kubenswrapper[5043]: I1125 08:37:53.777986 5043 generic.go:334] "Generic (PLEG): container finished" podID="4a75b8f7-9163-442a-b480-69ec985e541d" containerID="026913ed5a2f50494c506a81c53d7188a531059147351d71690396a8be978565" exitCode=0 Nov 25 08:37:53 crc kubenswrapper[5043]: I1125 
08:37:53.778047 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tdcqr" event={"ID":"4a75b8f7-9163-442a-b480-69ec985e541d","Type":"ContainerDied","Data":"026913ed5a2f50494c506a81c53d7188a531059147351d71690396a8be978565"} Nov 25 08:37:54 crc kubenswrapper[5043]: I1125 08:37:54.787994 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tdcqr" event={"ID":"4a75b8f7-9163-442a-b480-69ec985e541d","Type":"ContainerStarted","Data":"1fdeb1af9088ebe6d698b0132ab9ec9502c93f1820772a04984365cccfc1532a"} Nov 25 08:37:54 crc kubenswrapper[5043]: I1125 08:37:54.814438 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tdcqr" podStartSLOduration=2.082804933 podStartE2EDuration="4.814419332s" podCreationTimestamp="2025-11-25 08:37:50 +0000 UTC" firstStartedPulling="2025-11-25 08:37:51.757908693 +0000 UTC m=+4935.926104414" lastFinishedPulling="2025-11-25 08:37:54.489523082 +0000 UTC m=+4938.657718813" observedRunningTime="2025-11-25 08:37:54.809121819 +0000 UTC m=+4938.977317550" watchObservedRunningTime="2025-11-25 08:37:54.814419332 +0000 UTC m=+4938.982615053" Nov 25 08:38:00 crc kubenswrapper[5043]: I1125 08:38:00.534248 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tdcqr" Nov 25 08:38:00 crc kubenswrapper[5043]: I1125 08:38:00.536744 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tdcqr" Nov 25 08:38:00 crc kubenswrapper[5043]: I1125 08:38:00.590664 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tdcqr" Nov 25 08:38:00 crc kubenswrapper[5043]: I1125 08:38:00.894709 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tdcqr" Nov 25 
08:38:00 crc kubenswrapper[5043]: I1125 08:38:00.943046 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tdcqr"] Nov 25 08:38:02 crc kubenswrapper[5043]: I1125 08:38:02.861964 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tdcqr" podUID="4a75b8f7-9163-442a-b480-69ec985e541d" containerName="registry-server" containerID="cri-o://1fdeb1af9088ebe6d698b0132ab9ec9502c93f1820772a04984365cccfc1532a" gracePeriod=2 Nov 25 08:38:03 crc kubenswrapper[5043]: I1125 08:38:03.876276 5043 generic.go:334] "Generic (PLEG): container finished" podID="4a75b8f7-9163-442a-b480-69ec985e541d" containerID="1fdeb1af9088ebe6d698b0132ab9ec9502c93f1820772a04984365cccfc1532a" exitCode=0 Nov 25 08:38:03 crc kubenswrapper[5043]: I1125 08:38:03.876328 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tdcqr" event={"ID":"4a75b8f7-9163-442a-b480-69ec985e541d","Type":"ContainerDied","Data":"1fdeb1af9088ebe6d698b0132ab9ec9502c93f1820772a04984365cccfc1532a"} Nov 25 08:38:04 crc kubenswrapper[5043]: I1125 08:38:04.214912 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tdcqr" Nov 25 08:38:04 crc kubenswrapper[5043]: I1125 08:38:04.263438 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a75b8f7-9163-442a-b480-69ec985e541d-utilities\") pod \"4a75b8f7-9163-442a-b480-69ec985e541d\" (UID: \"4a75b8f7-9163-442a-b480-69ec985e541d\") " Nov 25 08:38:04 crc kubenswrapper[5043]: I1125 08:38:04.263783 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scckc\" (UniqueName: \"kubernetes.io/projected/4a75b8f7-9163-442a-b480-69ec985e541d-kube-api-access-scckc\") pod \"4a75b8f7-9163-442a-b480-69ec985e541d\" (UID: \"4a75b8f7-9163-442a-b480-69ec985e541d\") " Nov 25 08:38:04 crc kubenswrapper[5043]: I1125 08:38:04.263937 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a75b8f7-9163-442a-b480-69ec985e541d-catalog-content\") pod \"4a75b8f7-9163-442a-b480-69ec985e541d\" (UID: \"4a75b8f7-9163-442a-b480-69ec985e541d\") " Nov 25 08:38:04 crc kubenswrapper[5043]: I1125 08:38:04.265751 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a75b8f7-9163-442a-b480-69ec985e541d-utilities" (OuterVolumeSpecName: "utilities") pod "4a75b8f7-9163-442a-b480-69ec985e541d" (UID: "4a75b8f7-9163-442a-b480-69ec985e541d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:38:04 crc kubenswrapper[5043]: I1125 08:38:04.272891 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a75b8f7-9163-442a-b480-69ec985e541d-kube-api-access-scckc" (OuterVolumeSpecName: "kube-api-access-scckc") pod "4a75b8f7-9163-442a-b480-69ec985e541d" (UID: "4a75b8f7-9163-442a-b480-69ec985e541d"). InnerVolumeSpecName "kube-api-access-scckc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:38:04 crc kubenswrapper[5043]: I1125 08:38:04.290220 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a75b8f7-9163-442a-b480-69ec985e541d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a75b8f7-9163-442a-b480-69ec985e541d" (UID: "4a75b8f7-9163-442a-b480-69ec985e541d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:38:04 crc kubenswrapper[5043]: I1125 08:38:04.366868 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scckc\" (UniqueName: \"kubernetes.io/projected/4a75b8f7-9163-442a-b480-69ec985e541d-kube-api-access-scckc\") on node \"crc\" DevicePath \"\"" Nov 25 08:38:04 crc kubenswrapper[5043]: I1125 08:38:04.366902 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a75b8f7-9163-442a-b480-69ec985e541d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:38:04 crc kubenswrapper[5043]: I1125 08:38:04.366911 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a75b8f7-9163-442a-b480-69ec985e541d-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:38:04 crc kubenswrapper[5043]: I1125 08:38:04.886420 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tdcqr" event={"ID":"4a75b8f7-9163-442a-b480-69ec985e541d","Type":"ContainerDied","Data":"03d1deb48954ed8bdffd72d7c7ad8eac5b17afd3aa3968f4919a299c19d70cc4"} Nov 25 08:38:04 crc kubenswrapper[5043]: I1125 08:38:04.886483 5043 scope.go:117] "RemoveContainer" containerID="1fdeb1af9088ebe6d698b0132ab9ec9502c93f1820772a04984365cccfc1532a" Nov 25 08:38:04 crc kubenswrapper[5043]: I1125 08:38:04.886501 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tdcqr" Nov 25 08:38:04 crc kubenswrapper[5043]: I1125 08:38:04.907188 5043 scope.go:117] "RemoveContainer" containerID="026913ed5a2f50494c506a81c53d7188a531059147351d71690396a8be978565" Nov 25 08:38:04 crc kubenswrapper[5043]: I1125 08:38:04.926236 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tdcqr"] Nov 25 08:38:04 crc kubenswrapper[5043]: I1125 08:38:04.941466 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tdcqr"] Nov 25 08:38:04 crc kubenswrapper[5043]: I1125 08:38:04.945862 5043 scope.go:117] "RemoveContainer" containerID="880bf90204bbdb461343a50ded83928639a8b79b818e05043d1ee0a7c02f9f2b" Nov 25 08:38:04 crc kubenswrapper[5043]: I1125 08:38:04.975098 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a75b8f7-9163-442a-b480-69ec985e541d" path="/var/lib/kubelet/pods/4a75b8f7-9163-442a-b480-69ec985e541d/volumes" Nov 25 08:38:17 crc kubenswrapper[5043]: I1125 08:38:17.276123 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:38:17 crc kubenswrapper[5043]: I1125 08:38:17.276663 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:38:47 crc kubenswrapper[5043]: I1125 08:38:47.276308 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:38:47 crc kubenswrapper[5043]: I1125 08:38:47.277005 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:38:47 crc kubenswrapper[5043]: I1125 08:38:47.277075 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 08:38:47 crc kubenswrapper[5043]: I1125 08:38:47.277946 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1"} pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:38:47 crc kubenswrapper[5043]: I1125 08:38:47.278050 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1" gracePeriod=600 Nov 25 08:38:47 crc kubenswrapper[5043]: E1125 08:38:47.413096 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:38:48 crc kubenswrapper[5043]: I1125 08:38:48.266375 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1" exitCode=0 Nov 25 08:38:48 crc kubenswrapper[5043]: I1125 08:38:48.266466 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1"} Nov 25 08:38:48 crc kubenswrapper[5043]: I1125 08:38:48.266822 5043 scope.go:117] "RemoveContainer" containerID="7debc521cf0d1b4377e91cdb36a8856875226c6572dfb0a6d374f4af3231143d" Nov 25 08:38:48 crc kubenswrapper[5043]: I1125 08:38:48.270575 5043 scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1" Nov 25 08:38:48 crc kubenswrapper[5043]: E1125 08:38:48.271290 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:38:49 crc kubenswrapper[5043]: I1125 08:38:49.918369 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 08:38:49 crc kubenswrapper[5043]: E1125 08:38:49.919131 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a75b8f7-9163-442a-b480-69ec985e541d" containerName="extract-content" Nov 25 08:38:49 crc kubenswrapper[5043]: I1125 08:38:49.919147 5043 
state_mem.go:107] "Deleted CPUSet assignment" podUID="4a75b8f7-9163-442a-b480-69ec985e541d" containerName="extract-content" Nov 25 08:38:49 crc kubenswrapper[5043]: E1125 08:38:49.919173 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a75b8f7-9163-442a-b480-69ec985e541d" containerName="extract-utilities" Nov 25 08:38:49 crc kubenswrapper[5043]: I1125 08:38:49.919182 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a75b8f7-9163-442a-b480-69ec985e541d" containerName="extract-utilities" Nov 25 08:38:49 crc kubenswrapper[5043]: E1125 08:38:49.919222 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a75b8f7-9163-442a-b480-69ec985e541d" containerName="registry-server" Nov 25 08:38:49 crc kubenswrapper[5043]: I1125 08:38:49.919230 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a75b8f7-9163-442a-b480-69ec985e541d" containerName="registry-server" Nov 25 08:38:49 crc kubenswrapper[5043]: I1125 08:38:49.919459 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a75b8f7-9163-442a-b480-69ec985e541d" containerName="registry-server" Nov 25 08:38:49 crc kubenswrapper[5043]: I1125 08:38:49.920245 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 08:38:49 crc kubenswrapper[5043]: I1125 08:38:49.922864 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 08:38:49 crc kubenswrapper[5043]: I1125 08:38:49.926409 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 08:38:49 crc kubenswrapper[5043]: I1125 08:38:49.958388 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 08:38:50 crc kubenswrapper[5043]: I1125 08:38:50.018168 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1562f0e-a561-4732-a265-e4d582e6aa8e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b1562f0e-a561-4732-a265-e4d582e6aa8e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 08:38:50 crc kubenswrapper[5043]: I1125 08:38:50.018301 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1562f0e-a561-4732-a265-e4d582e6aa8e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b1562f0e-a561-4732-a265-e4d582e6aa8e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 08:38:50 crc kubenswrapper[5043]: I1125 08:38:50.120042 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1562f0e-a561-4732-a265-e4d582e6aa8e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b1562f0e-a561-4732-a265-e4d582e6aa8e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 08:38:50 crc kubenswrapper[5043]: I1125 08:38:50.120204 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/b1562f0e-a561-4732-a265-e4d582e6aa8e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b1562f0e-a561-4732-a265-e4d582e6aa8e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 08:38:50 crc kubenswrapper[5043]: I1125 08:38:50.120362 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1562f0e-a561-4732-a265-e4d582e6aa8e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b1562f0e-a561-4732-a265-e4d582e6aa8e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 08:38:50 crc kubenswrapper[5043]: I1125 08:38:50.141537 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1562f0e-a561-4732-a265-e4d582e6aa8e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b1562f0e-a561-4732-a265-e4d582e6aa8e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 08:38:50 crc kubenswrapper[5043]: I1125 08:38:50.252453 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 08:38:50 crc kubenswrapper[5043]: I1125 08:38:50.704638 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 08:38:51 crc kubenswrapper[5043]: I1125 08:38:51.308557 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b1562f0e-a561-4732-a265-e4d582e6aa8e","Type":"ContainerStarted","Data":"07a00d61d1f909eb20a828fd50b8bfdd3b4734cd9d572e30b792a9335b942a9b"} Nov 25 08:38:52 crc kubenswrapper[5043]: I1125 08:38:52.318388 5043 generic.go:334] "Generic (PLEG): container finished" podID="b1562f0e-a561-4732-a265-e4d582e6aa8e" containerID="d74df861e3b04e304409697479815b21e1b655869b3eb65a7d6b92c369ea3c71" exitCode=0 Nov 25 08:38:52 crc kubenswrapper[5043]: I1125 08:38:52.318469 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b1562f0e-a561-4732-a265-e4d582e6aa8e","Type":"ContainerDied","Data":"d74df861e3b04e304409697479815b21e1b655869b3eb65a7d6b92c369ea3c71"} Nov 25 08:38:53 crc kubenswrapper[5043]: I1125 08:38:53.836712 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 08:38:54 crc kubenswrapper[5043]: I1125 08:38:54.000525 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1562f0e-a561-4732-a265-e4d582e6aa8e-kube-api-access\") pod \"b1562f0e-a561-4732-a265-e4d582e6aa8e\" (UID: \"b1562f0e-a561-4732-a265-e4d582e6aa8e\") " Nov 25 08:38:54 crc kubenswrapper[5043]: I1125 08:38:54.000647 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1562f0e-a561-4732-a265-e4d582e6aa8e-kubelet-dir\") pod \"b1562f0e-a561-4732-a265-e4d582e6aa8e\" (UID: \"b1562f0e-a561-4732-a265-e4d582e6aa8e\") " Nov 25 08:38:54 crc kubenswrapper[5043]: I1125 08:38:54.000750 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1562f0e-a561-4732-a265-e4d582e6aa8e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b1562f0e-a561-4732-a265-e4d582e6aa8e" (UID: "b1562f0e-a561-4732-a265-e4d582e6aa8e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:38:54 crc kubenswrapper[5043]: I1125 08:38:54.001183 5043 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1562f0e-a561-4732-a265-e4d582e6aa8e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 08:38:54 crc kubenswrapper[5043]: I1125 08:38:54.020306 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1562f0e-a561-4732-a265-e4d582e6aa8e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b1562f0e-a561-4732-a265-e4d582e6aa8e" (UID: "b1562f0e-a561-4732-a265-e4d582e6aa8e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:38:54 crc kubenswrapper[5043]: I1125 08:38:54.102687 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1562f0e-a561-4732-a265-e4d582e6aa8e-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 08:38:54 crc kubenswrapper[5043]: I1125 08:38:54.337899 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b1562f0e-a561-4732-a265-e4d582e6aa8e","Type":"ContainerDied","Data":"07a00d61d1f909eb20a828fd50b8bfdd3b4734cd9d572e30b792a9335b942a9b"} Nov 25 08:38:54 crc kubenswrapper[5043]: I1125 08:38:54.338236 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07a00d61d1f909eb20a828fd50b8bfdd3b4734cd9d572e30b792a9335b942a9b" Nov 25 08:38:54 crc kubenswrapper[5043]: I1125 08:38:54.337989 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 08:38:56 crc kubenswrapper[5043]: I1125 08:38:56.900643 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 08:38:56 crc kubenswrapper[5043]: E1125 08:38:56.901633 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1562f0e-a561-4732-a265-e4d582e6aa8e" containerName="pruner" Nov 25 08:38:56 crc kubenswrapper[5043]: I1125 08:38:56.901650 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1562f0e-a561-4732-a265-e4d582e6aa8e" containerName="pruner" Nov 25 08:38:56 crc kubenswrapper[5043]: I1125 08:38:56.901882 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1562f0e-a561-4732-a265-e4d582e6aa8e" containerName="pruner" Nov 25 08:38:56 crc kubenswrapper[5043]: I1125 08:38:56.902497 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 08:38:56 crc kubenswrapper[5043]: I1125 08:38:56.904520 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 08:38:56 crc kubenswrapper[5043]: I1125 08:38:56.910374 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 08:38:56 crc kubenswrapper[5043]: I1125 08:38:56.912707 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 08:38:57 crc kubenswrapper[5043]: I1125 08:38:57.071792 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c-kube-api-access\") pod \"installer-9-crc\" (UID: \"d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 08:38:57 crc kubenswrapper[5043]: I1125 08:38:57.071986 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c-var-lock\") pod \"installer-9-crc\" (UID: \"d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 08:38:57 crc kubenswrapper[5043]: I1125 08:38:57.072042 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 08:38:57 crc kubenswrapper[5043]: I1125 08:38:57.174629 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c-kube-api-access\") pod \"installer-9-crc\" (UID: \"d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 08:38:57 crc kubenswrapper[5043]: I1125 08:38:57.174983 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c-var-lock\") pod \"installer-9-crc\" (UID: \"d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 08:38:57 crc kubenswrapper[5043]: I1125 08:38:57.175871 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c-var-lock\") pod \"installer-9-crc\" (UID: \"d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 08:38:57 crc kubenswrapper[5043]: I1125 08:38:57.175968 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 08:38:57 crc kubenswrapper[5043]: I1125 08:38:57.176098 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 08:38:57 crc kubenswrapper[5043]: I1125 08:38:57.563513 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c-kube-api-access\") pod \"installer-9-crc\" (UID: \"d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c\") " 
pod="openshift-kube-apiserver/installer-9-crc" Nov 25 08:38:57 crc kubenswrapper[5043]: I1125 08:38:57.854211 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 08:38:58 crc kubenswrapper[5043]: I1125 08:38:58.392864 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 08:38:59 crc kubenswrapper[5043]: I1125 08:38:59.388734 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c","Type":"ContainerStarted","Data":"8c2d4b55fa75514cecc1ed31a223d5529662d25712a950d29a82f436f7e51324"} Nov 25 08:38:59 crc kubenswrapper[5043]: I1125 08:38:59.389165 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c","Type":"ContainerStarted","Data":"e467340c7e92254be0106f0ebf0c131f841d9ef0883f2abf1ba0bcbe5150221c"} Nov 25 08:38:59 crc kubenswrapper[5043]: I1125 08:38:59.405999 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.405982974 podStartE2EDuration="3.405982974s" podCreationTimestamp="2025-11-25 08:38:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:38:59.400548747 +0000 UTC m=+5003.568744468" watchObservedRunningTime="2025-11-25 08:38:59.405982974 +0000 UTC m=+5003.574178685" Nov 25 08:38:59 crc kubenswrapper[5043]: I1125 08:38:59.977203 5043 scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1" Nov 25 08:38:59 crc kubenswrapper[5043]: E1125 08:38:59.979072 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:39:13 crc kubenswrapper[5043]: I1125 08:39:13.962908 5043 scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1" Nov 25 08:39:13 crc kubenswrapper[5043]: E1125 08:39:13.963783 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:39:24 crc kubenswrapper[5043]: I1125 08:39:24.962992 5043 scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1" Nov 25 08:39:24 crc kubenswrapper[5043]: E1125 08:39:24.963773 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.896557 5043 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.898268 5043 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 08:39:36 
crc kubenswrapper[5043]: I1125 08:39:36.898489 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f" gracePeriod=15 Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.898591 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982" gracePeriod=15 Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.898648 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df" gracePeriod=15 Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.898681 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915" gracePeriod=15 Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.898812 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.898925 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6" gracePeriod=15 Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.899192 5043 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 08:39:36 crc kubenswrapper[5043]: E1125 08:39:36.899542 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.899571 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 08:39:36 crc kubenswrapper[5043]: E1125 08:39:36.899587 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.899594 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 08:39:36 crc kubenswrapper[5043]: E1125 08:39:36.899625 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.899633 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 08:39:36 crc kubenswrapper[5043]: E1125 08:39:36.899646 5043 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.899652 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 08:39:36 crc kubenswrapper[5043]: E1125 08:39:36.899662 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.899667 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 08:39:36 crc kubenswrapper[5043]: E1125 08:39:36.899677 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.899702 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 25 08:39:36 crc kubenswrapper[5043]: E1125 08:39:36.899712 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.899717 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 08:39:36 crc kubenswrapper[5043]: E1125 08:39:36.899733 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.899738 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.899985 5043 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.900027 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.900043 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.900058 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.900066 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.900075 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.900108 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.943886 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.968788 5043 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: 
connect: connection refused" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.969165 5043 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:36 crc kubenswrapper[5043]: E1125 08:39:36.987805 5043 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/mysql-db-openstack-galera-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/mysql-db-openstack-galera-0\": dial tcp 38.102.83.162:6443: connect: connection refused" pod="openstack/openstack-galera-0" volumeName="mysql-db" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.988700 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.988833 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.988937 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.988976 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.989123 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.989182 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.989216 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 08:39:36 crc kubenswrapper[5043]: I1125 08:39:36.989277 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.090194 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.090270 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.090301 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.090316 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.090346 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.090377 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.090401 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.090385 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.090447 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.090476 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.090509 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.090532 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.090535 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.090592 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.090630 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.090632 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.245252 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 08:39:37 crc kubenswrapper[5043]: W1125 08:39:37.463477 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-59eee6ed98e18191bb512d094074cc0fef5b5f8f90690b32ebabe07d6a0a036f WatchSource:0}: Error finding container 59eee6ed98e18191bb512d094074cc0fef5b5f8f90690b32ebabe07d6a0a036f: Status 404 returned error can't find the container with id 59eee6ed98e18191bb512d094074cc0fef5b5f8f90690b32ebabe07d6a0a036f Nov 25 08:39:37 crc kubenswrapper[5043]: E1125 08:39:37.467937 5043 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.162:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187b3333ff6cfec2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 08:39:37.46736301 +0000 UTC m=+5041.635558731,LastTimestamp:2025-11-25 08:39:37.46736301 +0000 UTC m=+5041.635558731,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.749187 5043 generic.go:334] "Generic (PLEG): container finished" podID="d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c" containerID="8c2d4b55fa75514cecc1ed31a223d5529662d25712a950d29a82f436f7e51324" exitCode=0 Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.749288 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c","Type":"ContainerDied","Data":"8c2d4b55fa75514cecc1ed31a223d5529662d25712a950d29a82f436f7e51324"} Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.750242 5043 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.750647 5043 status_manager.go:851] "Failed to get status for pod" podUID="d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.752551 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.754756 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.756965 5043 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6" exitCode=0 Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.757333 5043 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982" exitCode=0 Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.757050 5043 scope.go:117] "RemoveContainer" containerID="e852a63b52ac8f09ac2d32cb7b746f1997c26547021ef64b607248ed19985054" Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.757347 5043 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df" exitCode=0 Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.757501 5043 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915" exitCode=2 Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.759896 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"59eee6ed98e18191bb512d094074cc0fef5b5f8f90690b32ebabe07d6a0a036f"} Nov 25 08:39:37 crc kubenswrapper[5043]: I1125 08:39:37.962866 5043 scope.go:117] "RemoveContainer" 
containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1" Nov 25 08:39:37 crc kubenswrapper[5043]: E1125 08:39:37.963148 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:39:38 crc kubenswrapper[5043]: I1125 08:39:38.770356 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 08:39:38 crc kubenswrapper[5043]: I1125 08:39:38.772990 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8cdaf8855b52857ca11c3db0b21d1bb7428f1817f4566a29f63234cbe8abb5f8"} Nov 25 08:39:38 crc kubenswrapper[5043]: I1125 08:39:38.774944 5043 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:38 crc kubenswrapper[5043]: I1125 08:39:38.775424 5043 status_manager.go:851] "Failed to get status for pod" podUID="d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 
08:39:39.383912 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="c937dff6-4203-455c-b07a-ec16e23c746f" containerName="kube-state-metrics" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.477339 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.478781 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.480137 5043 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.480897 5043 status_manager.go:851] "Failed to get status for pod" podUID="d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.481298 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.481459 5043 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.481982 5043 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.482375 5043 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.482652 5043 status_manager.go:851] "Failed to get status for pod" podUID="d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.543348 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.543440 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.543502 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.543544 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c-kubelet-dir\") pod \"d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c\" (UID: \"d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c\") " Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.543553 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.543658 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c-kube-api-access\") pod \"d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c\" (UID: \"d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c\") " Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.543654 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c" (UID: "d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.543733 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.543851 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c-var-lock\") pod \"d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c\" (UID: \"d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c\") " Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.543878 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.543972 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c-var-lock" (OuterVolumeSpecName: "var-lock") pod "d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c" (UID: "d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.544344 5043 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.544367 5043 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.544377 5043 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c-var-lock\") on node \"crc\" DevicePath \"\"" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.544390 5043 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.544402 5043 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.550968 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c-kube-api-access" 
(OuterVolumeSpecName: "kube-api-access") pod "d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c" (UID: "d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.645222 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.783733 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.784541 5043 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f" exitCode=0 Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.784664 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.784710 5043 scope.go:117] "RemoveContainer" containerID="edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.788292 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.788490 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c","Type":"ContainerDied","Data":"e467340c7e92254be0106f0ebf0c131f841d9ef0883f2abf1ba0bcbe5150221c"} Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.788552 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e467340c7e92254be0106f0ebf0c131f841d9ef0883f2abf1ba0bcbe5150221c" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.799779 5043 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.800187 5043 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.800414 5043 status_manager.go:851] "Failed to get status for pod" podUID="d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.805205 5043 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.805518 5043 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.805953 5043 status_manager.go:851] "Failed to get status for pod" podUID="d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.811222 5043 scope.go:117] "RemoveContainer" containerID="7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.831777 5043 scope.go:117] "RemoveContainer" containerID="29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.853105 5043 scope.go:117] "RemoveContainer" containerID="4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.874689 5043 scope.go:117] "RemoveContainer" containerID="fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.904134 5043 scope.go:117] "RemoveContainer" containerID="426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.947374 5043 
scope.go:117] "RemoveContainer" containerID="edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6" Nov 25 08:39:39 crc kubenswrapper[5043]: E1125 08:39:39.947954 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6\": container with ID starting with edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6 not found: ID does not exist" containerID="edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.948002 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6"} err="failed to get container status \"edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6\": rpc error: code = NotFound desc = could not find container \"edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6\": container with ID starting with edb99f94f3dfe47b597caaf6c6a3ccecb616c1bd74c55b792256c788539169a6 not found: ID does not exist" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.948034 5043 scope.go:117] "RemoveContainer" containerID="7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982" Nov 25 08:39:39 crc kubenswrapper[5043]: E1125 08:39:39.948450 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\": container with ID starting with 7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982 not found: ID does not exist" containerID="7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.948485 5043 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982"} err="failed to get container status \"7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\": rpc error: code = NotFound desc = could not find container \"7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982\": container with ID starting with 7769d50a3f3f0349573098c3e544ba36cc72e36c5be4475fed16b5b924107982 not found: ID does not exist" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.948513 5043 scope.go:117] "RemoveContainer" containerID="29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df" Nov 25 08:39:39 crc kubenswrapper[5043]: E1125 08:39:39.948918 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\": container with ID starting with 29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df not found: ID does not exist" containerID="29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.948955 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df"} err="failed to get container status \"29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\": rpc error: code = NotFound desc = could not find container \"29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df\": container with ID starting with 29588a294b33533825c3ff526e4bff6c8c26fa6af4f25b5f2c7df1fe41c7d0df not found: ID does not exist" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.948977 5043 scope.go:117] "RemoveContainer" containerID="4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915" Nov 25 08:39:39 crc kubenswrapper[5043]: E1125 08:39:39.951103 5043 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\": container with ID starting with 4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915 not found: ID does not exist" containerID="4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.951180 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915"} err="failed to get container status \"4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\": rpc error: code = NotFound desc = could not find container \"4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915\": container with ID starting with 4304daa8b3dd6af5d40433487645381cfb63d55f01e78e018f3121625a610915 not found: ID does not exist" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.951213 5043 scope.go:117] "RemoveContainer" containerID="fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f" Nov 25 08:39:39 crc kubenswrapper[5043]: E1125 08:39:39.951562 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\": container with ID starting with fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f not found: ID does not exist" containerID="fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.951632 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f"} err="failed to get container status \"fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\": rpc error: code = NotFound desc = could not find container 
\"fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f\": container with ID starting with fdfef409a128c3275286d56fe3b45e9faa8fb9166dca68e12d7daafc41bacd7f not found: ID does not exist" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.951836 5043 scope.go:117] "RemoveContainer" containerID="426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc" Nov 25 08:39:39 crc kubenswrapper[5043]: E1125 08:39:39.952151 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\": container with ID starting with 426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc not found: ID does not exist" containerID="426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc" Nov 25 08:39:39 crc kubenswrapper[5043]: I1125 08:39:39.952203 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc"} err="failed to get container status \"426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\": rpc error: code = NotFound desc = could not find container \"426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc\": container with ID starting with 426feb903362ad01343271e37eb0173fbead18700587cb92ca0cd019beac79dc not found: ID does not exist" Nov 25 08:39:40 crc kubenswrapper[5043]: I1125 08:39:40.973773 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Nov 25 08:39:46 crc kubenswrapper[5043]: E1125 08:39:46.505757 5043 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:46 crc 
kubenswrapper[5043]: E1125 08:39:46.507689 5043 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:46 crc kubenswrapper[5043]: E1125 08:39:46.508354 5043 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:46 crc kubenswrapper[5043]: E1125 08:39:46.508785 5043 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:46 crc kubenswrapper[5043]: E1125 08:39:46.509285 5043 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:46 crc kubenswrapper[5043]: I1125 08:39:46.509326 5043 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 25 08:39:46 crc kubenswrapper[5043]: E1125 08:39:46.509671 5043 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="200ms" Nov 25 08:39:46 crc kubenswrapper[5043]: E1125 08:39:46.651672 5043 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.162:6443: connect: connection refused" 
event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187b3333ff6cfec2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 08:39:37.46736301 +0000 UTC m=+5041.635558731,LastTimestamp:2025-11-25 08:39:37.46736301 +0000 UTC m=+5041.635558731,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 08:39:46 crc kubenswrapper[5043]: E1125 08:39:46.710849 5043 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="400ms" Nov 25 08:39:46 crc kubenswrapper[5043]: I1125 08:39:46.969901 5043 status_manager.go:851] "Failed to get status for pod" podUID="d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:46 crc kubenswrapper[5043]: I1125 08:39:46.970219 5043 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": 
dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:46 crc kubenswrapper[5043]: E1125 08:39:46.992240 5043 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/glance-glance-default-internal-api-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/glance-glance-default-internal-api-0\": dial tcp 38.102.83.162:6443: connect: connection refused" pod="openstack/glance-default-internal-api-0" volumeName="glance" Nov 25 08:39:47 crc kubenswrapper[5043]: E1125 08:39:47.111950 5043 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="800ms" Nov 25 08:39:47 crc kubenswrapper[5043]: E1125 08:39:47.912865 5043 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="1.6s" Nov 25 08:39:48 crc kubenswrapper[5043]: I1125 08:39:48.860181 5043 generic.go:334] "Generic (PLEG): container finished" podID="8cfc66d8-27da-4bce-9a5f-62a019bfd836" containerID="a601a2924b9d636ce78e06cb1143c518646cfa1be008e75927a842d86036405f" exitCode=1 Nov 25 08:39:48 crc kubenswrapper[5043]: I1125 08:39:48.860270 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw" event={"ID":"8cfc66d8-27da-4bce-9a5f-62a019bfd836","Type":"ContainerDied","Data":"a601a2924b9d636ce78e06cb1143c518646cfa1be008e75927a842d86036405f"} Nov 25 08:39:48 crc kubenswrapper[5043]: I1125 08:39:48.861144 5043 scope.go:117] "RemoveContainer" containerID="a601a2924b9d636ce78e06cb1143c518646cfa1be008e75927a842d86036405f" 
Nov 25 08:39:48 crc kubenswrapper[5043]: I1125 08:39:48.861431 5043 status_manager.go:851] "Failed to get status for pod" podUID="8cfc66d8-27da-4bce-9a5f-62a019bfd836" pod="openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-556c5c9c9c-82qgw\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:48 crc kubenswrapper[5043]: I1125 08:39:48.861907 5043 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:48 crc kubenswrapper[5043]: I1125 08:39:48.862132 5043 status_manager.go:851] "Failed to get status for pod" podUID="d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:48 crc kubenswrapper[5043]: I1125 08:39:48.862344 5043 generic.go:334] "Generic (PLEG): container finished" podID="cdbab2e0-494c-4845-a500-88b26934f1c7" containerID="f87486126cd112c0054b9155c69bde370467d4d74ee81db0a0ba7dd0605b790a" exitCode=1 Nov 25 08:39:48 crc kubenswrapper[5043]: I1125 08:39:48.862384 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" event={"ID":"cdbab2e0-494c-4845-a500-88b26934f1c7","Type":"ContainerDied","Data":"f87486126cd112c0054b9155c69bde370467d4d74ee81db0a0ba7dd0605b790a"} Nov 25 08:39:48 crc kubenswrapper[5043]: I1125 08:39:48.863111 5043 scope.go:117] "RemoveContainer" 
containerID="f87486126cd112c0054b9155c69bde370467d4d74ee81db0a0ba7dd0605b790a" Nov 25 08:39:48 crc kubenswrapper[5043]: I1125 08:39:48.863183 5043 status_manager.go:851] "Failed to get status for pod" podUID="8cfc66d8-27da-4bce-9a5f-62a019bfd836" pod="openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-556c5c9c9c-82qgw\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:48 crc kubenswrapper[5043]: I1125 08:39:48.863405 5043 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:48 crc kubenswrapper[5043]: I1125 08:39:48.863622 5043 status_manager.go:851] "Failed to get status for pod" podUID="cdbab2e0-494c-4845-a500-88b26934f1c7" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-85bdd6cc97-lrkkr\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:48 crc kubenswrapper[5043]: I1125 08:39:48.863989 5043 status_manager.go:851] "Failed to get status for pod" podUID="d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:49 crc kubenswrapper[5043]: I1125 08:39:49.076736 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" Nov 25 08:39:49 crc 
kubenswrapper[5043]: I1125 08:39:49.388006 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="c937dff6-4203-455c-b07a-ec16e23c746f" containerName="kube-state-metrics" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 25 08:39:49 crc kubenswrapper[5043]: E1125 08:39:49.514077 5043 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.162:6443: connect: connection refused" interval="3.2s" Nov 25 08:39:49 crc kubenswrapper[5043]: I1125 08:39:49.872691 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw" event={"ID":"8cfc66d8-27da-4bce-9a5f-62a019bfd836","Type":"ContainerStarted","Data":"c6027b7c892bfbe929485c60bb29cbf9eb2d79036911cfbbfddfd1b513838b69"} Nov 25 08:39:49 crc kubenswrapper[5043]: I1125 08:39:49.874175 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw" Nov 25 08:39:49 crc kubenswrapper[5043]: I1125 08:39:49.874290 5043 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:49 crc kubenswrapper[5043]: I1125 08:39:49.874504 5043 status_manager.go:851] "Failed to get status for pod" podUID="cdbab2e0-494c-4845-a500-88b26934f1c7" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-85bdd6cc97-lrkkr\": dial tcp 38.102.83.162:6443: 
connect: connection refused" Nov 25 08:39:49 crc kubenswrapper[5043]: I1125 08:39:49.874879 5043 status_manager.go:851] "Failed to get status for pod" podUID="d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:49 crc kubenswrapper[5043]: I1125 08:39:49.875061 5043 status_manager.go:851] "Failed to get status for pod" podUID="8cfc66d8-27da-4bce-9a5f-62a019bfd836" pod="openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-556c5c9c9c-82qgw\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:49 crc kubenswrapper[5043]: I1125 08:39:49.876696 5043 generic.go:334] "Generic (PLEG): container finished" podID="cdbab2e0-494c-4845-a500-88b26934f1c7" containerID="a66b9b201d4d0bf19d72cd0aff83a2f14a28a6983ae1fc9474a2bf452a5cfb8b" exitCode=1 Nov 25 08:39:49 crc kubenswrapper[5043]: I1125 08:39:49.876733 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" event={"ID":"cdbab2e0-494c-4845-a500-88b26934f1c7","Type":"ContainerDied","Data":"a66b9b201d4d0bf19d72cd0aff83a2f14a28a6983ae1fc9474a2bf452a5cfb8b"} Nov 25 08:39:49 crc kubenswrapper[5043]: I1125 08:39:49.876762 5043 scope.go:117] "RemoveContainer" containerID="f87486126cd112c0054b9155c69bde370467d4d74ee81db0a0ba7dd0605b790a" Nov 25 08:39:49 crc kubenswrapper[5043]: I1125 08:39:49.877261 5043 scope.go:117] "RemoveContainer" containerID="a66b9b201d4d0bf19d72cd0aff83a2f14a28a6983ae1fc9474a2bf452a5cfb8b" Nov 25 08:39:49 crc kubenswrapper[5043]: E1125 08:39:49.877493 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=metallb-operator-controller-manager-85bdd6cc97-lrkkr_metallb-system(cdbab2e0-494c-4845-a500-88b26934f1c7)\"" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" podUID="cdbab2e0-494c-4845-a500-88b26934f1c7" Nov 25 08:39:49 crc kubenswrapper[5043]: I1125 08:39:49.877869 5043 status_manager.go:851] "Failed to get status for pod" podUID="8cfc66d8-27da-4bce-9a5f-62a019bfd836" pod="openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-556c5c9c9c-82qgw\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:49 crc kubenswrapper[5043]: I1125 08:39:49.878215 5043 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:49 crc kubenswrapper[5043]: I1125 08:39:49.878371 5043 status_manager.go:851] "Failed to get status for pod" podUID="cdbab2e0-494c-4845-a500-88b26934f1c7" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-85bdd6cc97-lrkkr\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:49 crc kubenswrapper[5043]: I1125 08:39:49.878526 5043 status_manager.go:851] "Failed to get status for pod" podUID="d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:49 
crc kubenswrapper[5043]: I1125 08:39:49.962082 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:39:49 crc kubenswrapper[5043]: I1125 08:39:49.963713 5043 scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1" Nov 25 08:39:49 crc kubenswrapper[5043]: I1125 08:39:49.964004 5043 status_manager.go:851] "Failed to get status for pod" podUID="8cfc66d8-27da-4bce-9a5f-62a019bfd836" pod="openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-556c5c9c9c-82qgw\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:49 crc kubenswrapper[5043]: E1125 08:39:49.964039 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:39:49 crc kubenswrapper[5043]: I1125 08:39:49.964779 5043 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:49 crc kubenswrapper[5043]: I1125 08:39:49.965080 5043 status_manager.go:851] "Failed to get status for pod" podUID="cdbab2e0-494c-4845-a500-88b26934f1c7" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-85bdd6cc97-lrkkr\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:49 crc kubenswrapper[5043]: I1125 08:39:49.965388 5043 status_manager.go:851] "Failed to get status for pod" podUID="d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:49 crc kubenswrapper[5043]: I1125 08:39:49.977819 5043 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="26bc8613-79e6-42d4-b2ae-fe1c78a750fe" Nov 25 08:39:49 crc kubenswrapper[5043]: I1125 08:39:49.977857 5043 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="26bc8613-79e6-42d4-b2ae-fe1c78a750fe" Nov 25 08:39:49 crc kubenswrapper[5043]: E1125 08:39:49.978328 5043 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:39:49 crc kubenswrapper[5043]: I1125 08:39:49.978997 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:39:50 crc kubenswrapper[5043]: W1125 08:39:50.017864 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-edab1f8ba666c0d1d12a6b55a9f4bfc43bdc3d352f3eed89b3bc630251a2c420 WatchSource:0}: Error finding container edab1f8ba666c0d1d12a6b55a9f4bfc43bdc3d352f3eed89b3bc630251a2c420: Status 404 returned error can't find the container with id edab1f8ba666c0d1d12a6b55a9f4bfc43bdc3d352f3eed89b3bc630251a2c420 Nov 25 08:39:50 crc kubenswrapper[5043]: I1125 08:39:50.891073 5043 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="02479700eedd9c32d2f284c59887e822e1ee6f72a9cc61e14ef13f601550a5eb" exitCode=0 Nov 25 08:39:50 crc kubenswrapper[5043]: I1125 08:39:50.891464 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"02479700eedd9c32d2f284c59887e822e1ee6f72a9cc61e14ef13f601550a5eb"} Nov 25 08:39:50 crc kubenswrapper[5043]: I1125 08:39:50.891696 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"edab1f8ba666c0d1d12a6b55a9f4bfc43bdc3d352f3eed89b3bc630251a2c420"} Nov 25 08:39:50 crc kubenswrapper[5043]: I1125 08:39:50.891951 5043 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="26bc8613-79e6-42d4-b2ae-fe1c78a750fe" Nov 25 08:39:50 crc kubenswrapper[5043]: I1125 08:39:50.891964 5043 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="26bc8613-79e6-42d4-b2ae-fe1c78a750fe" Nov 25 08:39:50 crc kubenswrapper[5043]: I1125 08:39:50.892319 5043 status_manager.go:851] 
"Failed to get status for pod" podUID="8cfc66d8-27da-4bce-9a5f-62a019bfd836" pod="openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-556c5c9c9c-82qgw\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:50 crc kubenswrapper[5043]: E1125 08:39:50.892348 5043 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:39:50 crc kubenswrapper[5043]: I1125 08:39:50.892479 5043 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:50 crc kubenswrapper[5043]: I1125 08:39:50.892651 5043 status_manager.go:851] "Failed to get status for pod" podUID="cdbab2e0-494c-4845-a500-88b26934f1c7" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-85bdd6cc97-lrkkr\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:50 crc kubenswrapper[5043]: I1125 08:39:50.892833 5043 status_manager.go:851] "Failed to get status for pod" podUID="d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:50 crc kubenswrapper[5043]: I1125 08:39:50.894856 
5043 scope.go:117] "RemoveContainer" containerID="a66b9b201d4d0bf19d72cd0aff83a2f14a28a6983ae1fc9474a2bf452a5cfb8b" Nov 25 08:39:50 crc kubenswrapper[5043]: E1125 08:39:50.895064 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=metallb-operator-controller-manager-85bdd6cc97-lrkkr_metallb-system(cdbab2e0-494c-4845-a500-88b26934f1c7)\"" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" podUID="cdbab2e0-494c-4845-a500-88b26934f1c7" Nov 25 08:39:50 crc kubenswrapper[5043]: I1125 08:39:50.895638 5043 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:50 crc kubenswrapper[5043]: I1125 08:39:50.895886 5043 status_manager.go:851] "Failed to get status for pod" podUID="cdbab2e0-494c-4845-a500-88b26934f1c7" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-85bdd6cc97-lrkkr\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:50 crc kubenswrapper[5043]: I1125 08:39:50.896118 5043 status_manager.go:851] "Failed to get status for pod" podUID="d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:50 crc kubenswrapper[5043]: I1125 08:39:50.896383 5043 status_manager.go:851] "Failed to get status for pod" podUID="8cfc66d8-27da-4bce-9a5f-62a019bfd836" 
pod="openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-556c5c9c9c-82qgw\": dial tcp 38.102.83.162:6443: connect: connection refused" Nov 25 08:39:50 crc kubenswrapper[5043]: E1125 08:39:50.972943 5043 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0\": dial tcp 38.102.83.162:6443: connect: connection refused" pod="openstack/ovsdbserver-sb-0" volumeName="ovndbcluster-sb-etc-ovn" Nov 25 08:39:51 crc kubenswrapper[5043]: I1125 08:39:51.928131 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e5c4fb20712eaad0a332b3ed48bc588633e713fabe31cdcf70318bfc53398838"} Nov 25 08:39:51 crc kubenswrapper[5043]: I1125 08:39:51.928830 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e6fd40abe8a36f9aada23ffebcd1025bbcbe22540ef73bb9e03d8f8ceef2d2d4"} Nov 25 08:39:51 crc kubenswrapper[5043]: I1125 08:39:51.928847 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a6ba82d860f3718c0ecc2fffd7209d7c64491c01e7292b5a147688d303de0298"} Nov 25 08:39:51 crc kubenswrapper[5043]: I1125 08:39:51.937615 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 25 
08:39:51 crc kubenswrapper[5043]: I1125 08:39:51.937657 5043 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106" exitCode=1 Nov 25 08:39:51 crc kubenswrapper[5043]: I1125 08:39:51.937688 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106"} Nov 25 08:39:51 crc kubenswrapper[5043]: I1125 08:39:51.938319 5043 scope.go:117] "RemoveContainer" containerID="54d9cc072893abee29a3fb3eb80e8928737d647f8e52da95d7777223ac4a2106" Nov 25 08:39:52 crc kubenswrapper[5043]: I1125 08:39:52.948718 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"acfd0138cb53be8a37f1f9a38091668472004766abd1ed8fb58da283a9c39601"} Nov 25 08:39:52 crc kubenswrapper[5043]: I1125 08:39:52.949229 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5bae8d2fd5129820232bd2ae5ba6e6d94cbd37fc50e947d420cb96d9b72f7972"} Nov 25 08:39:52 crc kubenswrapper[5043]: I1125 08:39:52.949250 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:39:52 crc kubenswrapper[5043]: I1125 08:39:52.949054 5043 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="26bc8613-79e6-42d4-b2ae-fe1c78a750fe" Nov 25 08:39:52 crc kubenswrapper[5043]: I1125 08:39:52.949270 5043 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="26bc8613-79e6-42d4-b2ae-fe1c78a750fe" Nov 25 08:39:52 
crc kubenswrapper[5043]: I1125 08:39:52.951959 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 25 08:39:52 crc kubenswrapper[5043]: I1125 08:39:52.952036 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"da62978b093b3426c86bf853bc7d24a9ab94d9fc4a106f03713e55cad0bae20c"} Nov 25 08:39:54 crc kubenswrapper[5043]: I1125 08:39:54.979966 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:39:54 crc kubenswrapper[5043]: I1125 08:39:54.980265 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:39:54 crc kubenswrapper[5043]: I1125 08:39:54.986234 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:39:56 crc kubenswrapper[5043]: I1125 08:39:56.303075 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-556c5c9c9c-82qgw" Nov 25 08:39:57 crc kubenswrapper[5043]: I1125 08:39:57.477854 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 08:39:57 crc kubenswrapper[5043]: I1125 08:39:57.522143 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 08:39:57 crc kubenswrapper[5043]: I1125 08:39:57.526685 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 08:39:57 crc kubenswrapper[5043]: 
I1125 08:39:57.960296 5043 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.005360 5043 generic.go:334] "Generic (PLEG): container finished" podID="8ea2c827-762f-437d-ad30-a3568d7a4af1" containerID="3e8f789775d000389ff8a62066f32344548d5cac70d5dab18156d3183ce897a3" exitCode=1 Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.005439 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx" event={"ID":"8ea2c827-762f-437d-ad30-a3568d7a4af1","Type":"ContainerDied","Data":"3e8f789775d000389ff8a62066f32344548d5cac70d5dab18156d3183ce897a3"} Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.006126 5043 scope.go:117] "RemoveContainer" containerID="3e8f789775d000389ff8a62066f32344548d5cac70d5dab18156d3183ce897a3" Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.007829 5043 generic.go:334] "Generic (PLEG): container finished" podID="ff874d31-8e5a-4c0b-8f9c-e63513a00483" containerID="c116bccae148120904962a84cc7cadffc6f0e0b759da2d7c946cf5e5381228e7" exitCode=1 Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.007867 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" event={"ID":"ff874d31-8e5a-4c0b-8f9c-e63513a00483","Type":"ContainerDied","Data":"c116bccae148120904962a84cc7cadffc6f0e0b759da2d7c946cf5e5381228e7"} Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.008169 5043 scope.go:117] "RemoveContainer" containerID="c116bccae148120904962a84cc7cadffc6f0e0b759da2d7c946cf5e5381228e7" Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.013883 5043 generic.go:334] "Generic (PLEG): container finished" podID="cdc9a1bf-b6d9-4a36-bcf8-55f87525da45" containerID="e7b731d42024281828b164ebdbdddedefd6c195e2846703f49280395adb6a405" exitCode=1 Nov 25 08:39:58 crc kubenswrapper[5043]: 
I1125 08:39:58.013951 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" event={"ID":"cdc9a1bf-b6d9-4a36-bcf8-55f87525da45","Type":"ContainerDied","Data":"e7b731d42024281828b164ebdbdddedefd6c195e2846703f49280395adb6a405"} Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.014810 5043 scope.go:117] "RemoveContainer" containerID="e7b731d42024281828b164ebdbdddedefd6c195e2846703f49280395adb6a405" Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.016738 5043 generic.go:334] "Generic (PLEG): container finished" podID="d643e47d-246d-4551-a63c-9b9374e684b2" containerID="10181b15c5c44ae9e997127bb8a7167c3baf07b67b7d069085dc1bb6dc879907" exitCode=1 Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.016788 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-mk7wm" event={"ID":"d643e47d-246d-4551-a63c-9b9374e684b2","Type":"ContainerDied","Data":"10181b15c5c44ae9e997127bb8a7167c3baf07b67b7d069085dc1bb6dc879907"} Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.017171 5043 scope.go:117] "RemoveContainer" containerID="10181b15c5c44ae9e997127bb8a7167c3baf07b67b7d069085dc1bb6dc879907" Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.020530 5043 generic.go:334] "Generic (PLEG): container finished" podID="c0627b3a-26de-453c-ab7f-de79dae6c2fc" containerID="6a5bdd21ab37b51fea30586384891e1db8d95f0cf0d26974adae54aba711d02f" exitCode=1 Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.020615 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-h9jgk" event={"ID":"c0627b3a-26de-453c-ab7f-de79dae6c2fc","Type":"ContainerDied","Data":"6a5bdd21ab37b51fea30586384891e1db8d95f0cf0d26974adae54aba711d02f"} Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.021310 5043 scope.go:117] "RemoveContainer" 
containerID="6a5bdd21ab37b51fea30586384891e1db8d95f0cf0d26974adae54aba711d02f" Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.033375 5043 generic.go:334] "Generic (PLEG): container finished" podID="a3d7b5dc-2ced-4ac6-bdad-cd86342616a8" containerID="a4ee369f479f675860452c1383a4868f5569acf2e937320937f82299143a1feb" exitCode=1 Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.033465 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz" event={"ID":"a3d7b5dc-2ced-4ac6-bdad-cd86342616a8","Type":"ContainerDied","Data":"a4ee369f479f675860452c1383a4868f5569acf2e937320937f82299143a1feb"} Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.034175 5043 scope.go:117] "RemoveContainer" containerID="a4ee369f479f675860452c1383a4868f5569acf2e937320937f82299143a1feb" Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.043052 5043 generic.go:334] "Generic (PLEG): container finished" podID="d845a43d-ee06-454f-b68d-cdb949cecffe" containerID="11c68e8a1546e0b3e4e2f97896785e727bd5de02fbb4f3be6d5b49405ed7b720" exitCode=1 Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.043118 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-54c548f75b-mk6ml" event={"ID":"d845a43d-ee06-454f-b68d-cdb949cecffe","Type":"ContainerDied","Data":"11c68e8a1546e0b3e4e2f97896785e727bd5de02fbb4f3be6d5b49405ed7b720"} Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.043947 5043 scope.go:117] "RemoveContainer" containerID="11c68e8a1546e0b3e4e2f97896785e727bd5de02fbb4f3be6d5b49405ed7b720" Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.047161 5043 generic.go:334] "Generic (PLEG): container finished" podID="b7005e58-64d2-470b-a3e7-22b67b7fbfb3" containerID="fd64632e50e5a9b5de31e554a161e37f1ff06f24eeef96edb33fd301f5e626d6" exitCode=1 Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.047233 5043 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" event={"ID":"b7005e58-64d2-470b-a3e7-22b67b7fbfb3","Type":"ContainerDied","Data":"fd64632e50e5a9b5de31e554a161e37f1ff06f24eeef96edb33fd301f5e626d6"} Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.047801 5043 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="26bc8613-79e6-42d4-b2ae-fe1c78a750fe" Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.047833 5043 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="26bc8613-79e6-42d4-b2ae-fe1c78a750fe" Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.048429 5043 scope.go:117] "RemoveContainer" containerID="fd64632e50e5a9b5de31e554a161e37f1ff06f24eeef96edb33fd301f5e626d6" Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.058255 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:39:58 crc kubenswrapper[5043]: I1125 08:39:58.168216 5043 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2bb7b8fa-bbd2-413c-b4cb-7aa604d79222" Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.059463 5043 generic.go:334] "Generic (PLEG): container finished" podID="a3d7b5dc-2ced-4ac6-bdad-cd86342616a8" containerID="5b1834faa1a607d3cf6c7ed86ce7ad541aac032a297a358bfa7c92e796635fd3" exitCode=1 Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.059545 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz" event={"ID":"a3d7b5dc-2ced-4ac6-bdad-cd86342616a8","Type":"ContainerDied","Data":"5b1834faa1a607d3cf6c7ed86ce7ad541aac032a297a358bfa7c92e796635fd3"} Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.060247 5043 scope.go:117] 
"RemoveContainer" containerID="a4ee369f479f675860452c1383a4868f5569acf2e937320937f82299143a1feb" Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.060990 5043 scope.go:117] "RemoveContainer" containerID="5b1834faa1a607d3cf6c7ed86ce7ad541aac032a297a358bfa7c92e796635fd3" Nov 25 08:39:59 crc kubenswrapper[5043]: E1125 08:39:59.061303 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-m9bmz_openstack-operators(a3d7b5dc-2ced-4ac6-bdad-cd86342616a8)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz" podUID="a3d7b5dc-2ced-4ac6-bdad-cd86342616a8" Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.065809 5043 generic.go:334] "Generic (PLEG): container finished" podID="cdc9a1bf-b6d9-4a36-bcf8-55f87525da45" containerID="418d63b3070820e3106cc63d9452c1c39a312d438143ef40226f1d5e0aa87622" exitCode=1 Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.065879 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" event={"ID":"cdc9a1bf-b6d9-4a36-bcf8-55f87525da45","Type":"ContainerDied","Data":"418d63b3070820e3106cc63d9452c1c39a312d438143ef40226f1d5e0aa87622"} Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.067436 5043 scope.go:117] "RemoveContainer" containerID="418d63b3070820e3106cc63d9452c1c39a312d438143ef40226f1d5e0aa87622" Nov 25 08:39:59 crc kubenswrapper[5043]: E1125 08:39:59.067726 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-pnq4k_openstack-operators(cdc9a1bf-b6d9-4a36-bcf8-55f87525da45)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" 
podUID="cdc9a1bf-b6d9-4a36-bcf8-55f87525da45" Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.072892 5043 generic.go:334] "Generic (PLEG): container finished" podID="c924fa47-53fb-4edc-8214-667ba1858ca2" containerID="5501422c8e9790f59c9175ea12b58c36c7e59cea2a4f35de4f35deb58e75ab2b" exitCode=1 Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.072977 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" event={"ID":"c924fa47-53fb-4edc-8214-667ba1858ca2","Type":"ContainerDied","Data":"5501422c8e9790f59c9175ea12b58c36c7e59cea2a4f35de4f35deb58e75ab2b"} Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.073755 5043 scope.go:117] "RemoveContainer" containerID="5501422c8e9790f59c9175ea12b58c36c7e59cea2a4f35de4f35deb58e75ab2b" Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.078440 5043 generic.go:334] "Generic (PLEG): container finished" podID="ff874d31-8e5a-4c0b-8f9c-e63513a00483" containerID="e90f40567bb829ac55e91c9fe5e7c8a0910c4679ffb311eba4f2318a447ffb4b" exitCode=1 Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.078516 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" event={"ID":"ff874d31-8e5a-4c0b-8f9c-e63513a00483","Type":"ContainerDied","Data":"e90f40567bb829ac55e91c9fe5e7c8a0910c4679ffb311eba4f2318a447ffb4b"} Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.078940 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.079124 5043 scope.go:117] "RemoveContainer" containerID="e90f40567bb829ac55e91c9fe5e7c8a0910c4679ffb311eba4f2318a447ffb4b" Nov 25 08:39:59 crc kubenswrapper[5043]: E1125 08:39:59.079417 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-gvwj8_openstack-operators(ff874d31-8e5a-4c0b-8f9c-e63513a00483)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" podUID="ff874d31-8e5a-4c0b-8f9c-e63513a00483" Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.079812 5043 scope.go:117] "RemoveContainer" containerID="a66b9b201d4d0bf19d72cd0aff83a2f14a28a6983ae1fc9474a2bf452a5cfb8b" Nov 25 08:39:59 crc kubenswrapper[5043]: E1125 08:39:59.080106 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=metallb-operator-controller-manager-85bdd6cc97-lrkkr_metallb-system(cdbab2e0-494c-4845-a500-88b26934f1c7)\"" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" podUID="cdbab2e0-494c-4845-a500-88b26934f1c7" Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.084171 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-h9jgk" event={"ID":"c0627b3a-26de-453c-ab7f-de79dae6c2fc","Type":"ContainerStarted","Data":"55994c4ef4d5b5fccea356e19783673563dee650e0acae432cf02af52647e58e"} Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.085663 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-h9jgk" Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.093581 5043 generic.go:334] "Generic (PLEG): container finished" podID="6411a018-19de-4fba-bf72-6dfd5bd2ce29" containerID="fd664c3c8f603b160c088ca6f5c54f8b9055cc009c04f010450910fb0b0f3cf5" exitCode=1 Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.093715 5043 scope.go:117] "RemoveContainer" containerID="e7b731d42024281828b164ebdbdddedefd6c195e2846703f49280395adb6a405" Nov 25 
08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.093761 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fmplr" event={"ID":"6411a018-19de-4fba-bf72-6dfd5bd2ce29","Type":"ContainerDied","Data":"fd664c3c8f603b160c088ca6f5c54f8b9055cc009c04f010450910fb0b0f3cf5"} Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.094523 5043 scope.go:117] "RemoveContainer" containerID="fd664c3c8f603b160c088ca6f5c54f8b9055cc009c04f010450910fb0b0f3cf5" Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.103438 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-54c548f75b-mk6ml" event={"ID":"d845a43d-ee06-454f-b68d-cdb949cecffe","Type":"ContainerStarted","Data":"be873cb809f6d2183542136b5acf11b68d716fb928609adc23b2cf4031227575"} Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.103728 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-54c548f75b-mk6ml" Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.116203 5043 generic.go:334] "Generic (PLEG): container finished" podID="b7005e58-64d2-470b-a3e7-22b67b7fbfb3" containerID="2b8d31e717a217a52b718a38460cbc5c42b88e6a532e010c35a89516f1253629" exitCode=1 Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.116261 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" event={"ID":"b7005e58-64d2-470b-a3e7-22b67b7fbfb3","Type":"ContainerDied","Data":"2b8d31e717a217a52b718a38460cbc5c42b88e6a532e010c35a89516f1253629"} Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.116992 5043 scope.go:117] "RemoveContainer" containerID="2b8d31e717a217a52b718a38460cbc5c42b88e6a532e010c35a89516f1253629" Nov 25 08:39:59 crc kubenswrapper[5043]: E1125 08:39:59.117347 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-sgz96_openstack-operators(b7005e58-64d2-470b-a3e7-22b67b7fbfb3)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" podUID="b7005e58-64d2-470b-a3e7-22b67b7fbfb3" Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.135999 5043 generic.go:334] "Generic (PLEG): container finished" podID="8ea2c827-762f-437d-ad30-a3568d7a4af1" containerID="315598ad87877ef7a91d370f5eefa3dfb634bd2f70eb65f39487636965222df9" exitCode=1 Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.136206 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx" event={"ID":"8ea2c827-762f-437d-ad30-a3568d7a4af1","Type":"ContainerDied","Data":"315598ad87877ef7a91d370f5eefa3dfb634bd2f70eb65f39487636965222df9"} Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.137911 5043 scope.go:117] "RemoveContainer" containerID="315598ad87877ef7a91d370f5eefa3dfb634bd2f70eb65f39487636965222df9" Nov 25 08:39:59 crc kubenswrapper[5043]: E1125 08:39:59.138595 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-gmcsx_openstack-operators(8ea2c827-762f-437d-ad30-a3568d7a4af1)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx" podUID="8ea2c827-762f-437d-ad30-a3568d7a4af1" Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.145200 5043 generic.go:334] "Generic (PLEG): container finished" podID="d643e47d-246d-4551-a63c-9b9374e684b2" containerID="ceee2427bb2d7a063bcdb56c9ca4e152d5ac06d7e8ed4c750b2a0673d719cf67" exitCode=1 Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.145268 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-mk7wm" event={"ID":"d643e47d-246d-4551-a63c-9b9374e684b2","Type":"ContainerDied","Data":"ceee2427bb2d7a063bcdb56c9ca4e152d5ac06d7e8ed4c750b2a0673d719cf67"} Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.145851 5043 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="26bc8613-79e6-42d4-b2ae-fe1c78a750fe" Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.145876 5043 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="26bc8613-79e6-42d4-b2ae-fe1c78a750fe" Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.146332 5043 scope.go:117] "RemoveContainer" containerID="ceee2427bb2d7a063bcdb56c9ca4e152d5ac06d7e8ed4c750b2a0673d719cf67" Nov 25 08:39:59 crc kubenswrapper[5043]: E1125 08:39:59.146621 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=telemetry-operator-controller-manager-567f98c9d-mk7wm_openstack-operators(d643e47d-246d-4551-a63c-9b9374e684b2)\"" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-mk7wm" podUID="d643e47d-246d-4551-a63c-9b9374e684b2" Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.191749 5043 scope.go:117] "RemoveContainer" containerID="c116bccae148120904962a84cc7cadffc6f0e0b759da2d7c946cf5e5381228e7" Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.338301 5043 scope.go:117] "RemoveContainer" containerID="fd64632e50e5a9b5de31e554a161e37f1ff06f24eeef96edb33fd301f5e626d6" Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.411874 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="c937dff6-4203-455c-b07a-ec16e23c746f" containerName="kube-state-metrics" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 25 08:39:59 crc 
kubenswrapper[5043]: I1125 08:39:59.412245 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0" Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.413234 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-state-metrics" containerStatusID={"Type":"cri-o","ID":"8ddb83870375f48375f48b386063ff8de6ab9b6ac9057b75f3108e841157a4b8"} pod="openstack/kube-state-metrics-0" containerMessage="Container kube-state-metrics failed liveness probe, will be restarted" Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.413272 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="c937dff6-4203-455c-b07a-ec16e23c746f" containerName="kube-state-metrics" containerID="cri-o://8ddb83870375f48375f48b386063ff8de6ab9b6ac9057b75f3108e841157a4b8" gracePeriod=30 Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.448080 5043 scope.go:117] "RemoveContainer" containerID="3e8f789775d000389ff8a62066f32344548d5cac70d5dab18156d3183ce897a3" Nov 25 08:39:59 crc kubenswrapper[5043]: I1125 08:39:59.558061 5043 scope.go:117] "RemoveContainer" containerID="10181b15c5c44ae9e997127bb8a7167c3baf07b67b7d069085dc1bb6dc879907" Nov 25 08:40:00 crc kubenswrapper[5043]: I1125 08:40:00.165487 5043 generic.go:334] "Generic (PLEG): container finished" podID="6411a018-19de-4fba-bf72-6dfd5bd2ce29" containerID="2a5a51a743d574b33137d008b87eb81a93ebcc70a73b7deb50c7a924bfb17bc7" exitCode=1 Nov 25 08:40:00 crc kubenswrapper[5043]: I1125 08:40:00.165547 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fmplr" event={"ID":"6411a018-19de-4fba-bf72-6dfd5bd2ce29","Type":"ContainerDied","Data":"2a5a51a743d574b33137d008b87eb81a93ebcc70a73b7deb50c7a924bfb17bc7"} Nov 25 08:40:00 crc kubenswrapper[5043]: I1125 08:40:00.165907 5043 scope.go:117] "RemoveContainer" 
containerID="fd664c3c8f603b160c088ca6f5c54f8b9055cc009c04f010450910fb0b0f3cf5" Nov 25 08:40:00 crc kubenswrapper[5043]: I1125 08:40:00.166569 5043 scope.go:117] "RemoveContainer" containerID="2a5a51a743d574b33137d008b87eb81a93ebcc70a73b7deb50c7a924bfb17bc7" Nov 25 08:40:00 crc kubenswrapper[5043]: E1125 08:40:00.166943 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-fmplr_openstack-operators(6411a018-19de-4fba-bf72-6dfd5bd2ce29)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fmplr" podUID="6411a018-19de-4fba-bf72-6dfd5bd2ce29" Nov 25 08:40:00 crc kubenswrapper[5043]: I1125 08:40:00.169406 5043 generic.go:334] "Generic (PLEG): container finished" podID="c937dff6-4203-455c-b07a-ec16e23c746f" containerID="8ddb83870375f48375f48b386063ff8de6ab9b6ac9057b75f3108e841157a4b8" exitCode=2 Nov 25 08:40:00 crc kubenswrapper[5043]: I1125 08:40:00.169529 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c937dff6-4203-455c-b07a-ec16e23c746f","Type":"ContainerDied","Data":"8ddb83870375f48375f48b386063ff8de6ab9b6ac9057b75f3108e841157a4b8"} Nov 25 08:40:00 crc kubenswrapper[5043]: I1125 08:40:00.169659 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c937dff6-4203-455c-b07a-ec16e23c746f","Type":"ContainerStarted","Data":"f2aad0f27276ce5eaf29889004b5faf317e887f56e64b74efabbdbe3cd282d4e"} Nov 25 08:40:00 crc kubenswrapper[5043]: I1125 08:40:00.169831 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 08:40:00 crc kubenswrapper[5043]: I1125 08:40:00.173140 5043 generic.go:334] "Generic (PLEG): container finished" podID="c924fa47-53fb-4edc-8214-667ba1858ca2" 
containerID="20df5d09f6e301696a267ef62018a6daaa66457438569ca0c78de2b61d698b03" exitCode=1 Nov 25 08:40:00 crc kubenswrapper[5043]: I1125 08:40:00.173207 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" event={"ID":"c924fa47-53fb-4edc-8214-667ba1858ca2","Type":"ContainerDied","Data":"20df5d09f6e301696a267ef62018a6daaa66457438569ca0c78de2b61d698b03"} Nov 25 08:40:00 crc kubenswrapper[5043]: I1125 08:40:00.174250 5043 scope.go:117] "RemoveContainer" containerID="20df5d09f6e301696a267ef62018a6daaa66457438569ca0c78de2b61d698b03" Nov 25 08:40:00 crc kubenswrapper[5043]: E1125 08:40:00.174485 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-xx8rb_openstack-operators(c924fa47-53fb-4edc-8214-667ba1858ca2)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" podUID="c924fa47-53fb-4edc-8214-667ba1858ca2" Nov 25 08:40:00 crc kubenswrapper[5043]: I1125 08:40:00.225554 5043 scope.go:117] "RemoveContainer" containerID="5501422c8e9790f59c9175ea12b58c36c7e59cea2a4f35de4f35deb58e75ab2b" Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.186240 5043 generic.go:334] "Generic (PLEG): container finished" podID="c937dff6-4203-455c-b07a-ec16e23c746f" containerID="f2aad0f27276ce5eaf29889004b5faf317e887f56e64b74efabbdbe3cd282d4e" exitCode=1 Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.186318 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c937dff6-4203-455c-b07a-ec16e23c746f","Type":"ContainerDied","Data":"f2aad0f27276ce5eaf29889004b5faf317e887f56e64b74efabbdbe3cd282d4e"} Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.186958 5043 scope.go:117] "RemoveContainer" 
containerID="f2aad0f27276ce5eaf29889004b5faf317e887f56e64b74efabbdbe3cd282d4e" Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.187892 5043 scope.go:117] "RemoveContainer" containerID="8ddb83870375f48375f48b386063ff8de6ab9b6ac9057b75f3108e841157a4b8" Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.194753 5043 generic.go:334] "Generic (PLEG): container finished" podID="020c7247-0b68-419b-b97f-f7b0ea800142" containerID="f820e8840659a45a5cf9fbf92f9fc7a11ed6d79d311dd0d20d00b98324a10cd9" exitCode=1 Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.194835 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-dxd2x" event={"ID":"020c7247-0b68-419b-b97f-f7b0ea800142","Type":"ContainerDied","Data":"f820e8840659a45a5cf9fbf92f9fc7a11ed6d79d311dd0d20d00b98324a10cd9"} Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.195455 5043 scope.go:117] "RemoveContainer" containerID="f820e8840659a45a5cf9fbf92f9fc7a11ed6d79d311dd0d20d00b98324a10cd9" Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.201165 5043 generic.go:334] "Generic (PLEG): container finished" podID="8a93d5b1-742c-4a37-94ef-a60ffb008520" containerID="7e6df44537f5c843dc7130d710043819e66cfec71a4a1934c0d83886f5196d5d" exitCode=1 Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.201231 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" event={"ID":"8a93d5b1-742c-4a37-94ef-a60ffb008520","Type":"ContainerDied","Data":"7e6df44537f5c843dc7130d710043819e66cfec71a4a1934c0d83886f5196d5d"} Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.203117 5043 scope.go:117] "RemoveContainer" containerID="7e6df44537f5c843dc7130d710043819e66cfec71a4a1934c0d83886f5196d5d" Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.206910 5043 generic.go:334] "Generic (PLEG): container finished" podID="e5c62587-28b4-4a1e-8b73-ee9624ca7163" 
containerID="9c6dbeb38cd3dfb8c343653925e36700b00d27f719b0dd1dbb1d0052b8e48f60" exitCode=1 Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.206993 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" event={"ID":"e5c62587-28b4-4a1e-8b73-ee9624ca7163","Type":"ContainerDied","Data":"9c6dbeb38cd3dfb8c343653925e36700b00d27f719b0dd1dbb1d0052b8e48f60"} Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.207406 5043 scope.go:117] "RemoveContainer" containerID="9c6dbeb38cd3dfb8c343653925e36700b00d27f719b0dd1dbb1d0052b8e48f60" Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.214408 5043 generic.go:334] "Generic (PLEG): container finished" podID="17e00d26-c8ad-4dfd-90df-8705b2cb2bde" containerID="c4c834e2e83d7758640db27c2817e3c118bbd2322c6c3cb503f8309435e4f170" exitCode=1 Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.214625 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-54g5x" event={"ID":"17e00d26-c8ad-4dfd-90df-8705b2cb2bde","Type":"ContainerDied","Data":"c4c834e2e83d7758640db27c2817e3c118bbd2322c6c3cb503f8309435e4f170"} Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.215321 5043 scope.go:117] "RemoveContainer" containerID="c4c834e2e83d7758640db27c2817e3c118bbd2322c6c3cb503f8309435e4f170" Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.227080 5043 generic.go:334] "Generic (PLEG): container finished" podID="e020a857-3730-44f5-8e98-3e59868fbde6" containerID="8e22f79993da45fc8bf69d68104a5e249ca19914d332d0828ecbb0a241aa87ea" exitCode=1 Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.227284 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" event={"ID":"e020a857-3730-44f5-8e98-3e59868fbde6","Type":"ContainerDied","Data":"8e22f79993da45fc8bf69d68104a5e249ca19914d332d0828ecbb0a241aa87ea"} 
Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.228363 5043 scope.go:117] "RemoveContainer" containerID="8e22f79993da45fc8bf69d68104a5e249ca19914d332d0828ecbb0a241aa87ea" Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.230110 5043 generic.go:334] "Generic (PLEG): container finished" podID="92e57762-522f-4a9d-8b03-732ba4dad5c1" containerID="fdcc2555f3276b445e3eb69ddab4960acc10bdfc496bc930b16e6fbbdd02dd14" exitCode=1 Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.230188 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" event={"ID":"92e57762-522f-4a9d-8b03-732ba4dad5c1","Type":"ContainerDied","Data":"fdcc2555f3276b445e3eb69ddab4960acc10bdfc496bc930b16e6fbbdd02dd14"} Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.231153 5043 scope.go:117] "RemoveContainer" containerID="fdcc2555f3276b445e3eb69ddab4960acc10bdfc496bc930b16e6fbbdd02dd14" Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.232474 5043 generic.go:334] "Generic (PLEG): container finished" podID="c20803a7-e9a9-441a-9e61-84673f3c02e8" containerID="dd56f467de78cb2cee958e6ac10eab27df30928770e51472e78d51e88114f549" exitCode=1 Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.232578 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" event={"ID":"c20803a7-e9a9-441a-9e61-84673f3c02e8","Type":"ContainerDied","Data":"dd56f467de78cb2cee958e6ac10eab27df30928770e51472e78d51e88114f549"} Nov 25 08:40:01 crc kubenswrapper[5043]: I1125 08:40:01.233675 5043 scope.go:117] "RemoveContainer" containerID="dd56f467de78cb2cee958e6ac10eab27df30928770e51472e78d51e88114f549" Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.262766 5043 generic.go:334] "Generic (PLEG): container finished" podID="92e57762-522f-4a9d-8b03-732ba4dad5c1" containerID="2b6615190db3121e68c7a7cd0e99fed23a8bf05b101ec606c86a00e77c75c8a9" exitCode=1 Nov 
25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.263461 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" event={"ID":"92e57762-522f-4a9d-8b03-732ba4dad5c1","Type":"ContainerDied","Data":"2b6615190db3121e68c7a7cd0e99fed23a8bf05b101ec606c86a00e77c75c8a9"} Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.263505 5043 scope.go:117] "RemoveContainer" containerID="fdcc2555f3276b445e3eb69ddab4960acc10bdfc496bc930b16e6fbbdd02dd14" Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.264300 5043 scope.go:117] "RemoveContainer" containerID="2b6615190db3121e68c7a7cd0e99fed23a8bf05b101ec606c86a00e77c75c8a9" Nov 25 08:40:02 crc kubenswrapper[5043]: E1125 08:40:02.264644 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-d5cc86f4b-x8q8x_openstack-operators(92e57762-522f-4a9d-8b03-732ba4dad5c1)\"" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" podUID="92e57762-522f-4a9d-8b03-732ba4dad5c1" Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.269194 5043 generic.go:334] "Generic (PLEG): container finished" podID="e020a857-3730-44f5-8e98-3e59868fbde6" containerID="5b4464aa4cfca702fc03e8ece643f773f179357653d294f57ff570a3d1baa3c9" exitCode=1 Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.269279 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" event={"ID":"e020a857-3730-44f5-8e98-3e59868fbde6","Type":"ContainerDied","Data":"5b4464aa4cfca702fc03e8ece643f773f179357653d294f57ff570a3d1baa3c9"} Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.269983 5043 scope.go:117] "RemoveContainer" containerID="5b4464aa4cfca702fc03e8ece643f773f179357653d294f57ff570a3d1baa3c9" Nov 25 08:40:02 crc 
kubenswrapper[5043]: E1125 08:40:02.270281 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-5mp5h_openstack-operators(e020a857-3730-44f5-8e98-3e59868fbde6)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" podUID="e020a857-3730-44f5-8e98-3e59868fbde6" Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.272800 5043 generic.go:334] "Generic (PLEG): container finished" podID="f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4" containerID="dcf42ae8791a95a572ce1da470fe184394a8ec4a5f1857651e57f15991574a1c" exitCode=1 Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.272888 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" event={"ID":"f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4","Type":"ContainerDied","Data":"dcf42ae8791a95a572ce1da470fe184394a8ec4a5f1857651e57f15991574a1c"} Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.273638 5043 scope.go:117] "RemoveContainer" containerID="dcf42ae8791a95a572ce1da470fe184394a8ec4a5f1857651e57f15991574a1c" Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.280868 5043 generic.go:334] "Generic (PLEG): container finished" podID="e5c62587-28b4-4a1e-8b73-ee9624ca7163" containerID="51e626a6f95525624307e401f5fc76fbf6aaae3a2a28eefce2607b2d7b70b1a7" exitCode=1 Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.280964 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" event={"ID":"e5c62587-28b4-4a1e-8b73-ee9624ca7163","Type":"ContainerDied","Data":"51e626a6f95525624307e401f5fc76fbf6aaae3a2a28eefce2607b2d7b70b1a7"} Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.281829 5043 scope.go:117] "RemoveContainer" 
containerID="51e626a6f95525624307e401f5fc76fbf6aaae3a2a28eefce2607b2d7b70b1a7" Nov 25 08:40:02 crc kubenswrapper[5043]: E1125 08:40:02.282139 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=glance-operator-controller-manager-68b95954c9-nnpzz_openstack-operators(e5c62587-28b4-4a1e-8b73-ee9624ca7163)\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" podUID="e5c62587-28b4-4a1e-8b73-ee9624ca7163" Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.285626 5043 generic.go:334] "Generic (PLEG): container finished" podID="d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2" containerID="78a66f3e10bd00d358084f26e00fb47a905588a937766c24964ad53150b7d565" exitCode=1 Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.285709 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq" event={"ID":"d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2","Type":"ContainerDied","Data":"78a66f3e10bd00d358084f26e00fb47a905588a937766c24964ad53150b7d565"} Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.286407 5043 scope.go:117] "RemoveContainer" containerID="78a66f3e10bd00d358084f26e00fb47a905588a937766c24964ad53150b7d565" Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.290350 5043 generic.go:334] "Generic (PLEG): container finished" podID="9c9e4471-0205-478a-8717-be36a19d2a02" containerID="4e788d44528e241b3b9fafc1ecc8bf8d25b10d37ecf73e193d0faac1929de95f" exitCode=1 Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.290407 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tdzr2" event={"ID":"9c9e4471-0205-478a-8717-be36a19d2a02","Type":"ContainerDied","Data":"4e788d44528e241b3b9fafc1ecc8bf8d25b10d37ecf73e193d0faac1929de95f"} Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 
08:40:02.290999 5043 scope.go:117] "RemoveContainer" containerID="4e788d44528e241b3b9fafc1ecc8bf8d25b10d37ecf73e193d0faac1929de95f" Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.293320 5043 generic.go:334] "Generic (PLEG): container finished" podID="869f93a1-d6e7-46ff-a60f-0e997412a2fa" containerID="d21834bce874b3701dc38b9c5066bb896f13eb3ac417fbb76c592d954a213e6e" exitCode=1 Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.293375 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db" event={"ID":"869f93a1-d6e7-46ff-a60f-0e997412a2fa","Type":"ContainerDied","Data":"d21834bce874b3701dc38b9c5066bb896f13eb3ac417fbb76c592d954a213e6e"} Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.293973 5043 scope.go:117] "RemoveContainer" containerID="d21834bce874b3701dc38b9c5066bb896f13eb3ac417fbb76c592d954a213e6e" Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.297792 5043 scope.go:117] "RemoveContainer" containerID="8e22f79993da45fc8bf69d68104a5e249ca19914d332d0828ecbb0a241aa87ea" Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.301419 5043 generic.go:334] "Generic (PLEG): container finished" podID="020c7247-0b68-419b-b97f-f7b0ea800142" containerID="d36884295a82377865704102aa82a67dad68911fbfc9b53889367b5c37660321" exitCode=1 Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.301481 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-dxd2x" event={"ID":"020c7247-0b68-419b-b97f-f7b0ea800142","Type":"ContainerDied","Data":"d36884295a82377865704102aa82a67dad68911fbfc9b53889367b5c37660321"} Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.302079 5043 scope.go:117] "RemoveContainer" containerID="d36884295a82377865704102aa82a67dad68911fbfc9b53889367b5c37660321" Nov 25 08:40:02 crc kubenswrapper[5043]: E1125 08:40:02.302361 5043 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-dxd2x_openstack-operators(020c7247-0b68-419b-b97f-f7b0ea800142)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-dxd2x" podUID="020c7247-0b68-419b-b97f-f7b0ea800142" Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.310425 5043 generic.go:334] "Generic (PLEG): container finished" podID="8a93d5b1-742c-4a37-94ef-a60ffb008520" containerID="2b1094a6d63348ca08ff2872cb76f6b03b0723cd906b7b376062d41f51dc2e38" exitCode=1 Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.310497 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" event={"ID":"8a93d5b1-742c-4a37-94ef-a60ffb008520","Type":"ContainerDied","Data":"2b1094a6d63348ca08ff2872cb76f6b03b0723cd906b7b376062d41f51dc2e38"} Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.315420 5043 scope.go:117] "RemoveContainer" containerID="2b1094a6d63348ca08ff2872cb76f6b03b0723cd906b7b376062d41f51dc2e38" Nov 25 08:40:02 crc kubenswrapper[5043]: E1125 08:40:02.315992 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-l77gb_openstack-operators(8a93d5b1-742c-4a37-94ef-a60ffb008520)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" podUID="8a93d5b1-742c-4a37-94ef-a60ffb008520" Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.327125 5043 generic.go:334] "Generic (PLEG): container finished" podID="17e00d26-c8ad-4dfd-90df-8705b2cb2bde" containerID="ff672824b0306addcc741e64411cd7b2199ee4541c1862397b479d2337a4a7eb" exitCode=1 Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.327240 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/watcher-operator-controller-manager-864885998-54g5x" event={"ID":"17e00d26-c8ad-4dfd-90df-8705b2cb2bde","Type":"ContainerDied","Data":"ff672824b0306addcc741e64411cd7b2199ee4541c1862397b479d2337a4a7eb"} Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.327995 5043 scope.go:117] "RemoveContainer" containerID="ff672824b0306addcc741e64411cd7b2199ee4541c1862397b479d2337a4a7eb" Nov 25 08:40:02 crc kubenswrapper[5043]: E1125 08:40:02.328324 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-54g5x_openstack-operators(17e00d26-c8ad-4dfd-90df-8705b2cb2bde)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-54g5x" podUID="17e00d26-c8ad-4dfd-90df-8705b2cb2bde" Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.335575 5043 generic.go:334] "Generic (PLEG): container finished" podID="c937dff6-4203-455c-b07a-ec16e23c746f" containerID="83d85ea001b26054663afb056e081c7222bf59af8cb5e346274ad3959f7fb60c" exitCode=1 Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.335764 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c937dff6-4203-455c-b07a-ec16e23c746f","Type":"ContainerDied","Data":"83d85ea001b26054663afb056e081c7222bf59af8cb5e346274ad3959f7fb60c"} Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.336424 5043 scope.go:117] "RemoveContainer" containerID="83d85ea001b26054663afb056e081c7222bf59af8cb5e346274ad3959f7fb60c" Nov 25 08:40:02 crc kubenswrapper[5043]: E1125 08:40:02.336733 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(c937dff6-4203-455c-b07a-ec16e23c746f)\"" 
pod="openstack/kube-state-metrics-0" podUID="c937dff6-4203-455c-b07a-ec16e23c746f" Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.342527 5043 generic.go:334] "Generic (PLEG): container finished" podID="d9a368e6-f4bb-4896-9a2d-f7ceed65e933" containerID="09a9187c3cd96f5127ba1f2c8ddac211c9e9a4971656966882053dc5114494a7" exitCode=1 Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.342708 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" event={"ID":"d9a368e6-f4bb-4896-9a2d-f7ceed65e933","Type":"ContainerDied","Data":"09a9187c3cd96f5127ba1f2c8ddac211c9e9a4971656966882053dc5114494a7"} Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.343519 5043 scope.go:117] "RemoveContainer" containerID="09a9187c3cd96f5127ba1f2c8ddac211c9e9a4971656966882053dc5114494a7" Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.346665 5043 generic.go:334] "Generic (PLEG): container finished" podID="bb800a2f-1864-47be-931b-7b99f7c7354f" containerID="643f940ad9b1b4bf6af5490ee68c3afd5242e986566d067c28b39d69b190fd3c" exitCode=1 Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.346757 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l5vz2" event={"ID":"bb800a2f-1864-47be-931b-7b99f7c7354f","Type":"ContainerDied","Data":"643f940ad9b1b4bf6af5490ee68c3afd5242e986566d067c28b39d69b190fd3c"} Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.347488 5043 scope.go:117] "RemoveContainer" containerID="643f940ad9b1b4bf6af5490ee68c3afd5242e986566d067c28b39d69b190fd3c" Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.366353 5043 generic.go:334] "Generic (PLEG): container finished" podID="c20803a7-e9a9-441a-9e61-84673f3c02e8" containerID="219b30f2d353e7768b4f2759650e5612797d882748b45af8eee5d449011683a6" exitCode=1 Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.366403 5043 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" event={"ID":"c20803a7-e9a9-441a-9e61-84673f3c02e8","Type":"ContainerDied","Data":"219b30f2d353e7768b4f2759650e5612797d882748b45af8eee5d449011683a6"} Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.367236 5043 scope.go:117] "RemoveContainer" containerID="219b30f2d353e7768b4f2759650e5612797d882748b45af8eee5d449011683a6" Nov 25 08:40:02 crc kubenswrapper[5043]: E1125 08:40:02.367534 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-wmkmw_openstack-operators(c20803a7-e9a9-441a-9e61-84673f3c02e8)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" podUID="c20803a7-e9a9-441a-9e61-84673f3c02e8" Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.554396 5043 scope.go:117] "RemoveContainer" containerID="9c6dbeb38cd3dfb8c343653925e36700b00d27f719b0dd1dbb1d0052b8e48f60" Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.587093 5043 scope.go:117] "RemoveContainer" containerID="f820e8840659a45a5cf9fbf92f9fc7a11ed6d79d311dd0d20d00b98324a10cd9" Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.650508 5043 scope.go:117] "RemoveContainer" containerID="7e6df44537f5c843dc7130d710043819e66cfec71a4a1934c0d83886f5196d5d" Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.685876 5043 scope.go:117] "RemoveContainer" containerID="c4c834e2e83d7758640db27c2817e3c118bbd2322c6c3cb503f8309435e4f170" Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.741963 5043 scope.go:117] "RemoveContainer" containerID="f2aad0f27276ce5eaf29889004b5faf317e887f56e64b74efabbdbe3cd282d4e" Nov 25 08:40:02 crc kubenswrapper[5043]: I1125 08:40:02.800377 5043 scope.go:117] "RemoveContainer" containerID="dd56f467de78cb2cee958e6ac10eab27df30928770e51472e78d51e88114f549" Nov 25 
08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.392300 5043 scope.go:117] "RemoveContainer" containerID="83d85ea001b26054663afb056e081c7222bf59af8cb5e346274ad3959f7fb60c" Nov 25 08:40:03 crc kubenswrapper[5043]: E1125 08:40:03.393211 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(c937dff6-4203-455c-b07a-ec16e23c746f)\"" pod="openstack/kube-state-metrics-0" podUID="c937dff6-4203-455c-b07a-ec16e23c746f" Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.397631 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq" event={"ID":"d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2","Type":"ContainerDied","Data":"b623b8f8e70abb46f80a58bb0910332852f3dcd2ed61a065369be4b1d02fd550"} Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.397724 5043 scope.go:117] "RemoveContainer" containerID="78a66f3e10bd00d358084f26e00fb47a905588a937766c24964ad53150b7d565" Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.397491 5043 generic.go:334] "Generic (PLEG): container finished" podID="d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2" containerID="b623b8f8e70abb46f80a58bb0910332852f3dcd2ed61a065369be4b1d02fd550" exitCode=1 Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.398762 5043 scope.go:117] "RemoveContainer" containerID="b623b8f8e70abb46f80a58bb0910332852f3dcd2ed61a065369be4b1d02fd550" Nov 25 08:40:03 crc kubenswrapper[5043]: E1125 08:40:03.399246 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-d5ffq_openstack-operators(d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq" 
podUID="d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2" Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.401860 5043 generic.go:334] "Generic (PLEG): container finished" podID="9c9e4471-0205-478a-8717-be36a19d2a02" containerID="5be4d2c65ed07d83395c8339e2244c63ee1c304a151f28753806b4ca4254ac16" exitCode=1 Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.401948 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tdzr2" event={"ID":"9c9e4471-0205-478a-8717-be36a19d2a02","Type":"ContainerDied","Data":"5be4d2c65ed07d83395c8339e2244c63ee1c304a151f28753806b4ca4254ac16"} Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.403264 5043 scope.go:117] "RemoveContainer" containerID="5be4d2c65ed07d83395c8339e2244c63ee1c304a151f28753806b4ca4254ac16" Nov 25 08:40:03 crc kubenswrapper[5043]: E1125 08:40:03.403773 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-tdzr2_openstack-operators(9c9e4471-0205-478a-8717-be36a19d2a02)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tdzr2" podUID="9c9e4471-0205-478a-8717-be36a19d2a02" Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.408825 5043 generic.go:334] "Generic (PLEG): container finished" podID="869f93a1-d6e7-46ff-a60f-0e997412a2fa" containerID="e5c7582397dfb9c316ef33c3d3ed2942538bce06505f079fae351c74b56cc658" exitCode=1 Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.408893 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db" event={"ID":"869f93a1-d6e7-46ff-a60f-0e997412a2fa","Type":"ContainerDied","Data":"e5c7582397dfb9c316ef33c3d3ed2942538bce06505f079fae351c74b56cc658"} Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.410371 5043 scope.go:117] 
"RemoveContainer" containerID="e5c7582397dfb9c316ef33c3d3ed2942538bce06505f079fae351c74b56cc658" Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.411494 5043 generic.go:334] "Generic (PLEG): container finished" podID="f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4" containerID="7031ab5226215a70b04689e908f34d503377ffe0d9a02bde65e3a43a0b1197fb" exitCode=1 Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.411567 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" event={"ID":"f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4","Type":"ContainerDied","Data":"7031ab5226215a70b04689e908f34d503377ffe0d9a02bde65e3a43a0b1197fb"} Nov 25 08:40:03 crc kubenswrapper[5043]: E1125 08:40:03.411910 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-6w2db_openstack-operators(869f93a1-d6e7-46ff-a60f-0e997412a2fa)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db" podUID="869f93a1-d6e7-46ff-a60f-0e997412a2fa" Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.412362 5043 scope.go:117] "RemoveContainer" containerID="7031ab5226215a70b04689e908f34d503377ffe0d9a02bde65e3a43a0b1197fb" Nov 25 08:40:03 crc kubenswrapper[5043]: E1125 08:40:03.412696 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=openstack-operator-controller-manager-7cd5954d9-5zklz_openstack-operators(f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4)\"" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" podUID="f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4" Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.424882 5043 generic.go:334] "Generic (PLEG): container finished" 
podID="d9a368e6-f4bb-4896-9a2d-f7ceed65e933" containerID="088d46c362950d22f0bb9a8bf2ec55618b2fa18486bccd11bc3d6a186cfb0d39" exitCode=1 Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.424955 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" event={"ID":"d9a368e6-f4bb-4896-9a2d-f7ceed65e933","Type":"ContainerDied","Data":"088d46c362950d22f0bb9a8bf2ec55618b2fa18486bccd11bc3d6a186cfb0d39"} Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.425678 5043 scope.go:117] "RemoveContainer" containerID="088d46c362950d22f0bb9a8bf2ec55618b2fa18486bccd11bc3d6a186cfb0d39" Nov 25 08:40:03 crc kubenswrapper[5043]: E1125 08:40:03.425981 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-dtcj4_openstack-operators(d9a368e6-f4bb-4896-9a2d-f7ceed65e933)\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" podUID="d9a368e6-f4bb-4896-9a2d-f7ceed65e933" Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.438748 5043 generic.go:334] "Generic (PLEG): container finished" podID="bb800a2f-1864-47be-931b-7b99f7c7354f" containerID="9c2bf3be8a679d4e661c93f2c49960b5e833ad34e77447afe5a38d7f06c93626" exitCode=1 Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.438810 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l5vz2" event={"ID":"bb800a2f-1864-47be-931b-7b99f7c7354f","Type":"ContainerDied","Data":"9c2bf3be8a679d4e661c93f2c49960b5e833ad34e77447afe5a38d7f06c93626"} Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.439567 5043 scope.go:117] "RemoveContainer" containerID="9c2bf3be8a679d4e661c93f2c49960b5e833ad34e77447afe5a38d7f06c93626" Nov 25 08:40:03 crc kubenswrapper[5043]: E1125 08:40:03.439829 5043 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=neutron-operator-controller-manager-7c57c8bbc4-l5vz2_openstack-operators(bb800a2f-1864-47be-931b-7b99f7c7354f)\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l5vz2" podUID="bb800a2f-1864-47be-931b-7b99f7c7354f" Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.463446 5043 scope.go:117] "RemoveContainer" containerID="4e788d44528e241b3b9fafc1ecc8bf8d25b10d37ecf73e193d0faac1929de95f" Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.529398 5043 scope.go:117] "RemoveContainer" containerID="d21834bce874b3701dc38b9c5066bb896f13eb3ac417fbb76c592d954a213e6e" Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.585637 5043 scope.go:117] "RemoveContainer" containerID="dcf42ae8791a95a572ce1da470fe184394a8ec4a5f1857651e57f15991574a1c" Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.651655 5043 scope.go:117] "RemoveContainer" containerID="09a9187c3cd96f5127ba1f2c8ddac211c9e9a4971656966882053dc5114494a7" Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.687693 5043 scope.go:117] "RemoveContainer" containerID="643f940ad9b1b4bf6af5490ee68c3afd5242e986566d067c28b39d69b190fd3c" Nov 25 08:40:03 crc kubenswrapper[5043]: I1125 08:40:03.962749 5043 scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1" Nov 25 08:40:03 crc kubenswrapper[5043]: E1125 08:40:03.963118 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 
08:40:05 crc kubenswrapper[5043]: I1125 08:40:05.574172 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" Nov 25 08:40:05 crc kubenswrapper[5043]: I1125 08:40:05.575541 5043 scope.go:117] "RemoveContainer" containerID="088d46c362950d22f0bb9a8bf2ec55618b2fa18486bccd11bc3d6a186cfb0d39" Nov 25 08:40:05 crc kubenswrapper[5043]: E1125 08:40:05.575904 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-dtcj4_openstack-operators(d9a368e6-f4bb-4896-9a2d-f7ceed65e933)\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" podUID="d9a368e6-f4bb-4896-9a2d-f7ceed65e933" Nov 25 08:40:05 crc kubenswrapper[5043]: I1125 08:40:05.590856 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" Nov 25 08:40:05 crc kubenswrapper[5043]: I1125 08:40:05.591882 5043 scope.go:117] "RemoveContainer" containerID="418d63b3070820e3106cc63d9452c1c39a312d438143ef40226f1d5e0aa87622" Nov 25 08:40:05 crc kubenswrapper[5043]: E1125 08:40:05.592276 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-pnq4k_openstack-operators(cdc9a1bf-b6d9-4a36-bcf8-55f87525da45)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" podUID="cdc9a1bf-b6d9-4a36-bcf8-55f87525da45" Nov 25 08:40:05 crc kubenswrapper[5043]: I1125 08:40:05.633206 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" Nov 25 08:40:05 crc kubenswrapper[5043]: 
I1125 08:40:05.633984 5043 scope.go:117] "RemoveContainer" containerID="5b4464aa4cfca702fc03e8ece643f773f179357653d294f57ff570a3d1baa3c9" Nov 25 08:40:05 crc kubenswrapper[5043]: E1125 08:40:05.634225 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-5mp5h_openstack-operators(e020a857-3730-44f5-8e98-3e59868fbde6)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" podUID="e020a857-3730-44f5-8e98-3e59868fbde6" Nov 25 08:40:05 crc kubenswrapper[5043]: I1125 08:40:05.654925 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" Nov 25 08:40:05 crc kubenswrapper[5043]: I1125 08:40:05.655687 5043 scope.go:117] "RemoveContainer" containerID="51e626a6f95525624307e401f5fc76fbf6aaae3a2a28eefce2607b2d7b70b1a7" Nov 25 08:40:05 crc kubenswrapper[5043]: E1125 08:40:05.655954 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=glance-operator-controller-manager-68b95954c9-nnpzz_openstack-operators(e5c62587-28b4-4a1e-8b73-ee9624ca7163)\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" podUID="e5c62587-28b4-4a1e-8b73-ee9624ca7163" Nov 25 08:40:05 crc kubenswrapper[5043]: I1125 08:40:05.687711 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" Nov 25 08:40:05 crc kubenswrapper[5043]: I1125 08:40:05.688506 5043 scope.go:117] "RemoveContainer" containerID="2b1094a6d63348ca08ff2872cb76f6b03b0723cd906b7b376062d41f51dc2e38" Nov 25 08:40:05 crc kubenswrapper[5043]: E1125 08:40:05.688789 5043 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-l77gb_openstack-operators(8a93d5b1-742c-4a37-94ef-a60ffb008520)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" podUID="8a93d5b1-742c-4a37-94ef-a60ffb008520" Nov 25 08:40:05 crc kubenswrapper[5043]: I1125 08:40:05.709849 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" Nov 25 08:40:05 crc kubenswrapper[5043]: I1125 08:40:05.711067 5043 scope.go:117] "RemoveContainer" containerID="219b30f2d353e7768b4f2759650e5612797d882748b45af8eee5d449011683a6" Nov 25 08:40:05 crc kubenswrapper[5043]: E1125 08:40:05.711539 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-wmkmw_openstack-operators(c20803a7-e9a9-441a-9e61-84673f3c02e8)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" podUID="c20803a7-e9a9-441a-9e61-84673f3c02e8" Nov 25 08:40:05 crc kubenswrapper[5043]: I1125 08:40:05.774054 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" Nov 25 08:40:05 crc kubenswrapper[5043]: I1125 08:40:05.774843 5043 scope.go:117] "RemoveContainer" containerID="2b8d31e717a217a52b718a38460cbc5c42b88e6a532e010c35a89516f1253629" Nov 25 08:40:05 crc kubenswrapper[5043]: E1125 08:40:05.775086 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-sgz96_openstack-operators(b7005e58-64d2-470b-a3e7-22b67b7fbfb3)\"" 
pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" podUID="b7005e58-64d2-470b-a3e7-22b67b7fbfb3" Nov 25 08:40:05 crc kubenswrapper[5043]: I1125 08:40:05.800186 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" Nov 25 08:40:05 crc kubenswrapper[5043]: I1125 08:40:05.801003 5043 scope.go:117] "RemoveContainer" containerID="e90f40567bb829ac55e91c9fe5e7c8a0910c4679ffb311eba4f2318a447ffb4b" Nov 25 08:40:05 crc kubenswrapper[5043]: E1125 08:40:05.801314 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-gvwj8_openstack-operators(ff874d31-8e5a-4c0b-8f9c-e63513a00483)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" podUID="ff874d31-8e5a-4c0b-8f9c-e63513a00483" Nov 25 08:40:05 crc kubenswrapper[5043]: I1125 08:40:05.808926 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" Nov 25 08:40:05 crc kubenswrapper[5043]: I1125 08:40:05.809872 5043 scope.go:117] "RemoveContainer" containerID="20df5d09f6e301696a267ef62018a6daaa66457438569ca0c78de2b61d698b03" Nov 25 08:40:05 crc kubenswrapper[5043]: E1125 08:40:05.810173 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-xx8rb_openstack-operators(c924fa47-53fb-4edc-8214-667ba1858ca2)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" podUID="c924fa47-53fb-4edc-8214-667ba1858ca2" Nov 25 08:40:05 crc kubenswrapper[5043]: I1125 08:40:05.894583 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tdzr2" Nov 25 08:40:05 crc kubenswrapper[5043]: I1125 08:40:05.895304 5043 scope.go:117] "RemoveContainer" containerID="5be4d2c65ed07d83395c8339e2244c63ee1c304a151f28753806b4ca4254ac16" Nov 25 08:40:05 crc kubenswrapper[5043]: E1125 08:40:05.895561 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-tdzr2_openstack-operators(9c9e4471-0205-478a-8717-be36a19d2a02)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tdzr2" podUID="9c9e4471-0205-478a-8717-be36a19d2a02" Nov 25 08:40:05 crc kubenswrapper[5043]: I1125 08:40:05.986446 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l5vz2" Nov 25 08:40:05 crc kubenswrapper[5043]: I1125 08:40:05.987179 5043 scope.go:117] "RemoveContainer" containerID="9c2bf3be8a679d4e661c93f2c49960b5e833ad34e77447afe5a38d7f06c93626" Nov 25 08:40:05 crc kubenswrapper[5043]: E1125 08:40:05.987433 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=neutron-operator-controller-manager-7c57c8bbc4-l5vz2_openstack-operators(bb800a2f-1864-47be-931b-7b99f7c7354f)\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l5vz2" podUID="bb800a2f-1864-47be-931b-7b99f7c7354f" Nov 25 08:40:06 crc kubenswrapper[5043]: I1125 08:40:06.004595 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz" Nov 25 08:40:06 crc kubenswrapper[5043]: I1125 08:40:06.005986 5043 scope.go:117] "RemoveContainer" 
containerID="5b1834faa1a607d3cf6c7ed86ce7ad541aac032a297a358bfa7c92e796635fd3" Nov 25 08:40:06 crc kubenswrapper[5043]: E1125 08:40:06.006467 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-m9bmz_openstack-operators(a3d7b5dc-2ced-4ac6-bdad-cd86342616a8)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz" podUID="a3d7b5dc-2ced-4ac6-bdad-cd86342616a8" Nov 25 08:40:06 crc kubenswrapper[5043]: I1125 08:40:06.046054 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-dxd2x" Nov 25 08:40:06 crc kubenswrapper[5043]: I1125 08:40:06.046735 5043 scope.go:117] "RemoveContainer" containerID="d36884295a82377865704102aa82a67dad68911fbfc9b53889367b5c37660321" Nov 25 08:40:06 crc kubenswrapper[5043]: E1125 08:40:06.046969 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-dxd2x_openstack-operators(020c7247-0b68-419b-b97f-f7b0ea800142)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-dxd2x" podUID="020c7247-0b68-419b-b97f-f7b0ea800142" Nov 25 08:40:06 crc kubenswrapper[5043]: I1125 08:40:06.055331 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db" Nov 25 08:40:06 crc kubenswrapper[5043]: I1125 08:40:06.056057 5043 scope.go:117] "RemoveContainer" containerID="e5c7582397dfb9c316ef33c3d3ed2942538bce06505f079fae351c74b56cc658" Nov 25 08:40:06 crc kubenswrapper[5043]: E1125 08:40:06.056450 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-6w2db_openstack-operators(869f93a1-d6e7-46ff-a60f-0e997412a2fa)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db" podUID="869f93a1-d6e7-46ff-a60f-0e997412a2fa" Nov 25 08:40:06 crc kubenswrapper[5043]: I1125 08:40:06.097241 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx" Nov 25 08:40:06 crc kubenswrapper[5043]: I1125 08:40:06.098407 5043 scope.go:117] "RemoveContainer" containerID="315598ad87877ef7a91d370f5eefa3dfb634bd2f70eb65f39487636965222df9" Nov 25 08:40:06 crc kubenswrapper[5043]: E1125 08:40:06.098791 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-gmcsx_openstack-operators(8ea2c827-762f-437d-ad30-a3568d7a4af1)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx" podUID="8ea2c827-762f-437d-ad30-a3568d7a4af1" Nov 25 08:40:06 crc kubenswrapper[5043]: I1125 08:40:06.123629 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-mk7wm" Nov 25 08:40:06 crc kubenswrapper[5043]: I1125 08:40:06.124456 5043 scope.go:117] "RemoveContainer" containerID="ceee2427bb2d7a063bcdb56c9ca4e152d5ac06d7e8ed4c750b2a0673d719cf67" Nov 25 08:40:06 crc kubenswrapper[5043]: E1125 08:40:06.124781 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=telemetry-operator-controller-manager-567f98c9d-mk7wm_openstack-operators(d643e47d-246d-4551-a63c-9b9374e684b2)\"" 
pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-mk7wm" podUID="d643e47d-246d-4551-a63c-9b9374e684b2" Nov 25 08:40:06 crc kubenswrapper[5043]: I1125 08:40:06.172894 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-54g5x" Nov 25 08:40:06 crc kubenswrapper[5043]: I1125 08:40:06.173739 5043 scope.go:117] "RemoveContainer" containerID="ff672824b0306addcc741e64411cd7b2199ee4541c1862397b479d2337a4a7eb" Nov 25 08:40:06 crc kubenswrapper[5043]: E1125 08:40:06.174014 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-54g5x_openstack-operators(17e00d26-c8ad-4dfd-90df-8705b2cb2bde)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-54g5x" podUID="17e00d26-c8ad-4dfd-90df-8705b2cb2bde" Nov 25 08:40:06 crc kubenswrapper[5043]: I1125 08:40:06.194401 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq" Nov 25 08:40:06 crc kubenswrapper[5043]: I1125 08:40:06.195258 5043 scope.go:117] "RemoveContainer" containerID="b623b8f8e70abb46f80a58bb0910332852f3dcd2ed61a065369be4b1d02fd550" Nov 25 08:40:06 crc kubenswrapper[5043]: E1125 08:40:06.195598 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-d5ffq_openstack-operators(d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq" podUID="d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2" Nov 25 08:40:06 crc kubenswrapper[5043]: I1125 08:40:06.386966 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" Nov 25 08:40:06 crc kubenswrapper[5043]: I1125 08:40:06.388230 5043 scope.go:117] "RemoveContainer" containerID="2b6615190db3121e68c7a7cd0e99fed23a8bf05b101ec606c86a00e77c75c8a9" Nov 25 08:40:06 crc kubenswrapper[5043]: E1125 08:40:06.388748 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-d5cc86f4b-x8q8x_openstack-operators(92e57762-522f-4a9d-8b03-732ba4dad5c1)\"" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" podUID="92e57762-522f-4a9d-8b03-732ba4dad5c1" Nov 25 08:40:07 crc kubenswrapper[5043]: I1125 08:40:07.064817 5043 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2bb7b8fa-bbd2-413c-b4cb-7aa604d79222" Nov 25 08:40:07 crc kubenswrapper[5043]: I1125 08:40:07.485524 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 08:40:07 crc kubenswrapper[5043]: I1125 08:40:07.797997 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 25 08:40:07 crc kubenswrapper[5043]: I1125 08:40:07.810327 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 25 08:40:08 crc kubenswrapper[5043]: I1125 08:40:08.104086 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 25 08:40:08 crc kubenswrapper[5043]: I1125 08:40:08.137023 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Nov 25 08:40:08 crc kubenswrapper[5043]: I1125 08:40:08.278676 5043 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 25 08:40:08 crc kubenswrapper[5043]: I1125 08:40:08.832671 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 25 08:40:08 crc kubenswrapper[5043]: I1125 08:40:08.857962 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 25 08:40:09 crc kubenswrapper[5043]: I1125 08:40:09.096250 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-dp6xt" Nov 25 08:40:09 crc kubenswrapper[5043]: I1125 08:40:09.104044 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 25 08:40:09 crc kubenswrapper[5043]: I1125 08:40:09.118288 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 25 08:40:09 crc kubenswrapper[5043]: I1125 08:40:09.175304 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 25 08:40:09 crc kubenswrapper[5043]: I1125 08:40:09.353708 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 25 08:40:09 crc kubenswrapper[5043]: I1125 08:40:09.375865 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-fxpzj" Nov 25 08:40:09 crc kubenswrapper[5043]: I1125 08:40:09.379755 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0" Nov 25 08:40:09 crc kubenswrapper[5043]: I1125 08:40:09.381017 5043 scope.go:117] "RemoveContainer" containerID="83d85ea001b26054663afb056e081c7222bf59af8cb5e346274ad3959f7fb60c" Nov 25 08:40:09 crc kubenswrapper[5043]: E1125 
08:40:09.381412 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(c937dff6-4203-455c-b07a-ec16e23c746f)\"" pod="openstack/kube-state-metrics-0" podUID="c937dff6-4203-455c-b07a-ec16e23c746f" Nov 25 08:40:09 crc kubenswrapper[5043]: I1125 08:40:09.390962 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 08:40:09 crc kubenswrapper[5043]: I1125 08:40:09.394907 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 08:40:09 crc kubenswrapper[5043]: I1125 08:40:09.399110 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 25 08:40:09 crc kubenswrapper[5043]: I1125 08:40:09.517482 5043 scope.go:117] "RemoveContainer" containerID="83d85ea001b26054663afb056e081c7222bf59af8cb5e346274ad3959f7fb60c" Nov 25 08:40:09 crc kubenswrapper[5043]: E1125 08:40:09.517798 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(c937dff6-4203-455c-b07a-ec16e23c746f)\"" pod="openstack/kube-state-metrics-0" podUID="c937dff6-4203-455c-b07a-ec16e23c746f" Nov 25 08:40:09 crc kubenswrapper[5043]: I1125 08:40:09.588521 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 25 08:40:09 crc kubenswrapper[5043]: I1125 08:40:09.633733 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 25 08:40:09 crc kubenswrapper[5043]: I1125 08:40:09.639418 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-h9jgk" Nov 25 08:40:09 crc kubenswrapper[5043]: I1125 08:40:09.736088 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 25 08:40:09 crc kubenswrapper[5043]: I1125 08:40:09.737200 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 25 08:40:09 crc kubenswrapper[5043]: I1125 08:40:09.825037 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 25 08:40:09 crc kubenswrapper[5043]: I1125 08:40:09.896502 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 25 08:40:09 crc kubenswrapper[5043]: I1125 08:40:09.913436 5043 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-7wqcg" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.007184 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.089878 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.091186 5043 scope.go:117] "RemoveContainer" containerID="7031ab5226215a70b04689e908f34d503377ffe0d9a02bde65e3a43a0b1197fb" Nov 25 08:40:10 crc kubenswrapper[5043]: E1125 08:40:10.091731 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager 
pod=openstack-operator-controller-manager-7cd5954d9-5zklz_openstack-operators(f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4)\"" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" podUID="f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.117330 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.131905 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.142235 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.164309 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.208096 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.219569 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.287834 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.304096 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.405330 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.461254 5043 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.477780 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.510865 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.578924 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.684942 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.739857 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.740450 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.743172 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-kfv9l" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.758172 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.759141 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.802772 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 25 08:40:10 crc kubenswrapper[5043]: 
I1125 08:40:10.960463 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rphjt" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.968061 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 25 08:40:10 crc kubenswrapper[5043]: I1125 08:40:10.974303 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-54c548f75b-mk6ml" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.016087 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.119954 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.124294 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-x6rtb" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.129245 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.209089 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.234132 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-67cq4" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.270962 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.297260 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 25 08:40:11 crc 
kubenswrapper[5043]: I1125 08:40:11.363230 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.401270 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.440932 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.523230 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.639035 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.673268 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-bv84z" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.675046 5043 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.675080 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.697723 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.712008 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.723274 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 25 08:40:11 crc kubenswrapper[5043]: 
I1125 08:40:11.796789 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.813619 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.845560 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.871980 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.913853 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.916089 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-ckvz7" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.944150 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.981506 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.993191 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-j47rd" Nov 25 08:40:11 crc kubenswrapper[5043]: I1125 08:40:11.993654 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.030078 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 25 
08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.030223 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.034873 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.111372 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.124703 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-mgfgz" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.159562 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.166558 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.196020 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.243540 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.267400 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.284329 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.288332 5043 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-lwhxq" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.353865 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ctkqr" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.385053 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.387580 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-svd2g" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.413284 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.435733 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.517438 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.520878 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.598695 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.612200 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.672949 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.698063 5043 
reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.729459 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.732052 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.786492 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.825940 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 25 08:40:12 crc kubenswrapper[5043]: I1125 08:40:12.963731 5043 scope.go:117] "RemoveContainer" containerID="2a5a51a743d574b33137d008b87eb81a93ebcc70a73b7deb50c7a924bfb17bc7" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.033242 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.145456 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.153980 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.173747 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.196168 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 
08:40:13.216116 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.238181 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xt29v" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.247890 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-ngscn" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.268692 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.277398 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.284035 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.428048 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.430304 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.434710 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.480533 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.489986 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 
08:40:13.552629 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fmplr" event={"ID":"6411a018-19de-4fba-bf72-6dfd5bd2ce29","Type":"ContainerStarted","Data":"7235e34acf016c6d683c5a5806c60e1b17f761fa72d6445d8cd9beae51b0a556"} Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.585356 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.647160 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.713878 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.714724 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.721803 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.726807 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.734113 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-b5hzn" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.738486 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.804698 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 25 08:40:13 crc 
kubenswrapper[5043]: I1125 08:40:13.854986 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.901352 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.918990 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.959856 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.960291 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 25 08:40:13 crc kubenswrapper[5043]: I1125 08:40:13.962582 5043 scope.go:117] "RemoveContainer" containerID="a66b9b201d4d0bf19d72cd0aff83a2f14a28a6983ae1fc9474a2bf452a5cfb8b" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.000879 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.006964 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.009592 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.028752 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.034225 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-76vjz" Nov 25 08:40:14 crc 
kubenswrapper[5043]: I1125 08:40:14.057705 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.071354 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.083083 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.098037 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.125617 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.146272 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.212989 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.336804 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.338978 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.374370 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.406758 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 25 
08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.423917 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.460000 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.539830 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.556173 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.559764 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.560144 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.562308 5043 generic.go:334] "Generic (PLEG): container finished" podID="6411a018-19de-4fba-bf72-6dfd5bd2ce29" containerID="7235e34acf016c6d683c5a5806c60e1b17f761fa72d6445d8cd9beae51b0a556" exitCode=1 Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.562385 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fmplr" event={"ID":"6411a018-19de-4fba-bf72-6dfd5bd2ce29","Type":"ContainerDied","Data":"7235e34acf016c6d683c5a5806c60e1b17f761fa72d6445d8cd9beae51b0a556"} Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.562440 5043 scope.go:117] "RemoveContainer" containerID="2a5a51a743d574b33137d008b87eb81a93ebcc70a73b7deb50c7a924bfb17bc7" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.563424 5043 scope.go:117] "RemoveContainer" 
containerID="7235e34acf016c6d683c5a5806c60e1b17f761fa72d6445d8cd9beae51b0a556" Nov 25 08:40:14 crc kubenswrapper[5043]: E1125 08:40:14.563827 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-fmplr_openstack-operators(6411a018-19de-4fba-bf72-6dfd5bd2ce29)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fmplr" podUID="6411a018-19de-4fba-bf72-6dfd5bd2ce29" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.564901 5043 generic.go:334] "Generic (PLEG): container finished" podID="cdbab2e0-494c-4845-a500-88b26934f1c7" containerID="c9e7a68c6765d6d9e35a2010b6a2775845e5797cffe2e5001f9db756e6a00f50" exitCode=1 Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.564967 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" event={"ID":"cdbab2e0-494c-4845-a500-88b26934f1c7","Type":"ContainerDied","Data":"c9e7a68c6765d6d9e35a2010b6a2775845e5797cffe2e5001f9db756e6a00f50"} Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.566000 5043 scope.go:117] "RemoveContainer" containerID="c9e7a68c6765d6d9e35a2010b6a2775845e5797cffe2e5001f9db756e6a00f50" Nov 25 08:40:14 crc kubenswrapper[5043]: E1125 08:40:14.566239 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-85bdd6cc97-lrkkr_metallb-system(cdbab2e0-494c-4845-a500-88b26934f1c7)\"" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" podUID="cdbab2e0-494c-4845-a500-88b26934f1c7" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.635687 5043 scope.go:117] "RemoveContainer" containerID="a66b9b201d4d0bf19d72cd0aff83a2f14a28a6983ae1fc9474a2bf452a5cfb8b" Nov 
25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.686926 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.708354 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.730998 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.790398 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.808895 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.811497 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.863763 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.875502 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.900062 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.922578 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.947145 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-2qvs2" Nov 25 08:40:14 crc kubenswrapper[5043]: I1125 08:40:14.968995 5043 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.030975 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.055419 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.076141 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.125882 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.134523 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.158836 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.192028 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.193536 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.206580 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-4n5v4" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.217320 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 25 08:40:15 
crc kubenswrapper[5043]: I1125 08:40:15.223623 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.262009 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.276457 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.328072 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.360597 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.438666 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.466564 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.574312 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.575354 5043 scope.go:117] "RemoveContainer" containerID="088d46c362950d22f0bb9a8bf2ec55618b2fa18486bccd11bc3d6a186cfb0d39" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.590918 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 
08:40:15.592094 5043 scope.go:117] "RemoveContainer" containerID="418d63b3070820e3106cc63d9452c1c39a312d438143ef40226f1d5e0aa87622" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.593061 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.632932 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.632980 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.633847 5043 scope.go:117] "RemoveContainer" containerID="5b4464aa4cfca702fc03e8ece643f773f179357653d294f57ff570a3d1baa3c9" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.655964 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.656878 5043 scope.go:117] "RemoveContainer" containerID="51e626a6f95525624307e401f5fc76fbf6aaae3a2a28eefce2607b2d7b70b1a7" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.687471 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.688392 5043 scope.go:117] "RemoveContainer" containerID="2b1094a6d63348ca08ff2872cb76f6b03b0723cd906b7b376062d41f51dc2e38" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.691951 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.709856 5043 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.710676 5043 scope.go:117] "RemoveContainer" containerID="219b30f2d353e7768b4f2759650e5612797d882748b45af8eee5d449011683a6" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.773400 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.775308 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.776054 5043 scope.go:117] "RemoveContainer" containerID="2b8d31e717a217a52b718a38460cbc5c42b88e6a532e010c35a89516f1253629" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.777056 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.799300 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.800299 5043 scope.go:117] "RemoveContainer" containerID="e90f40567bb829ac55e91c9fe5e7c8a0910c4679ffb311eba4f2318a447ffb4b" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.809468 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.810334 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.832090 5043 scope.go:117] "RemoveContainer" 
containerID="20df5d09f6e301696a267ef62018a6daaa66457438569ca0c78de2b61d698b03" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.857854 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.862201 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.894128 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tdzr2" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.894890 5043 scope.go:117] "RemoveContainer" containerID="5be4d2c65ed07d83395c8339e2244c63ee1c304a151f28753806b4ca4254ac16" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.899278 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.922286 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.922300 5043 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-glst7" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.951274 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.965773 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 25 08:40:15 crc kubenswrapper[5043]: I1125 08:40:15.986249 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l5vz2" Nov 25 08:40:15 crc 
kubenswrapper[5043]: I1125 08:40:15.986931 5043 scope.go:117] "RemoveContainer" containerID="9c2bf3be8a679d4e661c93f2c49960b5e833ad34e77447afe5a38d7f06c93626" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.005955 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.006737 5043 scope.go:117] "RemoveContainer" containerID="5b1834faa1a607d3cf6c7ed86ce7ad541aac032a297a358bfa7c92e796635fd3" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.010677 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-2ggpd" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.011266 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.041863 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-wfqxk" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.046172 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-dxd2x" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.047097 5043 scope.go:117] "RemoveContainer" containerID="d36884295a82377865704102aa82a67dad68911fbfc9b53889367b5c37660321" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.051512 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.054122 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.054659 5043 scope.go:117] 
"RemoveContainer" containerID="e5c7582397dfb9c316ef33c3d3ed2942538bce06505f079fae351c74b56cc658" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.066496 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.097433 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.098321 5043 scope.go:117] "RemoveContainer" containerID="315598ad87877ef7a91d370f5eefa3dfb634bd2f70eb65f39487636965222df9" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.118242 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-5p6f4" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.123415 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-mk7wm" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.124535 5043 scope.go:117] "RemoveContainer" containerID="ceee2427bb2d7a063bcdb56c9ca4e152d5ac06d7e8ed4c750b2a0673d719cf67" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.174236 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/watcher-operator-controller-manager-864885998-54g5x" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.174977 5043 scope.go:117] "RemoveContainer" containerID="ff672824b0306addcc741e64411cd7b2199ee4541c1862397b479d2337a4a7eb" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.194105 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.194870 5043 scope.go:117] "RemoveContainer" 
containerID="b623b8f8e70abb46f80a58bb0910332852f3dcd2ed61a065369be4b1d02fd550" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.201655 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.239680 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.274028 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.289924 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.330561 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-5htmq" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.349690 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.378431 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.386740 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.387556 5043 scope.go:117] "RemoveContainer" containerID="2b6615190db3121e68c7a7cd0e99fed23a8bf05b101ec606c86a00e77c75c8a9" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.432780 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 25 08:40:16 crc kubenswrapper[5043]: 
I1125 08:40:16.446917 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.473470 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.561735 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.588808 5043 generic.go:334] "Generic (PLEG): container finished" podID="e5c62587-28b4-4a1e-8b73-ee9624ca7163" containerID="57960d4a1ce355a1267f014aa4b1d651e4a2e59d4c89930ddeefb026e11d198d" exitCode=1 Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.588863 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" event={"ID":"e5c62587-28b4-4a1e-8b73-ee9624ca7163","Type":"ContainerDied","Data":"57960d4a1ce355a1267f014aa4b1d651e4a2e59d4c89930ddeefb026e11d198d"} Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.588894 5043 scope.go:117] "RemoveContainer" containerID="51e626a6f95525624307e401f5fc76fbf6aaae3a2a28eefce2607b2d7b70b1a7" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.589595 5043 scope.go:117] "RemoveContainer" containerID="57960d4a1ce355a1267f014aa4b1d651e4a2e59d4c89930ddeefb026e11d198d" Nov 25 08:40:16 crc kubenswrapper[5043]: E1125 08:40:16.590073 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=glance-operator-controller-manager-68b95954c9-nnpzz_openstack-operators(e5c62587-28b4-4a1e-8b73-ee9624ca7163)\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" podUID="e5c62587-28b4-4a1e-8b73-ee9624ca7163" Nov 25 
08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.592847 5043 generic.go:334] "Generic (PLEG): container finished" podID="ff874d31-8e5a-4c0b-8f9c-e63513a00483" containerID="667784fc2ebb65a015db42adb0de7043f594062fbe029db178e186a660be9ff4" exitCode=1 Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.593160 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" event={"ID":"ff874d31-8e5a-4c0b-8f9c-e63513a00483","Type":"ContainerDied","Data":"667784fc2ebb65a015db42adb0de7043f594062fbe029db178e186a660be9ff4"} Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.593584 5043 scope.go:117] "RemoveContainer" containerID="667784fc2ebb65a015db42adb0de7043f594062fbe029db178e186a660be9ff4" Nov 25 08:40:16 crc kubenswrapper[5043]: E1125 08:40:16.593834 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-gvwj8_openstack-operators(ff874d31-8e5a-4c0b-8f9c-e63513a00483)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" podUID="ff874d31-8e5a-4c0b-8f9c-e63513a00483" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.596281 5043 generic.go:334] "Generic (PLEG): container finished" podID="d9a368e6-f4bb-4896-9a2d-f7ceed65e933" containerID="bdb456d5ed1e0052a434ec399ada73c55aff2d468a2de20332c2b20810ea6454" exitCode=1 Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.596299 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" event={"ID":"d9a368e6-f4bb-4896-9a2d-f7ceed65e933","Type":"ContainerDied","Data":"bdb456d5ed1e0052a434ec399ada73c55aff2d468a2de20332c2b20810ea6454"} Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.596643 5043 scope.go:117] "RemoveContainer" 
containerID="bdb456d5ed1e0052a434ec399ada73c55aff2d468a2de20332c2b20810ea6454" Nov 25 08:40:16 crc kubenswrapper[5043]: E1125 08:40:16.596870 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-dtcj4_openstack-operators(d9a368e6-f4bb-4896-9a2d-f7ceed65e933)\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" podUID="d9a368e6-f4bb-4896-9a2d-f7ceed65e933" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.600872 5043 generic.go:334] "Generic (PLEG): container finished" podID="e020a857-3730-44f5-8e98-3e59868fbde6" containerID="e0911531dc203ed99d2439dde2e6250a2c2a80bc2438482addb38f79a429e302" exitCode=1 Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.600955 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" event={"ID":"e020a857-3730-44f5-8e98-3e59868fbde6","Type":"ContainerDied","Data":"e0911531dc203ed99d2439dde2e6250a2c2a80bc2438482addb38f79a429e302"} Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.601379 5043 scope.go:117] "RemoveContainer" containerID="e0911531dc203ed99d2439dde2e6250a2c2a80bc2438482addb38f79a429e302" Nov 25 08:40:16 crc kubenswrapper[5043]: E1125 08:40:16.601641 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-5mp5h_openstack-operators(e020a857-3730-44f5-8e98-3e59868fbde6)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" podUID="e020a857-3730-44f5-8e98-3e59868fbde6" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.603907 5043 generic.go:334] "Generic (PLEG): container finished" 
podID="8a93d5b1-742c-4a37-94ef-a60ffb008520" containerID="d8436e4d166100ec637a616180ee63f0bc71223447501e942c26b8c14fa7148e" exitCode=1 Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.603980 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" event={"ID":"8a93d5b1-742c-4a37-94ef-a60ffb008520","Type":"ContainerDied","Data":"d8436e4d166100ec637a616180ee63f0bc71223447501e942c26b8c14fa7148e"} Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.604523 5043 scope.go:117] "RemoveContainer" containerID="d8436e4d166100ec637a616180ee63f0bc71223447501e942c26b8c14fa7148e" Nov 25 08:40:16 crc kubenswrapper[5043]: E1125 08:40:16.604834 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-l77gb_openstack-operators(8a93d5b1-742c-4a37-94ef-a60ffb008520)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" podUID="8a93d5b1-742c-4a37-94ef-a60ffb008520" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.610411 5043 generic.go:334] "Generic (PLEG): container finished" podID="c20803a7-e9a9-441a-9e61-84673f3c02e8" containerID="8897b33013bb79072d9ea8749d059b049dbbac21bf0f1bbf145e3295ad4fb8b4" exitCode=1 Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.610448 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" event={"ID":"c20803a7-e9a9-441a-9e61-84673f3c02e8","Type":"ContainerDied","Data":"8897b33013bb79072d9ea8749d059b049dbbac21bf0f1bbf145e3295ad4fb8b4"} Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.611210 5043 scope.go:117] "RemoveContainer" containerID="8897b33013bb79072d9ea8749d059b049dbbac21bf0f1bbf145e3295ad4fb8b4" Nov 25 08:40:16 crc kubenswrapper[5043]: E1125 08:40:16.611435 5043 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-wmkmw_openstack-operators(c20803a7-e9a9-441a-9e61-84673f3c02e8)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" podUID="c20803a7-e9a9-441a-9e61-84673f3c02e8" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.613118 5043 generic.go:334] "Generic (PLEG): container finished" podID="b7005e58-64d2-470b-a3e7-22b67b7fbfb3" containerID="95a4900d5af42dd9fe7c1617ad1964399420f63d1dde275ef7536d35340e4e16" exitCode=1 Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.613173 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" event={"ID":"b7005e58-64d2-470b-a3e7-22b67b7fbfb3","Type":"ContainerDied","Data":"95a4900d5af42dd9fe7c1617ad1964399420f63d1dde275ef7536d35340e4e16"} Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.613483 5043 scope.go:117] "RemoveContainer" containerID="95a4900d5af42dd9fe7c1617ad1964399420f63d1dde275ef7536d35340e4e16" Nov 25 08:40:16 crc kubenswrapper[5043]: E1125 08:40:16.613710 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-sgz96_openstack-operators(b7005e58-64d2-470b-a3e7-22b67b7fbfb3)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" podUID="b7005e58-64d2-470b-a3e7-22b67b7fbfb3" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.616286 5043 generic.go:334] "Generic (PLEG): container finished" podID="cdc9a1bf-b6d9-4a36-bcf8-55f87525da45" containerID="c02d906771b3894c0dfa1607907de4c043be1f794af90b0e41fa19b0fed7a608" exitCode=1 Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 
08:40:16.616336 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" event={"ID":"cdc9a1bf-b6d9-4a36-bcf8-55f87525da45","Type":"ContainerDied","Data":"c02d906771b3894c0dfa1607907de4c043be1f794af90b0e41fa19b0fed7a608"} Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.616722 5043 scope.go:117] "RemoveContainer" containerID="c02d906771b3894c0dfa1607907de4c043be1f794af90b0e41fa19b0fed7a608" Nov 25 08:40:16 crc kubenswrapper[5043]: E1125 08:40:16.616932 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-pnq4k_openstack-operators(cdc9a1bf-b6d9-4a36-bcf8-55f87525da45)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" podUID="cdc9a1bf-b6d9-4a36-bcf8-55f87525da45" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.619276 5043 generic.go:334] "Generic (PLEG): container finished" podID="c924fa47-53fb-4edc-8214-667ba1858ca2" containerID="c14df4fe99b085e2fc3303f85ec50867148f9864f75a443bf051a54ec4846b48" exitCode=1 Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.619320 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" event={"ID":"c924fa47-53fb-4edc-8214-667ba1858ca2","Type":"ContainerDied","Data":"c14df4fe99b085e2fc3303f85ec50867148f9864f75a443bf051a54ec4846b48"} Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.619672 5043 scope.go:117] "RemoveContainer" containerID="c14df4fe99b085e2fc3303f85ec50867148f9864f75a443bf051a54ec4846b48" Nov 25 08:40:16 crc kubenswrapper[5043]: E1125 08:40:16.619874 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager 
pod=manila-operator-controller-manager-58bb8d67cc-xx8rb_openstack-operators(c924fa47-53fb-4edc-8214-667ba1858ca2)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" podUID="c924fa47-53fb-4edc-8214-667ba1858ca2" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.636543 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.636762 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.686708 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rkxqg" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.725263 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.725988 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.780267 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.781542 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-l87tx" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.798660 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.827504 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.846571 5043 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.849247 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.877086 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.880402 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-8bdmn" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.913093 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.945148 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.956317 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 08:40:16 crc kubenswrapper[5043]: I1125 08:40:16.987715 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.007589 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.048312 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.102382 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-ghk4c" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.102563 5043 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"cert-manila-internal-svc" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.106151 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-89lz4" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.108311 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.113818 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.127142 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.127711 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.164465 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.167066 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.170216 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.210032 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.246083 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-hqx5n" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 
08:40:17.246362 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.284315 5043 scope.go:117] "RemoveContainer" containerID="e90f40567bb829ac55e91c9fe5e7c8a0910c4679ffb311eba4f2318a447ffb4b" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.393716 5043 scope.go:117] "RemoveContainer" containerID="088d46c362950d22f0bb9a8bf2ec55618b2fa18486bccd11bc3d6a186cfb0d39" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.401379 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.404105 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.464354 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.479996 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.493978 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.538827 5043 scope.go:117] "RemoveContainer" containerID="5b4464aa4cfca702fc03e8ece643f773f179357653d294f57ff570a3d1baa3c9" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.544451 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.563736 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.600666 5043 scope.go:117] "RemoveContainer" 
containerID="2b1094a6d63348ca08ff2872cb76f6b03b0723cd906b7b376062d41f51dc2e38" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.625925 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.633010 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.633156 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-cglcl" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.633253 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-dxd2x" event={"ID":"020c7247-0b68-419b-b97f-f7b0ea800142","Type":"ContainerStarted","Data":"6eb102e9427f9056dbaaae1a28c7045d53672c2e6cb008092b6ced8ddf7d557f"} Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.633435 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-dxd2x" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.639795 5043 scope.go:117] "RemoveContainer" containerID="219b30f2d353e7768b4f2759650e5612797d882748b45af8eee5d449011683a6" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.641717 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.649526 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.650721 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz" 
event={"ID":"a3d7b5dc-2ced-4ac6-bdad-cd86342616a8","Type":"ContainerStarted","Data":"b37e08a03c7fb0bca5fca3e1b6a9382e9fc6bc7ab43198f10cf345bb698b7ae8"} Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.650936 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.653639 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-54g5x" event={"ID":"17e00d26-c8ad-4dfd-90df-8705b2cb2bde","Type":"ContainerStarted","Data":"e1e68b4eb03614db28bf7a038614e66c41938d68af12cf7e76f5685172b5bf70"} Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.653895 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-54g5x" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.657021 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq" event={"ID":"d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2","Type":"ContainerStarted","Data":"e2570e4f7a35aa9067b088cb1194bf5ef9a315c88a3327687d0918ae42fda6d5"} Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.657247 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.669905 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db" event={"ID":"869f93a1-d6e7-46ff-a60f-0e997412a2fa","Type":"ContainerStarted","Data":"30048c74560f77c34790ac033e546bbc5ba83ffc9f5ce89c504aa760faff2ea7"} Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.670113 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.679132 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l5vz2" event={"ID":"bb800a2f-1864-47be-931b-7b99f7c7354f","Type":"ContainerStarted","Data":"b6cb606a7f053d7bc2dcb9acf99c78730365c903e574565f52968b6cbdb48039"} Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.679378 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l5vz2" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.682481 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-mk7wm" event={"ID":"d643e47d-246d-4551-a63c-9b9374e684b2","Type":"ContainerStarted","Data":"3919b538f4412622f2d30b10d35732f7cea3640e10a10a37b9e1ee6e32d70cbd"} Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.682759 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-mk7wm" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.690778 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tdzr2" event={"ID":"9c9e4471-0205-478a-8717-be36a19d2a02","Type":"ContainerStarted","Data":"a84878570ba1df7821df0d88f8c6b1d7be213d0724071ea067f9d3e2d16e0ef9"} Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.691112 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tdzr2" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.697693 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" 
event={"ID":"92e57762-522f-4a9d-8b03-732ba4dad5c1","Type":"ContainerStarted","Data":"ee2990038f84480bafa28f5ebb257898bea27d3d65f3c56c35b2a45bd2ce969b"} Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.697979 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.703801 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx" event={"ID":"8ea2c827-762f-437d-ad30-a3568d7a4af1","Type":"ContainerStarted","Data":"0500f663c52debf8ab104aee4c927f8e0fa042ba7a3fa318247c83b3c9f4daba"} Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.704250 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.704492 5043 scope.go:117] "RemoveContainer" containerID="c14df4fe99b085e2fc3303f85ec50867148f9864f75a443bf051a54ec4846b48" Nov 25 08:40:17 crc kubenswrapper[5043]: E1125 08:40:17.704775 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-xx8rb_openstack-operators(c924fa47-53fb-4edc-8214-667ba1858ca2)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" podUID="c924fa47-53fb-4edc-8214-667ba1858ca2" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.722480 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.725436 5043 scope.go:117] "RemoveContainer" containerID="2b8d31e717a217a52b718a38460cbc5c42b88e6a532e010c35a89516f1253629" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 
08:40:17.757210 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.793109 5043 scope.go:117] "RemoveContainer" containerID="418d63b3070820e3106cc63d9452c1c39a312d438143ef40226f1d5e0aa87622" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.795827 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.806827 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.871566 5043 scope.go:117] "RemoveContainer" containerID="20df5d09f6e301696a267ef62018a6daaa66457438569ca0c78de2b61d698b03" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.881409 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.960385 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.962885 5043 scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1" Nov 25 08:40:17 crc kubenswrapper[5043]: E1125 08:40:17.963211 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:40:17 crc kubenswrapper[5043]: I1125 08:40:17.981844 5043 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-xvhqf" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.007849 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.044485 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.086392 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.162764 5043 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-pjjr7" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.260166 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.276266 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.318503 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.318989 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-lvw85" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.372862 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.386910 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.430478 5043 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.444796 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.524475 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.547588 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.673120 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.674550 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.684005 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.692628 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.700016 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.722342 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.798472 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.814415 5043 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"dnsmasq-dns-dockercfg-tgg77" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.828566 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.858336 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-kxbhn" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.911968 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.926732 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-vd46s" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.948973 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 25 08:40:18 crc kubenswrapper[5043]: I1125 08:40:18.959878 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.008048 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.029816 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-86tg6" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.075286 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.076629 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.077759 
5043 scope.go:117] "RemoveContainer" containerID="c9e7a68c6765d6d9e35a2010b6a2775845e5797cffe2e5001f9db756e6a00f50" Nov 25 08:40:19 crc kubenswrapper[5043]: E1125 08:40:19.077990 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-85bdd6cc97-lrkkr_metallb-system(cdbab2e0-494c-4845-a500-88b26934f1c7)\"" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" podUID="cdbab2e0-494c-4845-a500-88b26934f1c7" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.079212 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.155536 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-wq2z6" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.175506 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.183868 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.184526 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.196142 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.217235 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.251674 
5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.275967 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.363226 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.371913 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.436578 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.462861 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-7tlqz" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.483407 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.541937 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.557331 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.561085 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.674552 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 
08:40:19.679508 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.683032 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.729948 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.739144 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.739354 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.746763 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.760901 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.774985 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.782595 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.804754 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.859363 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.886719 5043 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-6bt77" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.891653 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-qkf5c" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.943974 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-xhrcb" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.948408 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 25 08:40:19 crc kubenswrapper[5043]: I1125 08:40:19.980305 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.090223 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.091004 5043 scope.go:117] "RemoveContainer" containerID="7031ab5226215a70b04689e908f34d503377ffe0d9a02bde65e3a43a0b1197fb" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.109558 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.173174 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.174179 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-x47lt" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.191852 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 
25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.212007 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.263580 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.271213 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.277434 5043 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.294149 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.333853 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.362387 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.418734 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.435247 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.436058 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.444110 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 25 08:40:20 crc 
kubenswrapper[5043]: I1125 08:40:20.452829 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.456032 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-sn6cm" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.458764 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.459373 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.496838 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.508267 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.515461 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.553511 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.572866 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-2tk7t" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.616597 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.658404 5043 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.678978 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.706730 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.724219 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.727542 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.752741 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.756662 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-zqjvj" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.761389 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" event={"ID":"f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4","Type":"ContainerStarted","Data":"1c65b41b0a475b9bb15c65974d9c2104b0e44e106b2d9a98b3ea51744a5f560c"} Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.761628 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.777697 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 25 08:40:20 crc 
kubenswrapper[5043]: I1125 08:40:20.808058 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.818178 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.819801 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.823272 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.823862 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-47p9v" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.838646 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.856434 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.900189 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.972333 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 25 08:40:20 crc kubenswrapper[5043]: I1125 08:40:20.999376 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-2qjzv" Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.001591 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 25 08:40:21 
crc kubenswrapper[5043]: I1125 08:40:21.210004 5043 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.210491 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.266177 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.270778 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.324161 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.331955 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.455661 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.458594 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.464900 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.544547 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.588928 5043 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cert-cinder-internal-svc" Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.602554 5043 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.607048 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.613989 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=45.613968811 podStartE2EDuration="45.613968811s" podCreationTimestamp="2025-11-25 08:39:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:39:57.702297605 +0000 UTC m=+5061.870493336" watchObservedRunningTime="2025-11-25 08:40:21.613968811 +0000 UTC m=+5085.782164532" Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.615359 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.615399 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.615709 5043 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="26bc8613-79e6-42d4-b2ae-fe1c78a750fe" Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.615822 5043 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="26bc8613-79e6-42d4-b2ae-fe1c78a750fe" Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.621434 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.648711 5043 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=24.648690058 podStartE2EDuration="24.648690058s" podCreationTimestamp="2025-11-25 08:39:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 08:40:21.637533297 +0000 UTC m=+5085.805729028" watchObservedRunningTime="2025-11-25 08:40:21.648690058 +0000 UTC m=+5085.816885799" Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.712564 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.751798 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-5xhsx" Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.856692 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.925213 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.966645 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 25 08:40:21 crc kubenswrapper[5043]: I1125 08:40:21.973569 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 25 08:40:22 crc kubenswrapper[5043]: I1125 08:40:22.012146 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 25 08:40:22 crc kubenswrapper[5043]: I1125 08:40:22.030312 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 25 08:40:22 crc kubenswrapper[5043]: I1125 08:40:22.040478 5043 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 25 08:40:22 crc kubenswrapper[5043]: I1125 08:40:22.094287 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 25 08:40:22 crc kubenswrapper[5043]: I1125 08:40:22.107815 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 25 08:40:22 crc kubenswrapper[5043]: I1125 08:40:22.111452 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 08:40:22 crc kubenswrapper[5043]: I1125 08:40:22.231986 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 25 08:40:22 crc kubenswrapper[5043]: I1125 08:40:22.262983 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-k7ztd" Nov 25 08:40:22 crc kubenswrapper[5043]: I1125 08:40:22.384074 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 25 08:40:22 crc kubenswrapper[5043]: I1125 08:40:22.403991 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 25 08:40:22 crc kubenswrapper[5043]: I1125 08:40:22.447185 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 08:40:22 crc kubenswrapper[5043]: I1125 08:40:22.501143 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 08:40:22 crc kubenswrapper[5043]: I1125 08:40:22.529921 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 08:40:22 crc kubenswrapper[5043]: I1125 08:40:22.536146 5043 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2hkbq" Nov 25 08:40:22 crc kubenswrapper[5043]: I1125 08:40:22.621777 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 25 08:40:22 crc kubenswrapper[5043]: I1125 08:40:22.661267 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-smg7n" Nov 25 08:40:22 crc kubenswrapper[5043]: I1125 08:40:22.891589 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 08:40:22 crc kubenswrapper[5043]: I1125 08:40:22.899289 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 25 08:40:22 crc kubenswrapper[5043]: I1125 08:40:22.962956 5043 scope.go:117] "RemoveContainer" containerID="83d85ea001b26054663afb056e081c7222bf59af8cb5e346274ad3959f7fb60c" Nov 25 08:40:22 crc kubenswrapper[5043]: I1125 08:40:22.985330 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 25 08:40:23 crc kubenswrapper[5043]: I1125 08:40:23.035026 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 25 08:40:23 crc kubenswrapper[5043]: I1125 08:40:23.355191 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 25 08:40:23 crc kubenswrapper[5043]: I1125 08:40:23.369004 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 25 08:40:23 crc kubenswrapper[5043]: I1125 08:40:23.467511 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 25 08:40:23 crc kubenswrapper[5043]: 
I1125 08:40:23.493094 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 25 08:40:23 crc kubenswrapper[5043]: I1125 08:40:23.619672 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vvnvv" Nov 25 08:40:23 crc kubenswrapper[5043]: I1125 08:40:23.632255 5043 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 25 08:40:23 crc kubenswrapper[5043]: I1125 08:40:23.668403 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 25 08:40:23 crc kubenswrapper[5043]: I1125 08:40:23.681520 5043 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 25 08:40:23 crc kubenswrapper[5043]: I1125 08:40:23.708004 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 25 08:40:23 crc kubenswrapper[5043]: I1125 08:40:23.790253 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c937dff6-4203-455c-b07a-ec16e23c746f","Type":"ContainerStarted","Data":"2acb69ed09be3ea5584a74e8f9b94225c47161d9b829d7a2a95eaf4ecc28b186"} Nov 25 08:40:23 crc kubenswrapper[5043]: I1125 08:40:23.790470 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 08:40:23 crc kubenswrapper[5043]: I1125 08:40:23.821213 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 25 08:40:23 crc kubenswrapper[5043]: I1125 08:40:23.859297 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-2ppmm" Nov 25 08:40:23 crc kubenswrapper[5043]: I1125 08:40:23.878485 5043 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cert-galera-openstack-svc" Nov 25 08:40:23 crc kubenswrapper[5043]: I1125 08:40:23.946876 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Nov 25 08:40:23 crc kubenswrapper[5043]: I1125 08:40:23.979899 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 25 08:40:23 crc kubenswrapper[5043]: I1125 08:40:23.980093 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 08:40:23 crc kubenswrapper[5043]: I1125 08:40:23.995646 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 25 08:40:24 crc kubenswrapper[5043]: I1125 08:40:24.013883 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 25 08:40:24 crc kubenswrapper[5043]: I1125 08:40:24.082157 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 25 08:40:24 crc kubenswrapper[5043]: I1125 08:40:24.173150 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-vvvtn" Nov 25 08:40:24 crc kubenswrapper[5043]: I1125 08:40:24.180447 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 25 08:40:24 crc kubenswrapper[5043]: I1125 08:40:24.285089 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 25 08:40:24 crc kubenswrapper[5043]: I1125 08:40:24.289662 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 25 08:40:24 crc kubenswrapper[5043]: I1125 08:40:24.469831 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 25 08:40:24 crc 
kubenswrapper[5043]: I1125 08:40:24.972490 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 25 08:40:25 crc kubenswrapper[5043]: I1125 08:40:25.573896 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" Nov 25 08:40:25 crc kubenswrapper[5043]: I1125 08:40:25.574703 5043 scope.go:117] "RemoveContainer" containerID="bdb456d5ed1e0052a434ec399ada73c55aff2d468a2de20332c2b20810ea6454" Nov 25 08:40:25 crc kubenswrapper[5043]: E1125 08:40:25.575091 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-dtcj4_openstack-operators(d9a368e6-f4bb-4896-9a2d-f7ceed65e933)\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" podUID="d9a368e6-f4bb-4896-9a2d-f7ceed65e933" Nov 25 08:40:25 crc kubenswrapper[5043]: I1125 08:40:25.590802 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" Nov 25 08:40:25 crc kubenswrapper[5043]: I1125 08:40:25.591696 5043 scope.go:117] "RemoveContainer" containerID="c02d906771b3894c0dfa1607907de4c043be1f794af90b0e41fa19b0fed7a608" Nov 25 08:40:25 crc kubenswrapper[5043]: E1125 08:40:25.591972 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-pnq4k_openstack-operators(cdc9a1bf-b6d9-4a36-bcf8-55f87525da45)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" podUID="cdc9a1bf-b6d9-4a36-bcf8-55f87525da45" Nov 25 08:40:25 crc kubenswrapper[5043]: I1125 08:40:25.633773 5043 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" Nov 25 08:40:25 crc kubenswrapper[5043]: I1125 08:40:25.634491 5043 scope.go:117] "RemoveContainer" containerID="e0911531dc203ed99d2439dde2e6250a2c2a80bc2438482addb38f79a429e302" Nov 25 08:40:25 crc kubenswrapper[5043]: E1125 08:40:25.634736 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-5mp5h_openstack-operators(e020a857-3730-44f5-8e98-3e59868fbde6)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" podUID="e020a857-3730-44f5-8e98-3e59868fbde6" Nov 25 08:40:25 crc kubenswrapper[5043]: I1125 08:40:25.655566 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" Nov 25 08:40:25 crc kubenswrapper[5043]: I1125 08:40:25.656309 5043 scope.go:117] "RemoveContainer" containerID="57960d4a1ce355a1267f014aa4b1d651e4a2e59d4c89930ddeefb026e11d198d" Nov 25 08:40:25 crc kubenswrapper[5043]: E1125 08:40:25.656562 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=glance-operator-controller-manager-68b95954c9-nnpzz_openstack-operators(e5c62587-28b4-4a1e-8b73-ee9624ca7163)\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" podUID="e5c62587-28b4-4a1e-8b73-ee9624ca7163" Nov 25 08:40:25 crc kubenswrapper[5043]: I1125 08:40:25.686704 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" Nov 25 08:40:25 crc kubenswrapper[5043]: I1125 08:40:25.687756 5043 scope.go:117] "RemoveContainer" 
containerID="d8436e4d166100ec637a616180ee63f0bc71223447501e942c26b8c14fa7148e" Nov 25 08:40:25 crc kubenswrapper[5043]: E1125 08:40:25.688053 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-l77gb_openstack-operators(8a93d5b1-742c-4a37-94ef-a60ffb008520)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" podUID="8a93d5b1-742c-4a37-94ef-a60ffb008520" Nov 25 08:40:25 crc kubenswrapper[5043]: I1125 08:40:25.694724 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 25 08:40:25 crc kubenswrapper[5043]: I1125 08:40:25.710792 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" Nov 25 08:40:25 crc kubenswrapper[5043]: I1125 08:40:25.712616 5043 scope.go:117] "RemoveContainer" containerID="8897b33013bb79072d9ea8749d059b049dbbac21bf0f1bbf145e3295ad4fb8b4" Nov 25 08:40:25 crc kubenswrapper[5043]: E1125 08:40:25.712914 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-wmkmw_openstack-operators(c20803a7-e9a9-441a-9e61-84673f3c02e8)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" podUID="c20803a7-e9a9-441a-9e61-84673f3c02e8" Nov 25 08:40:25 crc kubenswrapper[5043]: I1125 08:40:25.738873 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 25 08:40:25 crc kubenswrapper[5043]: I1125 08:40:25.773563 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" Nov 25 
08:40:25 crc kubenswrapper[5043]: I1125 08:40:25.774460 5043 scope.go:117] "RemoveContainer" containerID="95a4900d5af42dd9fe7c1617ad1964399420f63d1dde275ef7536d35340e4e16" Nov 25 08:40:25 crc kubenswrapper[5043]: E1125 08:40:25.774759 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-sgz96_openstack-operators(b7005e58-64d2-470b-a3e7-22b67b7fbfb3)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" podUID="b7005e58-64d2-470b-a3e7-22b67b7fbfb3" Nov 25 08:40:25 crc kubenswrapper[5043]: I1125 08:40:25.800176 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" Nov 25 08:40:25 crc kubenswrapper[5043]: I1125 08:40:25.802480 5043 scope.go:117] "RemoveContainer" containerID="667784fc2ebb65a015db42adb0de7043f594062fbe029db178e186a660be9ff4" Nov 25 08:40:25 crc kubenswrapper[5043]: E1125 08:40:25.802819 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-gvwj8_openstack-operators(ff874d31-8e5a-4c0b-8f9c-e63513a00483)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" podUID="ff874d31-8e5a-4c0b-8f9c-e63513a00483" Nov 25 08:40:25 crc kubenswrapper[5043]: I1125 08:40:25.809750 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" Nov 25 08:40:25 crc kubenswrapper[5043]: I1125 08:40:25.810487 5043 scope.go:117] "RemoveContainer" containerID="c14df4fe99b085e2fc3303f85ec50867148f9864f75a443bf051a54ec4846b48" Nov 25 08:40:25 crc kubenswrapper[5043]: E1125 08:40:25.810730 5043 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-xx8rb_openstack-operators(c924fa47-53fb-4edc-8214-667ba1858ca2)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" podUID="c924fa47-53fb-4edc-8214-667ba1858ca2" Nov 25 08:40:25 crc kubenswrapper[5043]: I1125 08:40:25.896446 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tdzr2" Nov 25 08:40:25 crc kubenswrapper[5043]: I1125 08:40:25.949238 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 08:40:25 crc kubenswrapper[5043]: I1125 08:40:25.988012 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-l5vz2" Nov 25 08:40:26 crc kubenswrapper[5043]: I1125 08:40:26.008697 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-m9bmz" Nov 25 08:40:26 crc kubenswrapper[5043]: I1125 08:40:26.052087 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-dxd2x" Nov 25 08:40:26 crc kubenswrapper[5043]: I1125 08:40:26.067308 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-6w2db" Nov 25 08:40:26 crc kubenswrapper[5043]: I1125 08:40:26.109615 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-gmcsx" Nov 25 08:40:26 crc kubenswrapper[5043]: I1125 08:40:26.135796 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-mk7wm" Nov 25 08:40:26 crc kubenswrapper[5043]: I1125 08:40:26.174887 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-864885998-54g5x" Nov 25 08:40:26 crc kubenswrapper[5043]: I1125 08:40:26.201013 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-d5ffq" Nov 25 08:40:26 crc kubenswrapper[5043]: I1125 08:40:26.361374 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 25 08:40:26 crc kubenswrapper[5043]: I1125 08:40:26.392131 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-x8q8x" Nov 25 08:40:26 crc kubenswrapper[5043]: I1125 08:40:26.583731 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 25 08:40:26 crc kubenswrapper[5043]: I1125 08:40:26.854778 5043 generic.go:334] "Generic (PLEG): container finished" podID="94518a18-995b-490b-8099-917d5e510ad0" containerID="e6deaf803ff496f689fd7d56d692aacd97607b753ff10de2c9d5e526424c2900" exitCode=1 Nov 25 08:40:26 crc kubenswrapper[5043]: I1125 08:40:26.855643 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-9lxcm" event={"ID":"94518a18-995b-490b-8099-917d5e510ad0","Type":"ContainerDied","Data":"e6deaf803ff496f689fd7d56d692aacd97607b753ff10de2c9d5e526424c2900"} Nov 25 08:40:26 crc kubenswrapper[5043]: I1125 08:40:26.856460 5043 scope.go:117] "RemoveContainer" containerID="e6deaf803ff496f689fd7d56d692aacd97607b753ff10de2c9d5e526424c2900" Nov 25 08:40:27 crc kubenswrapper[5043]: I1125 08:40:27.866244 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-7f985d654d-9lxcm" event={"ID":"94518a18-995b-490b-8099-917d5e510ad0","Type":"ContainerStarted","Data":"3cb0dc9f6e78a77a5cac43a7320d3e8db6232222aa70f3a8e857a622dc0101a8"} Nov 25 08:40:27 crc kubenswrapper[5043]: I1125 08:40:27.964289 5043 scope.go:117] "RemoveContainer" containerID="7235e34acf016c6d683c5a5806c60e1b17f761fa72d6445d8cd9beae51b0a556" Nov 25 08:40:27 crc kubenswrapper[5043]: E1125 08:40:27.964597 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-fmplr_openstack-operators(6411a018-19de-4fba-bf72-6dfd5bd2ce29)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fmplr" podUID="6411a018-19de-4fba-bf72-6dfd5bd2ce29" Nov 25 08:40:29 crc kubenswrapper[5043]: I1125 08:40:29.393270 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 08:40:29 crc kubenswrapper[5043]: I1125 08:40:29.962517 5043 scope.go:117] "RemoveContainer" containerID="c9e7a68c6765d6d9e35a2010b6a2775845e5797cffe2e5001f9db756e6a00f50" Nov 25 08:40:29 crc kubenswrapper[5043]: I1125 08:40:29.962895 5043 scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1" Nov 25 08:40:29 crc kubenswrapper[5043]: E1125 08:40:29.963214 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-85bdd6cc97-lrkkr_metallb-system(cdbab2e0-494c-4845-a500-88b26934f1c7)\"" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" podUID="cdbab2e0-494c-4845-a500-88b26934f1c7" Nov 25 08:40:29 crc kubenswrapper[5043]: E1125 08:40:29.963218 5043 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:40:30 crc kubenswrapper[5043]: I1125 08:40:30.095523 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-5zklz" Nov 25 08:40:31 crc kubenswrapper[5043]: I1125 08:40:31.691046 5043 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 08:40:31 crc kubenswrapper[5043]: I1125 08:40:31.692703 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://8cdaf8855b52857ca11c3db0b21d1bb7428f1817f4566a29f63234cbe8abb5f8" gracePeriod=5 Nov 25 08:40:35 crc kubenswrapper[5043]: I1125 08:40:35.573769 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" Nov 25 08:40:35 crc kubenswrapper[5043]: I1125 08:40:35.574938 5043 scope.go:117] "RemoveContainer" containerID="bdb456d5ed1e0052a434ec399ada73c55aff2d468a2de20332c2b20810ea6454" Nov 25 08:40:35 crc kubenswrapper[5043]: E1125 08:40:35.575296 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-dtcj4_openstack-operators(d9a368e6-f4bb-4896-9a2d-f7ceed65e933)\"" 
pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" podUID="d9a368e6-f4bb-4896-9a2d-f7ceed65e933" Nov 25 08:40:35 crc kubenswrapper[5043]: I1125 08:40:35.591177 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" Nov 25 08:40:35 crc kubenswrapper[5043]: I1125 08:40:35.592067 5043 scope.go:117] "RemoveContainer" containerID="c02d906771b3894c0dfa1607907de4c043be1f794af90b0e41fa19b0fed7a608" Nov 25 08:40:35 crc kubenswrapper[5043]: E1125 08:40:35.592341 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-pnq4k_openstack-operators(cdc9a1bf-b6d9-4a36-bcf8-55f87525da45)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" podUID="cdc9a1bf-b6d9-4a36-bcf8-55f87525da45" Nov 25 08:40:35 crc kubenswrapper[5043]: I1125 08:40:35.638100 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" Nov 25 08:40:35 crc kubenswrapper[5043]: I1125 08:40:35.639214 5043 scope.go:117] "RemoveContainer" containerID="e0911531dc203ed99d2439dde2e6250a2c2a80bc2438482addb38f79a429e302" Nov 25 08:40:35 crc kubenswrapper[5043]: E1125 08:40:35.639661 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-5mp5h_openstack-operators(e020a857-3730-44f5-8e98-3e59868fbde6)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" podUID="e020a857-3730-44f5-8e98-3e59868fbde6" Nov 25 08:40:35 crc kubenswrapper[5043]: I1125 08:40:35.656096 5043 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" Nov 25 08:40:35 crc kubenswrapper[5043]: I1125 08:40:35.657060 5043 scope.go:117] "RemoveContainer" containerID="57960d4a1ce355a1267f014aa4b1d651e4a2e59d4c89930ddeefb026e11d198d" Nov 25 08:40:35 crc kubenswrapper[5043]: E1125 08:40:35.657476 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=glance-operator-controller-manager-68b95954c9-nnpzz_openstack-operators(e5c62587-28b4-4a1e-8b73-ee9624ca7163)\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" podUID="e5c62587-28b4-4a1e-8b73-ee9624ca7163" Nov 25 08:40:35 crc kubenswrapper[5043]: I1125 08:40:35.687592 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" Nov 25 08:40:35 crc kubenswrapper[5043]: I1125 08:40:35.688660 5043 scope.go:117] "RemoveContainer" containerID="d8436e4d166100ec637a616180ee63f0bc71223447501e942c26b8c14fa7148e" Nov 25 08:40:35 crc kubenswrapper[5043]: E1125 08:40:35.688980 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-l77gb_openstack-operators(8a93d5b1-742c-4a37-94ef-a60ffb008520)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" podUID="8a93d5b1-742c-4a37-94ef-a60ffb008520" Nov 25 08:40:35 crc kubenswrapper[5043]: I1125 08:40:35.709852 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" Nov 25 08:40:35 crc kubenswrapper[5043]: I1125 08:40:35.710658 5043 scope.go:117] "RemoveContainer" 
containerID="8897b33013bb79072d9ea8749d059b049dbbac21bf0f1bbf145e3295ad4fb8b4" Nov 25 08:40:35 crc kubenswrapper[5043]: E1125 08:40:35.710956 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-wmkmw_openstack-operators(c20803a7-e9a9-441a-9e61-84673f3c02e8)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" podUID="c20803a7-e9a9-441a-9e61-84673f3c02e8" Nov 25 08:40:35 crc kubenswrapper[5043]: I1125 08:40:35.773201 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" Nov 25 08:40:35 crc kubenswrapper[5043]: I1125 08:40:35.774074 5043 scope.go:117] "RemoveContainer" containerID="95a4900d5af42dd9fe7c1617ad1964399420f63d1dde275ef7536d35340e4e16" Nov 25 08:40:35 crc kubenswrapper[5043]: E1125 08:40:35.774387 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-sgz96_openstack-operators(b7005e58-64d2-470b-a3e7-22b67b7fbfb3)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" podUID="b7005e58-64d2-470b-a3e7-22b67b7fbfb3" Nov 25 08:40:35 crc kubenswrapper[5043]: I1125 08:40:35.799265 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" Nov 25 08:40:35 crc kubenswrapper[5043]: I1125 08:40:35.800236 5043 scope.go:117] "RemoveContainer" containerID="667784fc2ebb65a015db42adb0de7043f594062fbe029db178e186a660be9ff4" Nov 25 08:40:35 crc kubenswrapper[5043]: E1125 08:40:35.800544 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-gvwj8_openstack-operators(ff874d31-8e5a-4c0b-8f9c-e63513a00483)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" podUID="ff874d31-8e5a-4c0b-8f9c-e63513a00483" Nov 25 08:40:35 crc kubenswrapper[5043]: I1125 08:40:35.808954 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" Nov 25 08:40:35 crc kubenswrapper[5043]: I1125 08:40:35.809924 5043 scope.go:117] "RemoveContainer" containerID="c14df4fe99b085e2fc3303f85ec50867148f9864f75a443bf051a54ec4846b48" Nov 25 08:40:35 crc kubenswrapper[5043]: E1125 08:40:35.810237 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-xx8rb_openstack-operators(c924fa47-53fb-4edc-8214-667ba1858ca2)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" podUID="c924fa47-53fb-4edc-8214-667ba1858ca2" Nov 25 08:40:36 crc kubenswrapper[5043]: I1125 08:40:36.947894 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 08:40:36 crc kubenswrapper[5043]: I1125 08:40:36.948396 5043 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="8cdaf8855b52857ca11c3db0b21d1bb7428f1817f4566a29f63234cbe8abb5f8" exitCode=137 Nov 25 08:40:37 crc kubenswrapper[5043]: I1125 08:40:37.324775 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 08:40:37 crc kubenswrapper[5043]: 
I1125 08:40:37.324849 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 08:40:37 crc kubenswrapper[5043]: I1125 08:40:37.458663 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 08:40:37 crc kubenswrapper[5043]: I1125 08:40:37.458992 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 08:40:37 crc kubenswrapper[5043]: I1125 08:40:37.459106 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 08:40:37 crc kubenswrapper[5043]: I1125 08:40:37.459159 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 08:40:37 crc kubenswrapper[5043]: I1125 08:40:37.459245 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 08:40:37 crc kubenswrapper[5043]: I1125 08:40:37.459997 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:40:37 crc kubenswrapper[5043]: I1125 08:40:37.460070 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:40:37 crc kubenswrapper[5043]: I1125 08:40:37.460109 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:40:37 crc kubenswrapper[5043]: I1125 08:40:37.460894 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:40:37 crc kubenswrapper[5043]: I1125 08:40:37.467784 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 08:40:37 crc kubenswrapper[5043]: I1125 08:40:37.562212 5043 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 08:40:37 crc kubenswrapper[5043]: I1125 08:40:37.562546 5043 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 25 08:40:37 crc kubenswrapper[5043]: I1125 08:40:37.562559 5043 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 08:40:37 crc kubenswrapper[5043]: I1125 08:40:37.562572 5043 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 25 08:40:37 crc kubenswrapper[5043]: I1125 08:40:37.562585 5043 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 25 08:40:37 crc kubenswrapper[5043]: I1125 08:40:37.958378 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 08:40:37 crc kubenswrapper[5043]: I1125 08:40:37.958480 5043 scope.go:117] "RemoveContainer" containerID="8cdaf8855b52857ca11c3db0b21d1bb7428f1817f4566a29f63234cbe8abb5f8" Nov 25 08:40:37 crc kubenswrapper[5043]: I1125 08:40:37.958510 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 08:40:38 crc kubenswrapper[5043]: I1125 08:40:38.974377 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 25 08:40:38 crc kubenswrapper[5043]: I1125 08:40:38.975930 5043 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Nov 25 08:40:38 crc kubenswrapper[5043]: I1125 08:40:38.990523 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 08:40:38 crc kubenswrapper[5043]: I1125 08:40:38.990570 5043 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="d8bb8c11-bd91-4a81-bb53-f67518606731" Nov 25 08:40:39 crc kubenswrapper[5043]: I1125 08:40:39.000576 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 08:40:39 crc kubenswrapper[5043]: I1125 08:40:39.000644 5043 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="d8bb8c11-bd91-4a81-bb53-f67518606731" Nov 25 08:40:39 crc kubenswrapper[5043]: I1125 08:40:39.963511 5043 scope.go:117] "RemoveContainer" containerID="7235e34acf016c6d683c5a5806c60e1b17f761fa72d6445d8cd9beae51b0a556" Nov 25 08:40:40 crc kubenswrapper[5043]: I1125 08:40:40.994888 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fmplr" event={"ID":"6411a018-19de-4fba-bf72-6dfd5bd2ce29","Type":"ContainerStarted","Data":"e20db5834c4fecb431349c46d85617beea8674e1f11fbec015e759bf13cced27"} Nov 25 08:40:43 crc kubenswrapper[5043]: I1125 08:40:43.963468 5043 
scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1" Nov 25 08:40:43 crc kubenswrapper[5043]: E1125 08:40:43.964349 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:40:44 crc kubenswrapper[5043]: I1125 08:40:44.962357 5043 scope.go:117] "RemoveContainer" containerID="c9e7a68c6765d6d9e35a2010b6a2775845e5797cffe2e5001f9db756e6a00f50" Nov 25 08:40:45 crc kubenswrapper[5043]: I1125 08:40:45.963239 5043 scope.go:117] "RemoveContainer" containerID="c02d906771b3894c0dfa1607907de4c043be1f794af90b0e41fa19b0fed7a608" Nov 25 08:40:46 crc kubenswrapper[5043]: I1125 08:40:46.056463 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" event={"ID":"cdbab2e0-494c-4845-a500-88b26934f1c7","Type":"ContainerStarted","Data":"0b8c8cf040f41f6c528fd644d946c36809ce99c71a8d38c81dada5dee023723d"} Nov 25 08:40:46 crc kubenswrapper[5043]: I1125 08:40:46.056929 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" Nov 25 08:40:46 crc kubenswrapper[5043]: I1125 08:40:46.970147 5043 scope.go:117] "RemoveContainer" containerID="e0911531dc203ed99d2439dde2e6250a2c2a80bc2438482addb38f79a429e302" Nov 25 08:40:47 crc kubenswrapper[5043]: I1125 08:40:47.077030 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" 
event={"ID":"cdc9a1bf-b6d9-4a36-bcf8-55f87525da45","Type":"ContainerStarted","Data":"c31f335afbed60ba91f797a6c17f1a3fcd11140b19c2ac032cf755bf60d6b01c"} Nov 25 08:40:47 crc kubenswrapper[5043]: I1125 08:40:47.077474 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" Nov 25 08:40:47 crc kubenswrapper[5043]: I1125 08:40:47.963482 5043 scope.go:117] "RemoveContainer" containerID="d8436e4d166100ec637a616180ee63f0bc71223447501e942c26b8c14fa7148e" Nov 25 08:40:47 crc kubenswrapper[5043]: I1125 08:40:47.964232 5043 scope.go:117] "RemoveContainer" containerID="667784fc2ebb65a015db42adb0de7043f594062fbe029db178e186a660be9ff4" Nov 25 08:40:48 crc kubenswrapper[5043]: I1125 08:40:48.089811 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" event={"ID":"e020a857-3730-44f5-8e98-3e59868fbde6","Type":"ContainerStarted","Data":"594326d1af441a7904042a4cd26372b28458df12116091bd49344e6fb5595c84"} Nov 25 08:40:48 crc kubenswrapper[5043]: I1125 08:40:48.090311 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" Nov 25 08:40:48 crc kubenswrapper[5043]: I1125 08:40:48.962917 5043 scope.go:117] "RemoveContainer" containerID="95a4900d5af42dd9fe7c1617ad1964399420f63d1dde275ef7536d35340e4e16" Nov 25 08:40:48 crc kubenswrapper[5043]: I1125 08:40:48.965464 5043 scope.go:117] "RemoveContainer" containerID="8897b33013bb79072d9ea8749d059b049dbbac21bf0f1bbf145e3295ad4fb8b4" Nov 25 08:40:49 crc kubenswrapper[5043]: I1125 08:40:49.099586 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" event={"ID":"ff874d31-8e5a-4c0b-8f9c-e63513a00483","Type":"ContainerStarted","Data":"7f14e2d19933f5c2bb97d6c52b260f0835f62e4c1e616f9e1e05fbd6bc17db57"} Nov 25 
08:40:49 crc kubenswrapper[5043]: I1125 08:40:49.099823 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" Nov 25 08:40:49 crc kubenswrapper[5043]: I1125 08:40:49.102806 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" event={"ID":"8a93d5b1-742c-4a37-94ef-a60ffb008520","Type":"ContainerStarted","Data":"84afc121781295826ad9727516a73bd7f0fdbaca25787cad99949eef7d87d2ee"} Nov 25 08:40:49 crc kubenswrapper[5043]: I1125 08:40:49.103492 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" Nov 25 08:40:49 crc kubenswrapper[5043]: I1125 08:40:49.963011 5043 scope.go:117] "RemoveContainer" containerID="c14df4fe99b085e2fc3303f85ec50867148f9864f75a443bf051a54ec4846b48" Nov 25 08:40:49 crc kubenswrapper[5043]: I1125 08:40:49.963382 5043 scope.go:117] "RemoveContainer" containerID="57960d4a1ce355a1267f014aa4b1d651e4a2e59d4c89930ddeefb026e11d198d" Nov 25 08:40:50 crc kubenswrapper[5043]: I1125 08:40:50.115697 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" event={"ID":"b7005e58-64d2-470b-a3e7-22b67b7fbfb3","Type":"ContainerStarted","Data":"6c48520605c162db803542f37716a7d706eee356236e1b2b7e3aabdacf75a5e6"} Nov 25 08:40:50 crc kubenswrapper[5043]: I1125 08:40:50.116008 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" Nov 25 08:40:50 crc kubenswrapper[5043]: I1125 08:40:50.120067 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" 
event={"ID":"c20803a7-e9a9-441a-9e61-84673f3c02e8","Type":"ContainerStarted","Data":"ae5d4469af09b8c299797d1a0d5b9007a85e07c99966596194279301a985ca28"} Nov 25 08:40:50 crc kubenswrapper[5043]: I1125 08:40:50.120454 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" Nov 25 08:40:50 crc kubenswrapper[5043]: I1125 08:40:50.964700 5043 scope.go:117] "RemoveContainer" containerID="bdb456d5ed1e0052a434ec399ada73c55aff2d468a2de20332c2b20810ea6454" Nov 25 08:40:51 crc kubenswrapper[5043]: I1125 08:40:51.132046 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" event={"ID":"e5c62587-28b4-4a1e-8b73-ee9624ca7163","Type":"ContainerStarted","Data":"49d36c9339e60144d3705c7e0912491833267704e3e2baf8223e8dabcd7fc32b"} Nov 25 08:40:51 crc kubenswrapper[5043]: I1125 08:40:51.132296 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" Nov 25 08:40:51 crc kubenswrapper[5043]: I1125 08:40:51.140807 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" event={"ID":"c924fa47-53fb-4edc-8214-667ba1858ca2","Type":"ContainerStarted","Data":"d8dfe2c24836ec70ad3d68452c4f0c58987c7213934343ec5c413bc2161bf750"} Nov 25 08:40:52 crc kubenswrapper[5043]: I1125 08:40:52.153049 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" event={"ID":"d9a368e6-f4bb-4896-9a2d-f7ceed65e933","Type":"ContainerStarted","Data":"fe22b1e608514872f56f56f9febeeaaac0149b33437c3b9ed1ccb987ab684580"} Nov 25 08:40:52 crc kubenswrapper[5043]: I1125 08:40:52.153946 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" Nov 25 08:40:55 crc kubenswrapper[5043]: I1125 08:40:55.593589 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-pnq4k" Nov 25 08:40:55 crc kubenswrapper[5043]: I1125 08:40:55.637147 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-5mp5h" Nov 25 08:40:55 crc kubenswrapper[5043]: I1125 08:40:55.667621 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-nnpzz" Nov 25 08:40:55 crc kubenswrapper[5043]: I1125 08:40:55.697949 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-l77gb" Nov 25 08:40:55 crc kubenswrapper[5043]: I1125 08:40:55.714369 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-wmkmw" Nov 25 08:40:55 crc kubenswrapper[5043]: I1125 08:40:55.796204 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sgz96" Nov 25 08:40:55 crc kubenswrapper[5043]: I1125 08:40:55.811145 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" Nov 25 08:40:55 crc kubenswrapper[5043]: I1125 08:40:55.813399 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-gvwj8" Nov 25 08:40:55 crc kubenswrapper[5043]: I1125 08:40:55.816334 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xx8rb" Nov 25 08:40:56 crc kubenswrapper[5043]: I1125 08:40:56.068510 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d2nqf"] Nov 25 08:40:56 crc kubenswrapper[5043]: E1125 08:40:56.069387 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 08:40:56 crc kubenswrapper[5043]: I1125 08:40:56.069405 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 08:40:56 crc kubenswrapper[5043]: E1125 08:40:56.069445 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c" containerName="installer" Nov 25 08:40:56 crc kubenswrapper[5043]: I1125 08:40:56.069452 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c" containerName="installer" Nov 25 08:40:56 crc kubenswrapper[5043]: I1125 08:40:56.069643 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 08:40:56 crc kubenswrapper[5043]: I1125 08:40:56.069668 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7008eaa-f7ad-4e0b-9818-c0cb27cdf50c" containerName="installer" Nov 25 08:40:56 crc kubenswrapper[5043]: I1125 08:40:56.071056 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d2nqf" Nov 25 08:40:56 crc kubenswrapper[5043]: I1125 08:40:56.096355 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d2nqf"] Nov 25 08:40:56 crc kubenswrapper[5043]: I1125 08:40:56.146988 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2cc188c-3a39-4866-85dd-d34de56fb8f6-catalog-content\") pod \"certified-operators-d2nqf\" (UID: \"e2cc188c-3a39-4866-85dd-d34de56fb8f6\") " pod="openshift-marketplace/certified-operators-d2nqf" Nov 25 08:40:56 crc kubenswrapper[5043]: I1125 08:40:56.147053 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk5tt\" (UniqueName: \"kubernetes.io/projected/e2cc188c-3a39-4866-85dd-d34de56fb8f6-kube-api-access-tk5tt\") pod \"certified-operators-d2nqf\" (UID: \"e2cc188c-3a39-4866-85dd-d34de56fb8f6\") " pod="openshift-marketplace/certified-operators-d2nqf" Nov 25 08:40:56 crc kubenswrapper[5043]: I1125 08:40:56.147243 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2cc188c-3a39-4866-85dd-d34de56fb8f6-utilities\") pod \"certified-operators-d2nqf\" (UID: \"e2cc188c-3a39-4866-85dd-d34de56fb8f6\") " pod="openshift-marketplace/certified-operators-d2nqf" Nov 25 08:40:56 crc kubenswrapper[5043]: I1125 08:40:56.249582 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2cc188c-3a39-4866-85dd-d34de56fb8f6-catalog-content\") pod \"certified-operators-d2nqf\" (UID: \"e2cc188c-3a39-4866-85dd-d34de56fb8f6\") " pod="openshift-marketplace/certified-operators-d2nqf" Nov 25 08:40:56 crc kubenswrapper[5043]: I1125 08:40:56.249665 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tk5tt\" (UniqueName: \"kubernetes.io/projected/e2cc188c-3a39-4866-85dd-d34de56fb8f6-kube-api-access-tk5tt\") pod \"certified-operators-d2nqf\" (UID: \"e2cc188c-3a39-4866-85dd-d34de56fb8f6\") " pod="openshift-marketplace/certified-operators-d2nqf" Nov 25 08:40:56 crc kubenswrapper[5043]: I1125 08:40:56.249818 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2cc188c-3a39-4866-85dd-d34de56fb8f6-utilities\") pod \"certified-operators-d2nqf\" (UID: \"e2cc188c-3a39-4866-85dd-d34de56fb8f6\") " pod="openshift-marketplace/certified-operators-d2nqf" Nov 25 08:40:56 crc kubenswrapper[5043]: I1125 08:40:56.250176 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2cc188c-3a39-4866-85dd-d34de56fb8f6-catalog-content\") pod \"certified-operators-d2nqf\" (UID: \"e2cc188c-3a39-4866-85dd-d34de56fb8f6\") " pod="openshift-marketplace/certified-operators-d2nqf" Nov 25 08:40:56 crc kubenswrapper[5043]: I1125 08:40:56.250271 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2cc188c-3a39-4866-85dd-d34de56fb8f6-utilities\") pod \"certified-operators-d2nqf\" (UID: \"e2cc188c-3a39-4866-85dd-d34de56fb8f6\") " pod="openshift-marketplace/certified-operators-d2nqf" Nov 25 08:40:56 crc kubenswrapper[5043]: I1125 08:40:56.283469 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk5tt\" (UniqueName: \"kubernetes.io/projected/e2cc188c-3a39-4866-85dd-d34de56fb8f6-kube-api-access-tk5tt\") pod \"certified-operators-d2nqf\" (UID: \"e2cc188c-3a39-4866-85dd-d34de56fb8f6\") " pod="openshift-marketplace/certified-operators-d2nqf" Nov 25 08:40:56 crc kubenswrapper[5043]: I1125 08:40:56.400710 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d2nqf" Nov 25 08:40:57 crc kubenswrapper[5043]: I1125 08:40:57.111186 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d2nqf"] Nov 25 08:40:57 crc kubenswrapper[5043]: I1125 08:40:57.212121 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2nqf" event={"ID":"e2cc188c-3a39-4866-85dd-d34de56fb8f6","Type":"ContainerStarted","Data":"a7c42d89a82e13fd5544c142602c9d738ae1336a0af1986dc2ce0adb5646873c"} Nov 25 08:40:57 crc kubenswrapper[5043]: I1125 08:40:57.964066 5043 scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1" Nov 25 08:40:57 crc kubenswrapper[5043]: E1125 08:40:57.964646 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:40:58 crc kubenswrapper[5043]: I1125 08:40:58.225834 5043 generic.go:334] "Generic (PLEG): container finished" podID="e2cc188c-3a39-4866-85dd-d34de56fb8f6" containerID="e2ae7fb94e96eaa0b191a336db2ab17cd04b0fed191c7b920a553c35b709dd96" exitCode=0 Nov 25 08:40:58 crc kubenswrapper[5043]: I1125 08:40:58.225883 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2nqf" event={"ID":"e2cc188c-3a39-4866-85dd-d34de56fb8f6","Type":"ContainerDied","Data":"e2ae7fb94e96eaa0b191a336db2ab17cd04b0fed191c7b920a553c35b709dd96"} Nov 25 08:40:59 crc kubenswrapper[5043]: I1125 08:40:59.238150 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2nqf" 
event={"ID":"e2cc188c-3a39-4866-85dd-d34de56fb8f6","Type":"ContainerStarted","Data":"59cedd31bc371a7d8b246a1cb7a03c2c52a7f2c1cff07feebc93605c54bfb977"} Nov 25 08:41:00 crc kubenswrapper[5043]: I1125 08:41:00.249105 5043 generic.go:334] "Generic (PLEG): container finished" podID="e2cc188c-3a39-4866-85dd-d34de56fb8f6" containerID="59cedd31bc371a7d8b246a1cb7a03c2c52a7f2c1cff07feebc93605c54bfb977" exitCode=0 Nov 25 08:41:00 crc kubenswrapper[5043]: I1125 08:41:00.249304 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2nqf" event={"ID":"e2cc188c-3a39-4866-85dd-d34de56fb8f6","Type":"ContainerDied","Data":"59cedd31bc371a7d8b246a1cb7a03c2c52a7f2c1cff07feebc93605c54bfb977"} Nov 25 08:41:01 crc kubenswrapper[5043]: I1125 08:41:01.260619 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2nqf" event={"ID":"e2cc188c-3a39-4866-85dd-d34de56fb8f6","Type":"ContainerStarted","Data":"c7019c5300df24961fd1d5f5d1935d23d1d7d598bd1bc7efd63bef93559e0f4c"} Nov 25 08:41:01 crc kubenswrapper[5043]: I1125 08:41:01.284753 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d2nqf" podStartSLOduration=2.889873197 podStartE2EDuration="5.284731512s" podCreationTimestamp="2025-11-25 08:40:56 +0000 UTC" firstStartedPulling="2025-11-25 08:40:58.228012539 +0000 UTC m=+5122.396208250" lastFinishedPulling="2025-11-25 08:41:00.622870844 +0000 UTC m=+5124.791066565" observedRunningTime="2025-11-25 08:41:01.275240786 +0000 UTC m=+5125.443436507" watchObservedRunningTime="2025-11-25 08:41:01.284731512 +0000 UTC m=+5125.452927233" Nov 25 08:41:05 crc kubenswrapper[5043]: I1125 08:41:05.577924 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-dtcj4" Nov 25 08:41:06 crc kubenswrapper[5043]: I1125 08:41:06.400903 5043 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d2nqf" Nov 25 08:41:06 crc kubenswrapper[5043]: I1125 08:41:06.401048 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d2nqf" Nov 25 08:41:06 crc kubenswrapper[5043]: I1125 08:41:06.458949 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d2nqf" Nov 25 08:41:07 crc kubenswrapper[5043]: I1125 08:41:07.376944 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d2nqf" Nov 25 08:41:11 crc kubenswrapper[5043]: I1125 08:41:11.964636 5043 scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1" Nov 25 08:41:11 crc kubenswrapper[5043]: E1125 08:41:11.965361 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.109260 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6kbt7"] Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.111759 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6kbt7" Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.134409 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6kbt7"] Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.165351 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6mv4\" (UniqueName: \"kubernetes.io/projected/30d87bfd-b6ac-45f1-b9c5-e39453d0eca9-kube-api-access-b6mv4\") pod \"certified-operators-6kbt7\" (UID: \"30d87bfd-b6ac-45f1-b9c5-e39453d0eca9\") " pod="openshift-marketplace/certified-operators-6kbt7" Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.165960 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30d87bfd-b6ac-45f1-b9c5-e39453d0eca9-utilities\") pod \"certified-operators-6kbt7\" (UID: \"30d87bfd-b6ac-45f1-b9c5-e39453d0eca9\") " pod="openshift-marketplace/certified-operators-6kbt7" Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.166047 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30d87bfd-b6ac-45f1-b9c5-e39453d0eca9-catalog-content\") pod \"certified-operators-6kbt7\" (UID: \"30d87bfd-b6ac-45f1-b9c5-e39453d0eca9\") " pod="openshift-marketplace/certified-operators-6kbt7" Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.267758 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30d87bfd-b6ac-45f1-b9c5-e39453d0eca9-utilities\") pod \"certified-operators-6kbt7\" (UID: \"30d87bfd-b6ac-45f1-b9c5-e39453d0eca9\") " pod="openshift-marketplace/certified-operators-6kbt7" Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.267829 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30d87bfd-b6ac-45f1-b9c5-e39453d0eca9-catalog-content\") pod \"certified-operators-6kbt7\" (UID: \"30d87bfd-b6ac-45f1-b9c5-e39453d0eca9\") " pod="openshift-marketplace/certified-operators-6kbt7" Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.267898 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6mv4\" (UniqueName: \"kubernetes.io/projected/30d87bfd-b6ac-45f1-b9c5-e39453d0eca9-kube-api-access-b6mv4\") pod \"certified-operators-6kbt7\" (UID: \"30d87bfd-b6ac-45f1-b9c5-e39453d0eca9\") " pod="openshift-marketplace/certified-operators-6kbt7" Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.268465 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30d87bfd-b6ac-45f1-b9c5-e39453d0eca9-utilities\") pod \"certified-operators-6kbt7\" (UID: \"30d87bfd-b6ac-45f1-b9c5-e39453d0eca9\") " pod="openshift-marketplace/certified-operators-6kbt7" Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.268594 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30d87bfd-b6ac-45f1-b9c5-e39453d0eca9-catalog-content\") pod \"certified-operators-6kbt7\" (UID: \"30d87bfd-b6ac-45f1-b9c5-e39453d0eca9\") " pod="openshift-marketplace/certified-operators-6kbt7" Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.308187 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6mv4\" (UniqueName: \"kubernetes.io/projected/30d87bfd-b6ac-45f1-b9c5-e39453d0eca9-kube-api-access-b6mv4\") pod \"certified-operators-6kbt7\" (UID: \"30d87bfd-b6ac-45f1-b9c5-e39453d0eca9\") " pod="openshift-marketplace/certified-operators-6kbt7" Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.446803 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6kbt7" Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.503843 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-556m5"] Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.505834 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-556m5" Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.571671 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-556m5"] Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.574823 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn88p\" (UniqueName: \"kubernetes.io/projected/a92480af-246f-4335-a619-2a0f2aba7934-kube-api-access-gn88p\") pod \"certified-operators-556m5\" (UID: \"a92480af-246f-4335-a619-2a0f2aba7934\") " pod="openshift-marketplace/certified-operators-556m5" Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.574870 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a92480af-246f-4335-a619-2a0f2aba7934-catalog-content\") pod \"certified-operators-556m5\" (UID: \"a92480af-246f-4335-a619-2a0f2aba7934\") " pod="openshift-marketplace/certified-operators-556m5" Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.574904 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a92480af-246f-4335-a619-2a0f2aba7934-utilities\") pod \"certified-operators-556m5\" (UID: \"a92480af-246f-4335-a619-2a0f2aba7934\") " pod="openshift-marketplace/certified-operators-556m5" Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.678046 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a92480af-246f-4335-a619-2a0f2aba7934-utilities\") pod \"certified-operators-556m5\" (UID: \"a92480af-246f-4335-a619-2a0f2aba7934\") " pod="openshift-marketplace/certified-operators-556m5" Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.679184 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn88p\" (UniqueName: \"kubernetes.io/projected/a92480af-246f-4335-a619-2a0f2aba7934-kube-api-access-gn88p\") pod \"certified-operators-556m5\" (UID: \"a92480af-246f-4335-a619-2a0f2aba7934\") " pod="openshift-marketplace/certified-operators-556m5" Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.679255 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a92480af-246f-4335-a619-2a0f2aba7934-catalog-content\") pod \"certified-operators-556m5\" (UID: \"a92480af-246f-4335-a619-2a0f2aba7934\") " pod="openshift-marketplace/certified-operators-556m5" Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.680059 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a92480af-246f-4335-a619-2a0f2aba7934-utilities\") pod \"certified-operators-556m5\" (UID: \"a92480af-246f-4335-a619-2a0f2aba7934\") " pod="openshift-marketplace/certified-operators-556m5" Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.680101 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a92480af-246f-4335-a619-2a0f2aba7934-catalog-content\") pod \"certified-operators-556m5\" (UID: \"a92480af-246f-4335-a619-2a0f2aba7934\") " pod="openshift-marketplace/certified-operators-556m5" Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.700309 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn88p\" (UniqueName: 
\"kubernetes.io/projected/a92480af-246f-4335-a619-2a0f2aba7934-kube-api-access-gn88p\") pod \"certified-operators-556m5\" (UID: \"a92480af-246f-4335-a619-2a0f2aba7934\") " pod="openshift-marketplace/certified-operators-556m5" Nov 25 08:41:16 crc kubenswrapper[5043]: I1125 08:41:16.913724 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-556m5" Nov 25 08:41:17 crc kubenswrapper[5043]: I1125 08:41:17.000673 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6kbt7"] Nov 25 08:41:17 crc kubenswrapper[5043]: W1125 08:41:17.001064 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30d87bfd_b6ac_45f1_b9c5_e39453d0eca9.slice/crio-8678af2e0a130abfbc3db3d0807dffb4a502e9b8d8ed10f80bfa8568036a2039 WatchSource:0}: Error finding container 8678af2e0a130abfbc3db3d0807dffb4a502e9b8d8ed10f80bfa8568036a2039: Status 404 returned error can't find the container with id 8678af2e0a130abfbc3db3d0807dffb4a502e9b8d8ed10f80bfa8568036a2039 Nov 25 08:41:17 crc kubenswrapper[5043]: I1125 08:41:17.440829 5043 generic.go:334] "Generic (PLEG): container finished" podID="30d87bfd-b6ac-45f1-b9c5-e39453d0eca9" containerID="82d9b8ace34401784056ae17a630206c3d7e0ffcedeff60b9d34a8de98977cef" exitCode=0 Nov 25 08:41:17 crc kubenswrapper[5043]: I1125 08:41:17.441038 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kbt7" event={"ID":"30d87bfd-b6ac-45f1-b9c5-e39453d0eca9","Type":"ContainerDied","Data":"82d9b8ace34401784056ae17a630206c3d7e0ffcedeff60b9d34a8de98977cef"} Nov 25 08:41:17 crc kubenswrapper[5043]: I1125 08:41:17.441062 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kbt7" 
event={"ID":"30d87bfd-b6ac-45f1-b9c5-e39453d0eca9","Type":"ContainerStarted","Data":"8678af2e0a130abfbc3db3d0807dffb4a502e9b8d8ed10f80bfa8568036a2039"} Nov 25 08:41:17 crc kubenswrapper[5043]: I1125 08:41:17.441192 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-556m5"] Nov 25 08:41:17 crc kubenswrapper[5043]: W1125 08:41:17.447221 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda92480af_246f_4335_a619_2a0f2aba7934.slice/crio-c2188fd603c517bf4cfd87e4b7b897d25862ecf5ba6c0d167f714620fe1b0347 WatchSource:0}: Error finding container c2188fd603c517bf4cfd87e4b7b897d25862ecf5ba6c0d167f714620fe1b0347: Status 404 returned error can't find the container with id c2188fd603c517bf4cfd87e4b7b897d25862ecf5ba6c0d167f714620fe1b0347 Nov 25 08:41:17 crc kubenswrapper[5043]: I1125 08:41:17.704000 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kvswk"] Nov 25 08:41:17 crc kubenswrapper[5043]: I1125 08:41:17.706951 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kvswk" Nov 25 08:41:17 crc kubenswrapper[5043]: I1125 08:41:17.720880 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kvswk"] Nov 25 08:41:17 crc kubenswrapper[5043]: I1125 08:41:17.810409 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21-utilities\") pod \"certified-operators-kvswk\" (UID: \"b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21\") " pod="openshift-marketplace/certified-operators-kvswk" Nov 25 08:41:17 crc kubenswrapper[5043]: I1125 08:41:17.810514 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgwgj\" (UniqueName: \"kubernetes.io/projected/b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21-kube-api-access-rgwgj\") pod \"certified-operators-kvswk\" (UID: \"b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21\") " pod="openshift-marketplace/certified-operators-kvswk" Nov 25 08:41:17 crc kubenswrapper[5043]: I1125 08:41:17.810548 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21-catalog-content\") pod \"certified-operators-kvswk\" (UID: \"b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21\") " pod="openshift-marketplace/certified-operators-kvswk" Nov 25 08:41:17 crc kubenswrapper[5043]: I1125 08:41:17.912998 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgwgj\" (UniqueName: \"kubernetes.io/projected/b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21-kube-api-access-rgwgj\") pod \"certified-operators-kvswk\" (UID: \"b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21\") " pod="openshift-marketplace/certified-operators-kvswk" Nov 25 08:41:17 crc kubenswrapper[5043]: I1125 08:41:17.913046 5043 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21-catalog-content\") pod \"certified-operators-kvswk\" (UID: \"b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21\") " pod="openshift-marketplace/certified-operators-kvswk" Nov 25 08:41:17 crc kubenswrapper[5043]: I1125 08:41:17.913201 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21-utilities\") pod \"certified-operators-kvswk\" (UID: \"b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21\") " pod="openshift-marketplace/certified-operators-kvswk" Nov 25 08:41:17 crc kubenswrapper[5043]: I1125 08:41:17.913598 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21-catalog-content\") pod \"certified-operators-kvswk\" (UID: \"b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21\") " pod="openshift-marketplace/certified-operators-kvswk" Nov 25 08:41:17 crc kubenswrapper[5043]: I1125 08:41:17.913642 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21-utilities\") pod \"certified-operators-kvswk\" (UID: \"b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21\") " pod="openshift-marketplace/certified-operators-kvswk" Nov 25 08:41:18 crc kubenswrapper[5043]: I1125 08:41:18.365527 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgwgj\" (UniqueName: \"kubernetes.io/projected/b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21-kube-api-access-rgwgj\") pod \"certified-operators-kvswk\" (UID: \"b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21\") " pod="openshift-marketplace/certified-operators-kvswk" Nov 25 08:41:18 crc kubenswrapper[5043]: I1125 08:41:18.368524 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kvswk" Nov 25 08:41:18 crc kubenswrapper[5043]: I1125 08:41:18.453523 5043 generic.go:334] "Generic (PLEG): container finished" podID="a92480af-246f-4335-a619-2a0f2aba7934" containerID="176b603b8f9e87a9a27068ed45087b7da0cd0dd62fdc2523fb59e2945a1250ff" exitCode=0 Nov 25 08:41:18 crc kubenswrapper[5043]: I1125 08:41:18.453574 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-556m5" event={"ID":"a92480af-246f-4335-a619-2a0f2aba7934","Type":"ContainerDied","Data":"176b603b8f9e87a9a27068ed45087b7da0cd0dd62fdc2523fb59e2945a1250ff"} Nov 25 08:41:18 crc kubenswrapper[5043]: I1125 08:41:18.453623 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-556m5" event={"ID":"a92480af-246f-4335-a619-2a0f2aba7934","Type":"ContainerStarted","Data":"c2188fd603c517bf4cfd87e4b7b897d25862ecf5ba6c0d167f714620fe1b0347"} Nov 25 08:41:18 crc kubenswrapper[5043]: I1125 08:41:18.904915 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6wq9s"] Nov 25 08:41:18 crc kubenswrapper[5043]: I1125 08:41:18.907719 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6wq9s" Nov 25 08:41:18 crc kubenswrapper[5043]: I1125 08:41:18.917493 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6wq9s"] Nov 25 08:41:18 crc kubenswrapper[5043]: I1125 08:41:18.934966 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv9gb\" (UniqueName: \"kubernetes.io/projected/83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0-kube-api-access-tv9gb\") pod \"certified-operators-6wq9s\" (UID: \"83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0\") " pod="openshift-marketplace/certified-operators-6wq9s" Nov 25 08:41:18 crc kubenswrapper[5043]: I1125 08:41:18.935440 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0-catalog-content\") pod \"certified-operators-6wq9s\" (UID: \"83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0\") " pod="openshift-marketplace/certified-operators-6wq9s" Nov 25 08:41:18 crc kubenswrapper[5043]: I1125 08:41:18.935489 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0-utilities\") pod \"certified-operators-6wq9s\" (UID: \"83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0\") " pod="openshift-marketplace/certified-operators-6wq9s" Nov 25 08:41:18 crc kubenswrapper[5043]: I1125 08:41:18.985954 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kvswk"] Nov 25 08:41:19 crc kubenswrapper[5043]: I1125 08:41:19.036971 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0-catalog-content\") pod \"certified-operators-6wq9s\" (UID: 
\"83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0\") " pod="openshift-marketplace/certified-operators-6wq9s" Nov 25 08:41:19 crc kubenswrapper[5043]: I1125 08:41:19.037046 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0-utilities\") pod \"certified-operators-6wq9s\" (UID: \"83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0\") " pod="openshift-marketplace/certified-operators-6wq9s" Nov 25 08:41:19 crc kubenswrapper[5043]: I1125 08:41:19.037124 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv9gb\" (UniqueName: \"kubernetes.io/projected/83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0-kube-api-access-tv9gb\") pod \"certified-operators-6wq9s\" (UID: \"83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0\") " pod="openshift-marketplace/certified-operators-6wq9s" Nov 25 08:41:19 crc kubenswrapper[5043]: I1125 08:41:19.037891 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0-catalog-content\") pod \"certified-operators-6wq9s\" (UID: \"83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0\") " pod="openshift-marketplace/certified-operators-6wq9s" Nov 25 08:41:19 crc kubenswrapper[5043]: I1125 08:41:19.038010 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0-utilities\") pod \"certified-operators-6wq9s\" (UID: \"83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0\") " pod="openshift-marketplace/certified-operators-6wq9s" Nov 25 08:41:19 crc kubenswrapper[5043]: I1125 08:41:19.056837 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv9gb\" (UniqueName: \"kubernetes.io/projected/83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0-kube-api-access-tv9gb\") pod \"certified-operators-6wq9s\" (UID: 
\"83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0\") " pod="openshift-marketplace/certified-operators-6wq9s" Nov 25 08:41:19 crc kubenswrapper[5043]: I1125 08:41:19.082742 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-85bdd6cc97-lrkkr" Nov 25 08:41:19 crc kubenswrapper[5043]: I1125 08:41:19.259734 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6wq9s" Nov 25 08:41:19 crc kubenswrapper[5043]: I1125 08:41:19.509986 5043 generic.go:334] "Generic (PLEG): container finished" podID="30d87bfd-b6ac-45f1-b9c5-e39453d0eca9" containerID="46aeeaa098b89509b649c1fb7cafdb21c657cee32604cff52878a460c3b1024b" exitCode=0 Nov 25 08:41:19 crc kubenswrapper[5043]: I1125 08:41:19.511281 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kbt7" event={"ID":"30d87bfd-b6ac-45f1-b9c5-e39453d0eca9","Type":"ContainerDied","Data":"46aeeaa098b89509b649c1fb7cafdb21c657cee32604cff52878a460c3b1024b"} Nov 25 08:41:19 crc kubenswrapper[5043]: I1125 08:41:19.524961 5043 generic.go:334] "Generic (PLEG): container finished" podID="b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21" containerID="6875ffc6df220f25ea6908250b67325a19ec8b25fd376d971ef8be515a614bc7" exitCode=0 Nov 25 08:41:19 crc kubenswrapper[5043]: I1125 08:41:19.525037 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvswk" event={"ID":"b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21","Type":"ContainerDied","Data":"6875ffc6df220f25ea6908250b67325a19ec8b25fd376d971ef8be515a614bc7"} Nov 25 08:41:19 crc kubenswrapper[5043]: I1125 08:41:19.525068 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvswk" event={"ID":"b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21","Type":"ContainerStarted","Data":"30c9c61260b4c8bac4a71212f120d2f2fd5ace4d0a6a7b4196a28d564cc2c198"} Nov 25 08:41:19 crc 
kubenswrapper[5043]: I1125 08:41:19.530245 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-556m5" event={"ID":"a92480af-246f-4335-a619-2a0f2aba7934","Type":"ContainerStarted","Data":"224b0b32e90e96fbb6944502a36d2a94e38186a0a13eea80eded80dc3c35c4e2"} Nov 25 08:41:19 crc kubenswrapper[5043]: I1125 08:41:19.896156 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6wq9s"] Nov 25 08:41:20 crc kubenswrapper[5043]: W1125 08:41:20.271136 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83f6e3a6_bb5e_4378_aa54_ddf6fb0514b0.slice/crio-8ac2fb22969b31933f29ff31a611cf55d92e9686d0f56a916d3214ebd2e5b816 WatchSource:0}: Error finding container 8ac2fb22969b31933f29ff31a611cf55d92e9686d0f56a916d3214ebd2e5b816: Status 404 returned error can't find the container with id 8ac2fb22969b31933f29ff31a611cf55d92e9686d0f56a916d3214ebd2e5b816 Nov 25 08:41:20 crc kubenswrapper[5043]: I1125 08:41:20.540065 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6wq9s" event={"ID":"83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0","Type":"ContainerStarted","Data":"8ac2fb22969b31933f29ff31a611cf55d92e9686d0f56a916d3214ebd2e5b816"} Nov 25 08:41:20 crc kubenswrapper[5043]: I1125 08:41:20.541976 5043 generic.go:334] "Generic (PLEG): container finished" podID="a92480af-246f-4335-a619-2a0f2aba7934" containerID="224b0b32e90e96fbb6944502a36d2a94e38186a0a13eea80eded80dc3c35c4e2" exitCode=0 Nov 25 08:41:20 crc kubenswrapper[5043]: I1125 08:41:20.542020 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-556m5" event={"ID":"a92480af-246f-4335-a619-2a0f2aba7934","Type":"ContainerDied","Data":"224b0b32e90e96fbb6944502a36d2a94e38186a0a13eea80eded80dc3c35c4e2"} Nov 25 08:41:21 crc kubenswrapper[5043]: I1125 08:41:21.552207 5043 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvswk" event={"ID":"b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21","Type":"ContainerStarted","Data":"20177aee51bab440af01f427bedde0340edf325e777e11790326776c9eb7edd6"} Nov 25 08:41:21 crc kubenswrapper[5043]: I1125 08:41:21.554148 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6wq9s" event={"ID":"83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0","Type":"ContainerStarted","Data":"1e0e29859339574478162881ed7b58cfb8e40c91a1ca3d0a4d57e32e2f490884"} Nov 25 08:41:21 crc kubenswrapper[5043]: I1125 08:41:21.556024 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-556m5" event={"ID":"a92480af-246f-4335-a619-2a0f2aba7934","Type":"ContainerStarted","Data":"12365b8376e4c0ea3eddbf04881e8e4f56c13d32f160aac00ecce1f25224a361"} Nov 25 08:41:21 crc kubenswrapper[5043]: I1125 08:41:21.557811 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kbt7" event={"ID":"30d87bfd-b6ac-45f1-b9c5-e39453d0eca9","Type":"ContainerStarted","Data":"3c17c42b63d4595052e653abddbd6dea8b42bad5d979fd3dbe59d4826a907dd3"} Nov 25 08:41:21 crc kubenswrapper[5043]: I1125 08:41:21.631659 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6kbt7" podStartSLOduration=2.683899706 podStartE2EDuration="5.631639587s" podCreationTimestamp="2025-11-25 08:41:16 +0000 UTC" firstStartedPulling="2025-11-25 08:41:17.445306647 +0000 UTC m=+5141.613502368" lastFinishedPulling="2025-11-25 08:41:20.393046528 +0000 UTC m=+5144.561242249" observedRunningTime="2025-11-25 08:41:21.622594883 +0000 UTC m=+5145.790790594" watchObservedRunningTime="2025-11-25 08:41:21.631639587 +0000 UTC m=+5145.799835328" Nov 25 08:41:21 crc kubenswrapper[5043]: I1125 08:41:21.640898 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-556m5" podStartSLOduration=2.950613001 podStartE2EDuration="5.640878456s" podCreationTimestamp="2025-11-25 08:41:16 +0000 UTC" firstStartedPulling="2025-11-25 08:41:18.45615376 +0000 UTC m=+5142.624349481" lastFinishedPulling="2025-11-25 08:41:21.146419215 +0000 UTC m=+5145.314614936" observedRunningTime="2025-11-25 08:41:21.637908336 +0000 UTC m=+5145.806104057" watchObservedRunningTime="2025-11-25 08:41:21.640878456 +0000 UTC m=+5145.809074177" Nov 25 08:41:22 crc kubenswrapper[5043]: I1125 08:41:22.566922 5043 generic.go:334] "Generic (PLEG): container finished" podID="83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0" containerID="1e0e29859339574478162881ed7b58cfb8e40c91a1ca3d0a4d57e32e2f490884" exitCode=0 Nov 25 08:41:22 crc kubenswrapper[5043]: I1125 08:41:22.567030 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6wq9s" event={"ID":"83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0","Type":"ContainerDied","Data":"1e0e29859339574478162881ed7b58cfb8e40c91a1ca3d0a4d57e32e2f490884"} Nov 25 08:41:22 crc kubenswrapper[5043]: I1125 08:41:22.569459 5043 generic.go:334] "Generic (PLEG): container finished" podID="b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21" containerID="20177aee51bab440af01f427bedde0340edf325e777e11790326776c9eb7edd6" exitCode=0 Nov 25 08:41:22 crc kubenswrapper[5043]: I1125 08:41:22.569551 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvswk" event={"ID":"b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21","Type":"ContainerDied","Data":"20177aee51bab440af01f427bedde0340edf325e777e11790326776c9eb7edd6"} Nov 25 08:41:22 crc kubenswrapper[5043]: I1125 08:41:22.963969 5043 scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1" Nov 25 08:41:22 crc kubenswrapper[5043]: E1125 08:41:22.964451 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:41:24 crc kubenswrapper[5043]: I1125 08:41:24.590213 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvswk" event={"ID":"b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21","Type":"ContainerStarted","Data":"34f467f77d7755fe897720cb95e9d8f527bb1e56f7bdb98174419e2b6457b861"} Nov 25 08:41:24 crc kubenswrapper[5043]: I1125 08:41:24.594378 5043 generic.go:334] "Generic (PLEG): container finished" podID="83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0" containerID="7e2e50d23dcc8b5bf7e5b3260b787201de95f917478e8d0681aa872d11e061ff" exitCode=0 Nov 25 08:41:24 crc kubenswrapper[5043]: I1125 08:41:24.594419 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6wq9s" event={"ID":"83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0","Type":"ContainerDied","Data":"7e2e50d23dcc8b5bf7e5b3260b787201de95f917478e8d0681aa872d11e061ff"} Nov 25 08:41:24 crc kubenswrapper[5043]: I1125 08:41:24.614454 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kvswk" podStartSLOduration=3.6768588749999997 podStartE2EDuration="7.614439585s" podCreationTimestamp="2025-11-25 08:41:17 +0000 UTC" firstStartedPulling="2025-11-25 08:41:19.526843717 +0000 UTC m=+5143.695039438" lastFinishedPulling="2025-11-25 08:41:23.464424427 +0000 UTC m=+5147.632620148" observedRunningTime="2025-11-25 08:41:24.614122697 +0000 UTC m=+5148.782318438" watchObservedRunningTime="2025-11-25 08:41:24.614439585 +0000 UTC m=+5148.782635306" Nov 25 08:41:25 crc kubenswrapper[5043]: I1125 08:41:25.635708 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-6wq9s" event={"ID":"83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0","Type":"ContainerStarted","Data":"a58d27d65187dc17d99b1374cbd722fd3681950da4f02e65f8740ddae333332b"}
Nov 25 08:41:26 crc kubenswrapper[5043]: I1125 08:41:26.447956 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6kbt7"
Nov 25 08:41:26 crc kubenswrapper[5043]: I1125 08:41:26.448321 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6kbt7"
Nov 25 08:41:26 crc kubenswrapper[5043]: I1125 08:41:26.499963 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6kbt7"
Nov 25 08:41:26 crc kubenswrapper[5043]: I1125 08:41:26.521034 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6wq9s" podStartSLOduration=6.093683965 podStartE2EDuration="8.521013716s" podCreationTimestamp="2025-11-25 08:41:18 +0000 UTC" firstStartedPulling="2025-11-25 08:41:22.569911192 +0000 UTC m=+5146.738106923" lastFinishedPulling="2025-11-25 08:41:24.997240953 +0000 UTC m=+5149.165436674" observedRunningTime="2025-11-25 08:41:25.681992588 +0000 UTC m=+5149.850188319" watchObservedRunningTime="2025-11-25 08:41:26.521013716 +0000 UTC m=+5150.689209437"
Nov 25 08:41:26 crc kubenswrapper[5043]: I1125 08:41:26.705651 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6kbt7"
Nov 25 08:41:26 crc kubenswrapper[5043]: I1125 08:41:26.913976 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-556m5"
Nov 25 08:41:26 crc kubenswrapper[5043]: I1125 08:41:26.914062 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-556m5"
Nov 25 08:41:26 crc kubenswrapper[5043]: I1125 08:41:26.987923 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-556m5"
Nov 25 08:41:27 crc kubenswrapper[5043]: I1125 08:41:27.806407 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-556m5"
Nov 25 08:41:28 crc kubenswrapper[5043]: I1125 08:41:28.370678 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kvswk"
Nov 25 08:41:28 crc kubenswrapper[5043]: I1125 08:41:28.370974 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kvswk"
Nov 25 08:41:28 crc kubenswrapper[5043]: I1125 08:41:28.428518 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kvswk"
Nov 25 08:41:28 crc kubenswrapper[5043]: I1125 08:41:28.730235 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kvswk"
Nov 25 08:41:29 crc kubenswrapper[5043]: I1125 08:41:29.260282 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6wq9s"
Nov 25 08:41:29 crc kubenswrapper[5043]: I1125 08:41:29.260625 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6wq9s"
Nov 25 08:41:30 crc kubenswrapper[5043]: I1125 08:41:30.621626 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6wq9s" podUID="83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0" containerName="registry-server" probeResult="failure" output=<
Nov 25 08:41:30 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s
Nov 25 08:41:30 crc kubenswrapper[5043]: >
Nov 25 08:41:35 crc kubenswrapper[5043]: I1125 08:41:35.962748 5043 scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1"
Nov 25 08:41:35 crc kubenswrapper[5043]: E1125 08:41:35.963469 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 08:41:39 crc kubenswrapper[5043]: I1125 08:41:39.314253 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6wq9s"
Nov 25 08:41:39 crc kubenswrapper[5043]: I1125 08:41:39.380126 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6wq9s"
Nov 25 08:41:39 crc kubenswrapper[5043]: I1125 08:41:39.546116 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6wq9s"]
Nov 25 08:41:39 crc kubenswrapper[5043]: I1125 08:41:39.752985 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kvswk"]
Nov 25 08:41:39 crc kubenswrapper[5043]: I1125 08:41:39.754390 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kvswk" podUID="b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21" containerName="registry-server" containerID="cri-o://34f467f77d7755fe897720cb95e9d8f527bb1e56f7bdb98174419e2b6457b861" gracePeriod=2
Nov 25 08:41:39 crc kubenswrapper[5043]: I1125 08:41:39.949689 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-556m5"]
Nov 25 08:41:39 crc kubenswrapper[5043]: I1125 08:41:39.949980 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-556m5" podUID="a92480af-246f-4335-a619-2a0f2aba7934" containerName="registry-server" containerID="cri-o://12365b8376e4c0ea3eddbf04881e8e4f56c13d32f160aac00ecce1f25224a361" gracePeriod=2
Nov 25 08:41:40 crc kubenswrapper[5043]: I1125 08:41:40.156561 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6kbt7"]
Nov 25 08:41:40 crc kubenswrapper[5043]: I1125 08:41:40.157190 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6kbt7" podUID="30d87bfd-b6ac-45f1-b9c5-e39453d0eca9" containerName="registry-server" containerID="cri-o://3c17c42b63d4595052e653abddbd6dea8b42bad5d979fd3dbe59d4826a907dd3" gracePeriod=2
Nov 25 08:41:40 crc kubenswrapper[5043]: I1125 08:41:40.356263 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d2nqf"]
Nov 25 08:41:40 crc kubenswrapper[5043]: I1125 08:41:40.356467 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d2nqf" podUID="e2cc188c-3a39-4866-85dd-d34de56fb8f6" containerName="registry-server" containerID="cri-o://c7019c5300df24961fd1d5f5d1935d23d1d7d598bd1bc7efd63bef93559e0f4c" gracePeriod=2
Nov 25 08:41:40 crc kubenswrapper[5043]: I1125 08:41:40.783911 5043 generic.go:334] "Generic (PLEG): container finished" podID="e2cc188c-3a39-4866-85dd-d34de56fb8f6" containerID="c7019c5300df24961fd1d5f5d1935d23d1d7d598bd1bc7efd63bef93559e0f4c" exitCode=0
Nov 25 08:41:40 crc kubenswrapper[5043]: I1125 08:41:40.784288 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2nqf" event={"ID":"e2cc188c-3a39-4866-85dd-d34de56fb8f6","Type":"ContainerDied","Data":"c7019c5300df24961fd1d5f5d1935d23d1d7d598bd1bc7efd63bef93559e0f4c"}
Nov 25 08:41:40 crc kubenswrapper[5043]: I1125 08:41:40.786668 5043 generic.go:334] "Generic (PLEG): container finished" podID="b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21" containerID="34f467f77d7755fe897720cb95e9d8f527bb1e56f7bdb98174419e2b6457b861" exitCode=0
Nov 25 08:41:40 crc kubenswrapper[5043]: I1125 08:41:40.786749 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvswk" event={"ID":"b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21","Type":"ContainerDied","Data":"34f467f77d7755fe897720cb95e9d8f527bb1e56f7bdb98174419e2b6457b861"}
Nov 25 08:41:40 crc kubenswrapper[5043]: I1125 08:41:40.786779 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvswk" event={"ID":"b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21","Type":"ContainerDied","Data":"30c9c61260b4c8bac4a71212f120d2f2fd5ace4d0a6a7b4196a28d564cc2c198"}
Nov 25 08:41:40 crc kubenswrapper[5043]: I1125 08:41:40.786794 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30c9c61260b4c8bac4a71212f120d2f2fd5ace4d0a6a7b4196a28d564cc2c198"
Nov 25 08:41:40 crc kubenswrapper[5043]: I1125 08:41:40.789675 5043 generic.go:334] "Generic (PLEG): container finished" podID="a92480af-246f-4335-a619-2a0f2aba7934" containerID="12365b8376e4c0ea3eddbf04881e8e4f56c13d32f160aac00ecce1f25224a361" exitCode=0
Nov 25 08:41:40 crc kubenswrapper[5043]: I1125 08:41:40.789786 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-556m5" event={"ID":"a92480af-246f-4335-a619-2a0f2aba7934","Type":"ContainerDied","Data":"12365b8376e4c0ea3eddbf04881e8e4f56c13d32f160aac00ecce1f25224a361"}
Nov 25 08:41:40 crc kubenswrapper[5043]: I1125 08:41:40.794642 5043 generic.go:334] "Generic (PLEG): container finished" podID="30d87bfd-b6ac-45f1-b9c5-e39453d0eca9" containerID="3c17c42b63d4595052e653abddbd6dea8b42bad5d979fd3dbe59d4826a907dd3" exitCode=0
Nov 25 08:41:40 crc kubenswrapper[5043]: I1125 08:41:40.794883 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6wq9s" podUID="83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0" containerName="registry-server" containerID="cri-o://a58d27d65187dc17d99b1374cbd722fd3681950da4f02e65f8740ddae333332b" gracePeriod=2
Nov 25 08:41:40 crc kubenswrapper[5043]: I1125 08:41:40.795193 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kbt7" event={"ID":"30d87bfd-b6ac-45f1-b9c5-e39453d0eca9","Type":"ContainerDied","Data":"3c17c42b63d4595052e653abddbd6dea8b42bad5d979fd3dbe59d4826a907dd3"}
Nov 25 08:41:40 crc kubenswrapper[5043]: I1125 08:41:40.846263 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kvswk"
Nov 25 08:41:40 crc kubenswrapper[5043]: I1125 08:41:40.861146 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-556m5"
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.002486 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21-catalog-content\") pod \"b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21\" (UID: \"b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21\") "
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.002888 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn88p\" (UniqueName: \"kubernetes.io/projected/a92480af-246f-4335-a619-2a0f2aba7934-kube-api-access-gn88p\") pod \"a92480af-246f-4335-a619-2a0f2aba7934\" (UID: \"a92480af-246f-4335-a619-2a0f2aba7934\") "
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.002955 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21-utilities\") pod \"b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21\" (UID: \"b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21\") "
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.003047 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a92480af-246f-4335-a619-2a0f2aba7934-catalog-content\") pod \"a92480af-246f-4335-a619-2a0f2aba7934\" (UID: \"a92480af-246f-4335-a619-2a0f2aba7934\") "
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.003082 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a92480af-246f-4335-a619-2a0f2aba7934-utilities\") pod \"a92480af-246f-4335-a619-2a0f2aba7934\" (UID: \"a92480af-246f-4335-a619-2a0f2aba7934\") "
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.003136 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgwgj\" (UniqueName: \"kubernetes.io/projected/b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21-kube-api-access-rgwgj\") pod \"b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21\" (UID: \"b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21\") "
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.003901 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21-utilities" (OuterVolumeSpecName: "utilities") pod "b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21" (UID: "b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.005797 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a92480af-246f-4335-a619-2a0f2aba7934-utilities" (OuterVolumeSpecName: "utilities") pod "a92480af-246f-4335-a619-2a0f2aba7934" (UID: "a92480af-246f-4335-a619-2a0f2aba7934"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.012367 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21-kube-api-access-rgwgj" (OuterVolumeSpecName: "kube-api-access-rgwgj") pod "b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21" (UID: "b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21"). InnerVolumeSpecName "kube-api-access-rgwgj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.017975 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a92480af-246f-4335-a619-2a0f2aba7934-kube-api-access-gn88p" (OuterVolumeSpecName: "kube-api-access-gn88p") pod "a92480af-246f-4335-a619-2a0f2aba7934" (UID: "a92480af-246f-4335-a619-2a0f2aba7934"). InnerVolumeSpecName "kube-api-access-gn88p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.064358 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21" (UID: "b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.072265 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a92480af-246f-4335-a619-2a0f2aba7934-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a92480af-246f-4335-a619-2a0f2aba7934" (UID: "a92480af-246f-4335-a619-2a0f2aba7934"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.109935 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a92480af-246f-4335-a619-2a0f2aba7934-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.109962 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a92480af-246f-4335-a619-2a0f2aba7934-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.109972 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgwgj\" (UniqueName: \"kubernetes.io/projected/b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21-kube-api-access-rgwgj\") on node \"crc\" DevicePath \"\""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.109982 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.109992 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn88p\" (UniqueName: \"kubernetes.io/projected/a92480af-246f-4335-a619-2a0f2aba7934-kube-api-access-gn88p\") on node \"crc\" DevicePath \"\""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.110000 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.122976 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6kbt7"
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.211075 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30d87bfd-b6ac-45f1-b9c5-e39453d0eca9-catalog-content\") pod \"30d87bfd-b6ac-45f1-b9c5-e39453d0eca9\" (UID: \"30d87bfd-b6ac-45f1-b9c5-e39453d0eca9\") "
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.211406 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6mv4\" (UniqueName: \"kubernetes.io/projected/30d87bfd-b6ac-45f1-b9c5-e39453d0eca9-kube-api-access-b6mv4\") pod \"30d87bfd-b6ac-45f1-b9c5-e39453d0eca9\" (UID: \"30d87bfd-b6ac-45f1-b9c5-e39453d0eca9\") "
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.211838 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30d87bfd-b6ac-45f1-b9c5-e39453d0eca9-utilities\") pod \"30d87bfd-b6ac-45f1-b9c5-e39453d0eca9\" (UID: \"30d87bfd-b6ac-45f1-b9c5-e39453d0eca9\") "
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.212540 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30d87bfd-b6ac-45f1-b9c5-e39453d0eca9-utilities" (OuterVolumeSpecName: "utilities") pod "30d87bfd-b6ac-45f1-b9c5-e39453d0eca9" (UID: "30d87bfd-b6ac-45f1-b9c5-e39453d0eca9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.212684 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30d87bfd-b6ac-45f1-b9c5-e39453d0eca9-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.227328 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30d87bfd-b6ac-45f1-b9c5-e39453d0eca9-kube-api-access-b6mv4" (OuterVolumeSpecName: "kube-api-access-b6mv4") pod "30d87bfd-b6ac-45f1-b9c5-e39453d0eca9" (UID: "30d87bfd-b6ac-45f1-b9c5-e39453d0eca9"). InnerVolumeSpecName "kube-api-access-b6mv4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.280120 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30d87bfd-b6ac-45f1-b9c5-e39453d0eca9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30d87bfd-b6ac-45f1-b9c5-e39453d0eca9" (UID: "30d87bfd-b6ac-45f1-b9c5-e39453d0eca9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.315120 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6mv4\" (UniqueName: \"kubernetes.io/projected/30d87bfd-b6ac-45f1-b9c5-e39453d0eca9-kube-api-access-b6mv4\") on node \"crc\" DevicePath \"\""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.315159 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30d87bfd-b6ac-45f1-b9c5-e39453d0eca9-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.325497 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2nqf"
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.416634 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk5tt\" (UniqueName: \"kubernetes.io/projected/e2cc188c-3a39-4866-85dd-d34de56fb8f6-kube-api-access-tk5tt\") pod \"e2cc188c-3a39-4866-85dd-d34de56fb8f6\" (UID: \"e2cc188c-3a39-4866-85dd-d34de56fb8f6\") "
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.416717 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2cc188c-3a39-4866-85dd-d34de56fb8f6-utilities\") pod \"e2cc188c-3a39-4866-85dd-d34de56fb8f6\" (UID: \"e2cc188c-3a39-4866-85dd-d34de56fb8f6\") "
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.416901 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2cc188c-3a39-4866-85dd-d34de56fb8f6-catalog-content\") pod \"e2cc188c-3a39-4866-85dd-d34de56fb8f6\" (UID: \"e2cc188c-3a39-4866-85dd-d34de56fb8f6\") "
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.418276 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2cc188c-3a39-4866-85dd-d34de56fb8f6-utilities" (OuterVolumeSpecName: "utilities") pod "e2cc188c-3a39-4866-85dd-d34de56fb8f6" (UID: "e2cc188c-3a39-4866-85dd-d34de56fb8f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.420794 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2cc188c-3a39-4866-85dd-d34de56fb8f6-kube-api-access-tk5tt" (OuterVolumeSpecName: "kube-api-access-tk5tt") pod "e2cc188c-3a39-4866-85dd-d34de56fb8f6" (UID: "e2cc188c-3a39-4866-85dd-d34de56fb8f6"). InnerVolumeSpecName "kube-api-access-tk5tt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.465403 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6wq9s"
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.466841 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2cc188c-3a39-4866-85dd-d34de56fb8f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2cc188c-3a39-4866-85dd-d34de56fb8f6" (UID: "e2cc188c-3a39-4866-85dd-d34de56fb8f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.518102 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0-catalog-content\") pod \"83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0\" (UID: \"83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0\") "
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.518234 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv9gb\" (UniqueName: \"kubernetes.io/projected/83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0-kube-api-access-tv9gb\") pod \"83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0\" (UID: \"83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0\") "
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.518269 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0-utilities\") pod \"83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0\" (UID: \"83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0\") "
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.518650 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk5tt\" (UniqueName: \"kubernetes.io/projected/e2cc188c-3a39-4866-85dd-d34de56fb8f6-kube-api-access-tk5tt\") on node \"crc\" DevicePath \"\""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.518670 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2cc188c-3a39-4866-85dd-d34de56fb8f6-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.518678 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2cc188c-3a39-4866-85dd-d34de56fb8f6-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.519791 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0-utilities" (OuterVolumeSpecName: "utilities") pod "83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0" (UID: "83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.529143 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0-kube-api-access-tv9gb" (OuterVolumeSpecName: "kube-api-access-tv9gb") pod "83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0" (UID: "83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0"). InnerVolumeSpecName "kube-api-access-tv9gb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.573841 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0" (UID: "83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.619933 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.619982 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv9gb\" (UniqueName: \"kubernetes.io/projected/83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0-kube-api-access-tv9gb\") on node \"crc\" DevicePath \"\""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.619995 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.805169 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kbt7" event={"ID":"30d87bfd-b6ac-45f1-b9c5-e39453d0eca9","Type":"ContainerDied","Data":"8678af2e0a130abfbc3db3d0807dffb4a502e9b8d8ed10f80bfa8568036a2039"}
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.805229 5043 scope.go:117] "RemoveContainer" containerID="3c17c42b63d4595052e653abddbd6dea8b42bad5d979fd3dbe59d4826a907dd3"
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.805382 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6kbt7"
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.810282 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2nqf"
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.810661 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2nqf" event={"ID":"e2cc188c-3a39-4866-85dd-d34de56fb8f6","Type":"ContainerDied","Data":"a7c42d89a82e13fd5544c142602c9d738ae1336a0af1986dc2ce0adb5646873c"}
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.816150 5043 generic.go:334] "Generic (PLEG): container finished" podID="83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0" containerID="a58d27d65187dc17d99b1374cbd722fd3681950da4f02e65f8740ddae333332b" exitCode=0
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.816216 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6wq9s"
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.816238 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6wq9s" event={"ID":"83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0","Type":"ContainerDied","Data":"a58d27d65187dc17d99b1374cbd722fd3681950da4f02e65f8740ddae333332b"}
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.816592 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6wq9s" event={"ID":"83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0","Type":"ContainerDied","Data":"8ac2fb22969b31933f29ff31a611cf55d92e9686d0f56a916d3214ebd2e5b816"}
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.819851 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kvswk"
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.820085 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-556m5" event={"ID":"a92480af-246f-4335-a619-2a0f2aba7934","Type":"ContainerDied","Data":"c2188fd603c517bf4cfd87e4b7b897d25862ecf5ba6c0d167f714620fe1b0347"}
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.820113 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-556m5"
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.837424 5043 scope.go:117] "RemoveContainer" containerID="46aeeaa098b89509b649c1fb7cafdb21c657cee32604cff52878a460c3b1024b"
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.848072 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6kbt7"]
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.861918 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6kbt7"]
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.875747 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6wq9s"]
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.876006 5043 scope.go:117] "RemoveContainer" containerID="82d9b8ace34401784056ae17a630206c3d7e0ffcedeff60b9d34a8de98977cef"
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.894580 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6wq9s"]
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.895323 5043 scope.go:117] "RemoveContainer" containerID="c7019c5300df24961fd1d5f5d1935d23d1d7d598bd1bc7efd63bef93559e0f4c"
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.910100 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d2nqf"]
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.917260 5043 scope.go:117] "RemoveContainer" containerID="59cedd31bc371a7d8b246a1cb7a03c2c52a7f2c1cff07feebc93605c54bfb977"
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.924177 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d2nqf"]
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.939391 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kvswk"]
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.940367 5043 scope.go:117] "RemoveContainer" containerID="e2ae7fb94e96eaa0b191a336db2ab17cd04b0fed191c7b920a553c35b709dd96"
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.947002 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kvswk"]
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.956971 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-556m5"]
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.963269 5043 scope.go:117] "RemoveContainer" containerID="a58d27d65187dc17d99b1374cbd722fd3681950da4f02e65f8740ddae333332b"
Nov 25 08:41:41 crc kubenswrapper[5043]: I1125 08:41:41.964537 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-556m5"]
Nov 25 08:41:42 crc kubenswrapper[5043]: I1125 08:41:42.043516 5043 scope.go:117] "RemoveContainer" containerID="7e2e50d23dcc8b5bf7e5b3260b787201de95f917478e8d0681aa872d11e061ff"
Nov 25 08:41:42 crc kubenswrapper[5043]: I1125 08:41:42.065162 5043 scope.go:117] "RemoveContainer" containerID="1e0e29859339574478162881ed7b58cfb8e40c91a1ca3d0a4d57e32e2f490884"
Nov 25 08:41:42 crc kubenswrapper[5043]: I1125 08:41:42.125284 5043 scope.go:117] "RemoveContainer" containerID="a58d27d65187dc17d99b1374cbd722fd3681950da4f02e65f8740ddae333332b"
Nov 25 08:41:42 crc kubenswrapper[5043]: E1125 08:41:42.125923 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a58d27d65187dc17d99b1374cbd722fd3681950da4f02e65f8740ddae333332b\": container with ID starting with a58d27d65187dc17d99b1374cbd722fd3681950da4f02e65f8740ddae333332b not found: ID does not exist" containerID="a58d27d65187dc17d99b1374cbd722fd3681950da4f02e65f8740ddae333332b"
Nov 25 08:41:42 crc kubenswrapper[5043]: I1125 08:41:42.125972 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a58d27d65187dc17d99b1374cbd722fd3681950da4f02e65f8740ddae333332b"} err="failed to get container status \"a58d27d65187dc17d99b1374cbd722fd3681950da4f02e65f8740ddae333332b\": rpc error: code = NotFound desc = could not find container \"a58d27d65187dc17d99b1374cbd722fd3681950da4f02e65f8740ddae333332b\": container with ID starting with a58d27d65187dc17d99b1374cbd722fd3681950da4f02e65f8740ddae333332b not found: ID does not exist"
Nov 25 08:41:42 crc kubenswrapper[5043]: I1125 08:41:42.126003 5043 scope.go:117] "RemoveContainer" containerID="7e2e50d23dcc8b5bf7e5b3260b787201de95f917478e8d0681aa872d11e061ff"
Nov 25 08:41:42 crc kubenswrapper[5043]: E1125 08:41:42.126374 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e2e50d23dcc8b5bf7e5b3260b787201de95f917478e8d0681aa872d11e061ff\": container with ID starting with 7e2e50d23dcc8b5bf7e5b3260b787201de95f917478e8d0681aa872d11e061ff not found: ID does not exist" containerID="7e2e50d23dcc8b5bf7e5b3260b787201de95f917478e8d0681aa872d11e061ff"
Nov 25 08:41:42 crc kubenswrapper[5043]: I1125 08:41:42.126404 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e2e50d23dcc8b5bf7e5b3260b787201de95f917478e8d0681aa872d11e061ff"} err="failed to get container status \"7e2e50d23dcc8b5bf7e5b3260b787201de95f917478e8d0681aa872d11e061ff\": rpc error: code = NotFound desc = could not find container \"7e2e50d23dcc8b5bf7e5b3260b787201de95f917478e8d0681aa872d11e061ff\": container with ID starting with 7e2e50d23dcc8b5bf7e5b3260b787201de95f917478e8d0681aa872d11e061ff not found: ID does not exist"
Nov 25 08:41:42 crc kubenswrapper[5043]: I1125 08:41:42.126426 5043 scope.go:117] "RemoveContainer" containerID="1e0e29859339574478162881ed7b58cfb8e40c91a1ca3d0a4d57e32e2f490884"
Nov 25 08:41:42 crc kubenswrapper[5043]: E1125 08:41:42.126820 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e0e29859339574478162881ed7b58cfb8e40c91a1ca3d0a4d57e32e2f490884\": container with ID starting with 1e0e29859339574478162881ed7b58cfb8e40c91a1ca3d0a4d57e32e2f490884 not found: ID does not exist" containerID="1e0e29859339574478162881ed7b58cfb8e40c91a1ca3d0a4d57e32e2f490884"
Nov 25 08:41:42 crc kubenswrapper[5043]: I1125 08:41:42.126849 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e0e29859339574478162881ed7b58cfb8e40c91a1ca3d0a4d57e32e2f490884"} err="failed to get container status \"1e0e29859339574478162881ed7b58cfb8e40c91a1ca3d0a4d57e32e2f490884\": rpc error: code = NotFound desc = could not find container \"1e0e29859339574478162881ed7b58cfb8e40c91a1ca3d0a4d57e32e2f490884\": container with ID starting with 1e0e29859339574478162881ed7b58cfb8e40c91a1ca3d0a4d57e32e2f490884 not found: ID does not exist"
Nov 25 08:41:42 crc kubenswrapper[5043]: I1125 08:41:42.126863 5043 scope.go:117] "RemoveContainer" containerID="12365b8376e4c0ea3eddbf04881e8e4f56c13d32f160aac00ecce1f25224a361"
Nov 25 08:41:42 crc kubenswrapper[5043]: I1125 08:41:42.172889 5043 scope.go:117] "RemoveContainer" containerID="224b0b32e90e96fbb6944502a36d2a94e38186a0a13eea80eded80dc3c35c4e2"
Nov 25 08:41:42 crc kubenswrapper[5043]: I1125 08:41:42.200270 5043 scope.go:117] "RemoveContainer" containerID="176b603b8f9e87a9a27068ed45087b7da0cd0dd62fdc2523fb59e2945a1250ff"
Nov 25 08:41:42 crc kubenswrapper[5043]: I1125 08:41:42.973385 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30d87bfd-b6ac-45f1-b9c5-e39453d0eca9" path="/var/lib/kubelet/pods/30d87bfd-b6ac-45f1-b9c5-e39453d0eca9/volumes"
Nov 25 08:41:42 crc kubenswrapper[5043]: I1125 08:41:42.974355 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0" path="/var/lib/kubelet/pods/83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0/volumes"
Nov 25 08:41:42 crc kubenswrapper[5043]: I1125 08:41:42.974998 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a92480af-246f-4335-a619-2a0f2aba7934" path="/var/lib/kubelet/pods/a92480af-246f-4335-a619-2a0f2aba7934/volumes"
Nov 25 08:41:42 crc kubenswrapper[5043]: I1125 08:41:42.976098 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21" path="/var/lib/kubelet/pods/b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21/volumes"
Nov 25 08:41:42 crc kubenswrapper[5043]: I1125 08:41:42.976690 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2cc188c-3a39-4866-85dd-d34de56fb8f6" path="/var/lib/kubelet/pods/e2cc188c-3a39-4866-85dd-d34de56fb8f6/volumes"
Nov 25 08:41:46 crc kubenswrapper[5043]: I1125 08:41:46.972624 5043 scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1"
Nov 25 08:41:46 crc kubenswrapper[5043]: E1125 08:41:46.974080 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 08:41:57 crc kubenswrapper[5043]: I1125 08:41:57.962885 5043 scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1"
Nov 25 08:41:57 crc kubenswrapper[5043]: E1125 08:41:57.963722 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 08:42:10 crc kubenswrapper[5043]: I1125 08:42:10.963447 5043 scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1"
Nov 25 08:42:10 crc kubenswrapper[5043]: E1125 08:42:10.964215 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 08:42:21 crc kubenswrapper[5043]: I1125 08:42:21.963553 5043 scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1"
Nov 25 08:42:21 crc kubenswrapper[5043]: E1125 08:42:21.964268 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\""
pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:42:32 crc kubenswrapper[5043]: I1125 08:42:32.963264 5043 scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1" Nov 25 08:42:32 crc kubenswrapper[5043]: E1125 08:42:32.964081 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:42:47 crc kubenswrapper[5043]: I1125 08:42:47.963977 5043 scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1" Nov 25 08:42:47 crc kubenswrapper[5043]: E1125 08:42:47.965758 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:43:01 crc kubenswrapper[5043]: I1125 08:43:01.962472 5043 scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1" Nov 25 08:43:01 crc kubenswrapper[5043]: E1125 08:43:01.963210 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:43:15 crc kubenswrapper[5043]: I1125 08:43:15.962884 5043 scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1" Nov 25 08:43:15 crc kubenswrapper[5043]: E1125 08:43:15.963689 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.057414 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8xlg2"] Nov 25 08:43:24 crc kubenswrapper[5043]: E1125 08:43:24.058360 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0" containerName="extract-utilities" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.058383 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0" containerName="extract-utilities" Nov 25 08:43:24 crc kubenswrapper[5043]: E1125 08:43:24.058396 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d87bfd-b6ac-45f1-b9c5-e39453d0eca9" containerName="extract-utilities" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.058402 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d87bfd-b6ac-45f1-b9c5-e39453d0eca9" containerName="extract-utilities" Nov 25 08:43:24 crc kubenswrapper[5043]: E1125 08:43:24.058419 5043 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21" containerName="extract-utilities" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.058425 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21" containerName="extract-utilities" Nov 25 08:43:24 crc kubenswrapper[5043]: E1125 08:43:24.058438 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0" containerName="registry-server" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.058444 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0" containerName="registry-server" Nov 25 08:43:24 crc kubenswrapper[5043]: E1125 08:43:24.058454 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0" containerName="extract-content" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.058461 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0" containerName="extract-content" Nov 25 08:43:24 crc kubenswrapper[5043]: E1125 08:43:24.058471 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d87bfd-b6ac-45f1-b9c5-e39453d0eca9" containerName="registry-server" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.058478 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d87bfd-b6ac-45f1-b9c5-e39453d0eca9" containerName="registry-server" Nov 25 08:43:24 crc kubenswrapper[5043]: E1125 08:43:24.058505 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2cc188c-3a39-4866-85dd-d34de56fb8f6" containerName="extract-content" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.058511 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2cc188c-3a39-4866-85dd-d34de56fb8f6" containerName="extract-content" Nov 25 08:43:24 crc kubenswrapper[5043]: E1125 08:43:24.058520 5043 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a92480af-246f-4335-a619-2a0f2aba7934" containerName="extract-utilities" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.058525 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92480af-246f-4335-a619-2a0f2aba7934" containerName="extract-utilities" Nov 25 08:43:24 crc kubenswrapper[5043]: E1125 08:43:24.058548 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d87bfd-b6ac-45f1-b9c5-e39453d0eca9" containerName="extract-content" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.058554 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d87bfd-b6ac-45f1-b9c5-e39453d0eca9" containerName="extract-content" Nov 25 08:43:24 crc kubenswrapper[5043]: E1125 08:43:24.058565 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a92480af-246f-4335-a619-2a0f2aba7934" containerName="extract-content" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.058571 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92480af-246f-4335-a619-2a0f2aba7934" containerName="extract-content" Nov 25 08:43:24 crc kubenswrapper[5043]: E1125 08:43:24.058581 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21" containerName="extract-content" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.058588 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21" containerName="extract-content" Nov 25 08:43:24 crc kubenswrapper[5043]: E1125 08:43:24.058601 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21" containerName="registry-server" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.058612 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21" containerName="registry-server" Nov 25 08:43:24 crc kubenswrapper[5043]: E1125 08:43:24.058651 5043 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e2cc188c-3a39-4866-85dd-d34de56fb8f6" containerName="registry-server" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.058663 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2cc188c-3a39-4866-85dd-d34de56fb8f6" containerName="registry-server" Nov 25 08:43:24 crc kubenswrapper[5043]: E1125 08:43:24.058674 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2cc188c-3a39-4866-85dd-d34de56fb8f6" containerName="extract-utilities" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.058680 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2cc188c-3a39-4866-85dd-d34de56fb8f6" containerName="extract-utilities" Nov 25 08:43:24 crc kubenswrapper[5043]: E1125 08:43:24.058690 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a92480af-246f-4335-a619-2a0f2aba7934" containerName="registry-server" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.058697 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92480af-246f-4335-a619-2a0f2aba7934" containerName="registry-server" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.058868 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="83f6e3a6-bb5e-4378-aa54-ddf6fb0514b0" containerName="registry-server" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.058883 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2cc188c-3a39-4866-85dd-d34de56fb8f6" containerName="registry-server" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.058898 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d87bfd-b6ac-45f1-b9c5-e39453d0eca9" containerName="registry-server" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.058907 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="a92480af-246f-4335-a619-2a0f2aba7934" containerName="registry-server" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.058924 5043 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="b2857b8f-3396-4a7e-8cd5-c88e0f4ffa21" containerName="registry-server" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.060329 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8xlg2" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.074662 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8xlg2"] Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.200043 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c7m4\" (UniqueName: \"kubernetes.io/projected/dcd3046f-d71b-4685-b76f-c23f2507b6e7-kube-api-access-5c7m4\") pod \"redhat-operators-8xlg2\" (UID: \"dcd3046f-d71b-4685-b76f-c23f2507b6e7\") " pod="openshift-marketplace/redhat-operators-8xlg2" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.200100 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcd3046f-d71b-4685-b76f-c23f2507b6e7-catalog-content\") pod \"redhat-operators-8xlg2\" (UID: \"dcd3046f-d71b-4685-b76f-c23f2507b6e7\") " pod="openshift-marketplace/redhat-operators-8xlg2" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.200222 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd3046f-d71b-4685-b76f-c23f2507b6e7-utilities\") pod \"redhat-operators-8xlg2\" (UID: \"dcd3046f-d71b-4685-b76f-c23f2507b6e7\") " pod="openshift-marketplace/redhat-operators-8xlg2" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.301771 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd3046f-d71b-4685-b76f-c23f2507b6e7-utilities\") pod \"redhat-operators-8xlg2\" (UID: 
\"dcd3046f-d71b-4685-b76f-c23f2507b6e7\") " pod="openshift-marketplace/redhat-operators-8xlg2" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.302341 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c7m4\" (UniqueName: \"kubernetes.io/projected/dcd3046f-d71b-4685-b76f-c23f2507b6e7-kube-api-access-5c7m4\") pod \"redhat-operators-8xlg2\" (UID: \"dcd3046f-d71b-4685-b76f-c23f2507b6e7\") " pod="openshift-marketplace/redhat-operators-8xlg2" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.302370 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcd3046f-d71b-4685-b76f-c23f2507b6e7-catalog-content\") pod \"redhat-operators-8xlg2\" (UID: \"dcd3046f-d71b-4685-b76f-c23f2507b6e7\") " pod="openshift-marketplace/redhat-operators-8xlg2" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.302733 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd3046f-d71b-4685-b76f-c23f2507b6e7-utilities\") pod \"redhat-operators-8xlg2\" (UID: \"dcd3046f-d71b-4685-b76f-c23f2507b6e7\") " pod="openshift-marketplace/redhat-operators-8xlg2" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.302872 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcd3046f-d71b-4685-b76f-c23f2507b6e7-catalog-content\") pod \"redhat-operators-8xlg2\" (UID: \"dcd3046f-d71b-4685-b76f-c23f2507b6e7\") " pod="openshift-marketplace/redhat-operators-8xlg2" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.328542 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c7m4\" (UniqueName: \"kubernetes.io/projected/dcd3046f-d71b-4685-b76f-c23f2507b6e7-kube-api-access-5c7m4\") pod \"redhat-operators-8xlg2\" (UID: \"dcd3046f-d71b-4685-b76f-c23f2507b6e7\") " 
pod="openshift-marketplace/redhat-operators-8xlg2" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.382045 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8xlg2" Nov 25 08:43:24 crc kubenswrapper[5043]: I1125 08:43:24.860597 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8xlg2"] Nov 25 08:43:25 crc kubenswrapper[5043]: I1125 08:43:25.773665 5043 generic.go:334] "Generic (PLEG): container finished" podID="dcd3046f-d71b-4685-b76f-c23f2507b6e7" containerID="e2a52f80dbf06b61fbdb689119734d0c77f75052f86b1da34c5cd1c5e183d041" exitCode=0 Nov 25 08:43:25 crc kubenswrapper[5043]: I1125 08:43:25.773747 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xlg2" event={"ID":"dcd3046f-d71b-4685-b76f-c23f2507b6e7","Type":"ContainerDied","Data":"e2a52f80dbf06b61fbdb689119734d0c77f75052f86b1da34c5cd1c5e183d041"} Nov 25 08:43:25 crc kubenswrapper[5043]: I1125 08:43:25.774168 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xlg2" event={"ID":"dcd3046f-d71b-4685-b76f-c23f2507b6e7","Type":"ContainerStarted","Data":"5164453230fa3acaa29ab21a799a32a77a2313f9e5f6a6f0a80071b29c83b796"} Nov 25 08:43:25 crc kubenswrapper[5043]: I1125 08:43:25.776725 5043 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:43:26 crc kubenswrapper[5043]: I1125 08:43:26.976535 5043 scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1" Nov 25 08:43:26 crc kubenswrapper[5043]: E1125 08:43:26.977107 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:43:27 crc kubenswrapper[5043]: I1125 08:43:27.793366 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xlg2" event={"ID":"dcd3046f-d71b-4685-b76f-c23f2507b6e7","Type":"ContainerStarted","Data":"0f2bebca9a709a3ea77f056cc0d0c114df2e2cfb9713f9517a28271863e11995"} Nov 25 08:43:31 crc kubenswrapper[5043]: I1125 08:43:31.836940 5043 generic.go:334] "Generic (PLEG): container finished" podID="dcd3046f-d71b-4685-b76f-c23f2507b6e7" containerID="0f2bebca9a709a3ea77f056cc0d0c114df2e2cfb9713f9517a28271863e11995" exitCode=0 Nov 25 08:43:31 crc kubenswrapper[5043]: I1125 08:43:31.836997 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xlg2" event={"ID":"dcd3046f-d71b-4685-b76f-c23f2507b6e7","Type":"ContainerDied","Data":"0f2bebca9a709a3ea77f056cc0d0c114df2e2cfb9713f9517a28271863e11995"} Nov 25 08:43:32 crc kubenswrapper[5043]: I1125 08:43:32.851896 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xlg2" event={"ID":"dcd3046f-d71b-4685-b76f-c23f2507b6e7","Type":"ContainerStarted","Data":"34ecbdaab359095493af2f57bb396e81c07aa0fae5dd5134c3f05a0d4c5bad97"} Nov 25 08:43:32 crc kubenswrapper[5043]: I1125 08:43:32.876068 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8xlg2" podStartSLOduration=2.418223811 podStartE2EDuration="8.876049299s" podCreationTimestamp="2025-11-25 08:43:24 +0000 UTC" firstStartedPulling="2025-11-25 08:43:25.776454526 +0000 UTC m=+5269.944650257" lastFinishedPulling="2025-11-25 08:43:32.234280034 +0000 UTC m=+5276.402475745" observedRunningTime="2025-11-25 08:43:32.873768177 +0000 UTC m=+5277.041963898" 
watchObservedRunningTime="2025-11-25 08:43:32.876049299 +0000 UTC m=+5277.044245020" Nov 25 08:43:34 crc kubenswrapper[5043]: I1125 08:43:34.382494 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8xlg2" Nov 25 08:43:34 crc kubenswrapper[5043]: I1125 08:43:34.382877 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8xlg2" Nov 25 08:43:35 crc kubenswrapper[5043]: I1125 08:43:35.432275 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8xlg2" podUID="dcd3046f-d71b-4685-b76f-c23f2507b6e7" containerName="registry-server" probeResult="failure" output=< Nov 25 08:43:35 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s Nov 25 08:43:35 crc kubenswrapper[5043]: > Nov 25 08:43:36 crc kubenswrapper[5043]: I1125 08:43:36.294261 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6sqfs"] Nov 25 08:43:36 crc kubenswrapper[5043]: I1125 08:43:36.296655 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6sqfs" Nov 25 08:43:36 crc kubenswrapper[5043]: I1125 08:43:36.327814 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6sqfs"] Nov 25 08:43:36 crc kubenswrapper[5043]: I1125 08:43:36.464895 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5441050-a90f-49f4-89a4-c40076857f5e-catalog-content\") pod \"community-operators-6sqfs\" (UID: \"c5441050-a90f-49f4-89a4-c40076857f5e\") " pod="openshift-marketplace/community-operators-6sqfs" Nov 25 08:43:36 crc kubenswrapper[5043]: I1125 08:43:36.464994 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5441050-a90f-49f4-89a4-c40076857f5e-utilities\") pod \"community-operators-6sqfs\" (UID: \"c5441050-a90f-49f4-89a4-c40076857f5e\") " pod="openshift-marketplace/community-operators-6sqfs" Nov 25 08:43:36 crc kubenswrapper[5043]: I1125 08:43:36.465154 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c85bq\" (UniqueName: \"kubernetes.io/projected/c5441050-a90f-49f4-89a4-c40076857f5e-kube-api-access-c85bq\") pod \"community-operators-6sqfs\" (UID: \"c5441050-a90f-49f4-89a4-c40076857f5e\") " pod="openshift-marketplace/community-operators-6sqfs" Nov 25 08:43:36 crc kubenswrapper[5043]: I1125 08:43:36.567385 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c85bq\" (UniqueName: \"kubernetes.io/projected/c5441050-a90f-49f4-89a4-c40076857f5e-kube-api-access-c85bq\") pod \"community-operators-6sqfs\" (UID: \"c5441050-a90f-49f4-89a4-c40076857f5e\") " pod="openshift-marketplace/community-operators-6sqfs" Nov 25 08:43:36 crc kubenswrapper[5043]: I1125 08:43:36.567542 5043 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5441050-a90f-49f4-89a4-c40076857f5e-catalog-content\") pod \"community-operators-6sqfs\" (UID: \"c5441050-a90f-49f4-89a4-c40076857f5e\") " pod="openshift-marketplace/community-operators-6sqfs" Nov 25 08:43:36 crc kubenswrapper[5043]: I1125 08:43:36.567573 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5441050-a90f-49f4-89a4-c40076857f5e-utilities\") pod \"community-operators-6sqfs\" (UID: \"c5441050-a90f-49f4-89a4-c40076857f5e\") " pod="openshift-marketplace/community-operators-6sqfs" Nov 25 08:43:36 crc kubenswrapper[5043]: I1125 08:43:36.568061 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5441050-a90f-49f4-89a4-c40076857f5e-catalog-content\") pod \"community-operators-6sqfs\" (UID: \"c5441050-a90f-49f4-89a4-c40076857f5e\") " pod="openshift-marketplace/community-operators-6sqfs" Nov 25 08:43:36 crc kubenswrapper[5043]: I1125 08:43:36.568089 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5441050-a90f-49f4-89a4-c40076857f5e-utilities\") pod \"community-operators-6sqfs\" (UID: \"c5441050-a90f-49f4-89a4-c40076857f5e\") " pod="openshift-marketplace/community-operators-6sqfs" Nov 25 08:43:36 crc kubenswrapper[5043]: I1125 08:43:36.586231 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c85bq\" (UniqueName: \"kubernetes.io/projected/c5441050-a90f-49f4-89a4-c40076857f5e-kube-api-access-c85bq\") pod \"community-operators-6sqfs\" (UID: \"c5441050-a90f-49f4-89a4-c40076857f5e\") " pod="openshift-marketplace/community-operators-6sqfs" Nov 25 08:43:36 crc kubenswrapper[5043]: I1125 08:43:36.619444 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6sqfs" Nov 25 08:43:37 crc kubenswrapper[5043]: I1125 08:43:37.260902 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6sqfs"] Nov 25 08:43:37 crc kubenswrapper[5043]: W1125 08:43:37.767652 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5441050_a90f_49f4_89a4_c40076857f5e.slice/crio-fccf0c2a9b9720023e2943bd8385ad518cf939d0d5f5abfd9f93d71f4133d391 WatchSource:0}: Error finding container fccf0c2a9b9720023e2943bd8385ad518cf939d0d5f5abfd9f93d71f4133d391: Status 404 returned error can't find the container with id fccf0c2a9b9720023e2943bd8385ad518cf939d0d5f5abfd9f93d71f4133d391 Nov 25 08:43:37 crc kubenswrapper[5043]: I1125 08:43:37.964421 5043 scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1" Nov 25 08:43:37 crc kubenswrapper[5043]: E1125 08:43:37.964649 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:43:37 crc kubenswrapper[5043]: I1125 08:43:37.975836 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6sqfs" event={"ID":"c5441050-a90f-49f4-89a4-c40076857f5e","Type":"ContainerStarted","Data":"6d17e72608d6c67f39e67769caf4dfe81ab38dfd30f97d9a4c9d8d96e14225f8"} Nov 25 08:43:37 crc kubenswrapper[5043]: I1125 08:43:37.976150 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6sqfs" 
event={"ID":"c5441050-a90f-49f4-89a4-c40076857f5e","Type":"ContainerStarted","Data":"fccf0c2a9b9720023e2943bd8385ad518cf939d0d5f5abfd9f93d71f4133d391"} Nov 25 08:43:38 crc kubenswrapper[5043]: I1125 08:43:38.985946 5043 generic.go:334] "Generic (PLEG): container finished" podID="c5441050-a90f-49f4-89a4-c40076857f5e" containerID="6d17e72608d6c67f39e67769caf4dfe81ab38dfd30f97d9a4c9d8d96e14225f8" exitCode=0 Nov 25 08:43:38 crc kubenswrapper[5043]: I1125 08:43:38.985991 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6sqfs" event={"ID":"c5441050-a90f-49f4-89a4-c40076857f5e","Type":"ContainerDied","Data":"6d17e72608d6c67f39e67769caf4dfe81ab38dfd30f97d9a4c9d8d96e14225f8"} Nov 25 08:43:44 crc kubenswrapper[5043]: I1125 08:43:44.042238 5043 generic.go:334] "Generic (PLEG): container finished" podID="c5441050-a90f-49f4-89a4-c40076857f5e" containerID="a08c4f4e94e2dbb4f0e779568e64f2c8970912bf0bad7a7e2aab6e5c7d3ca049" exitCode=0 Nov 25 08:43:44 crc kubenswrapper[5043]: I1125 08:43:44.042358 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6sqfs" event={"ID":"c5441050-a90f-49f4-89a4-c40076857f5e","Type":"ContainerDied","Data":"a08c4f4e94e2dbb4f0e779568e64f2c8970912bf0bad7a7e2aab6e5c7d3ca049"} Nov 25 08:43:44 crc kubenswrapper[5043]: I1125 08:43:44.429795 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8xlg2" Nov 25 08:43:44 crc kubenswrapper[5043]: I1125 08:43:44.483015 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8xlg2" Nov 25 08:43:45 crc kubenswrapper[5043]: I1125 08:43:45.063679 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6sqfs" 
event={"ID":"c5441050-a90f-49f4-89a4-c40076857f5e","Type":"ContainerStarted","Data":"39c35d4fd275dba48543b36bba26474eb72732905544b269158349626a1fb92e"} Nov 25 08:43:45 crc kubenswrapper[5043]: I1125 08:43:45.100295 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6sqfs" podStartSLOduration=3.57860964 podStartE2EDuration="9.10026416s" podCreationTimestamp="2025-11-25 08:43:36 +0000 UTC" firstStartedPulling="2025-11-25 08:43:38.988441606 +0000 UTC m=+5283.156637327" lastFinishedPulling="2025-11-25 08:43:44.510096116 +0000 UTC m=+5288.678291847" observedRunningTime="2025-11-25 08:43:45.090162176 +0000 UTC m=+5289.258357907" watchObservedRunningTime="2025-11-25 08:43:45.10026416 +0000 UTC m=+5289.268459891" Nov 25 08:43:45 crc kubenswrapper[5043]: I1125 08:43:45.293061 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8xlg2"] Nov 25 08:43:46 crc kubenswrapper[5043]: I1125 08:43:46.072619 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8xlg2" podUID="dcd3046f-d71b-4685-b76f-c23f2507b6e7" containerName="registry-server" containerID="cri-o://34ecbdaab359095493af2f57bb396e81c07aa0fae5dd5134c3f05a0d4c5bad97" gracePeriod=2 Nov 25 08:43:46 crc kubenswrapper[5043]: I1125 08:43:46.620331 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6sqfs" Nov 25 08:43:46 crc kubenswrapper[5043]: I1125 08:43:46.620388 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6sqfs" Nov 25 08:43:47 crc kubenswrapper[5043]: I1125 08:43:47.091593 5043 generic.go:334] "Generic (PLEG): container finished" podID="dcd3046f-d71b-4685-b76f-c23f2507b6e7" containerID="34ecbdaab359095493af2f57bb396e81c07aa0fae5dd5134c3f05a0d4c5bad97" exitCode=0 Nov 25 08:43:47 crc kubenswrapper[5043]: 
I1125 08:43:47.091631 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xlg2" event={"ID":"dcd3046f-d71b-4685-b76f-c23f2507b6e7","Type":"ContainerDied","Data":"34ecbdaab359095493af2f57bb396e81c07aa0fae5dd5134c3f05a0d4c5bad97"} Nov 25 08:43:47 crc kubenswrapper[5043]: I1125 08:43:47.559723 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8xlg2" Nov 25 08:43:47 crc kubenswrapper[5043]: I1125 08:43:47.681977 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-6sqfs" podUID="c5441050-a90f-49f4-89a4-c40076857f5e" containerName="registry-server" probeResult="failure" output=< Nov 25 08:43:47 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s Nov 25 08:43:47 crc kubenswrapper[5043]: > Nov 25 08:43:47 crc kubenswrapper[5043]: I1125 08:43:47.720792 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcd3046f-d71b-4685-b76f-c23f2507b6e7-catalog-content\") pod \"dcd3046f-d71b-4685-b76f-c23f2507b6e7\" (UID: \"dcd3046f-d71b-4685-b76f-c23f2507b6e7\") " Nov 25 08:43:47 crc kubenswrapper[5043]: I1125 08:43:47.720852 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd3046f-d71b-4685-b76f-c23f2507b6e7-utilities\") pod \"dcd3046f-d71b-4685-b76f-c23f2507b6e7\" (UID: \"dcd3046f-d71b-4685-b76f-c23f2507b6e7\") " Nov 25 08:43:47 crc kubenswrapper[5043]: I1125 08:43:47.720965 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c7m4\" (UniqueName: \"kubernetes.io/projected/dcd3046f-d71b-4685-b76f-c23f2507b6e7-kube-api-access-5c7m4\") pod \"dcd3046f-d71b-4685-b76f-c23f2507b6e7\" (UID: \"dcd3046f-d71b-4685-b76f-c23f2507b6e7\") " Nov 25 08:43:47 crc 
kubenswrapper[5043]: I1125 08:43:47.721839 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcd3046f-d71b-4685-b76f-c23f2507b6e7-utilities" (OuterVolumeSpecName: "utilities") pod "dcd3046f-d71b-4685-b76f-c23f2507b6e7" (UID: "dcd3046f-d71b-4685-b76f-c23f2507b6e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:43:47 crc kubenswrapper[5043]: I1125 08:43:47.726440 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd3046f-d71b-4685-b76f-c23f2507b6e7-kube-api-access-5c7m4" (OuterVolumeSpecName: "kube-api-access-5c7m4") pod "dcd3046f-d71b-4685-b76f-c23f2507b6e7" (UID: "dcd3046f-d71b-4685-b76f-c23f2507b6e7"). InnerVolumeSpecName "kube-api-access-5c7m4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:43:47 crc kubenswrapper[5043]: I1125 08:43:47.804977 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcd3046f-d71b-4685-b76f-c23f2507b6e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dcd3046f-d71b-4685-b76f-c23f2507b6e7" (UID: "dcd3046f-d71b-4685-b76f-c23f2507b6e7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:43:47 crc kubenswrapper[5043]: I1125 08:43:47.823255 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c7m4\" (UniqueName: \"kubernetes.io/projected/dcd3046f-d71b-4685-b76f-c23f2507b6e7-kube-api-access-5c7m4\") on node \"crc\" DevicePath \"\"" Nov 25 08:43:47 crc kubenswrapper[5043]: I1125 08:43:47.823291 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcd3046f-d71b-4685-b76f-c23f2507b6e7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:43:47 crc kubenswrapper[5043]: I1125 08:43:47.823300 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd3046f-d71b-4685-b76f-c23f2507b6e7-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:43:48 crc kubenswrapper[5043]: I1125 08:43:48.101547 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xlg2" event={"ID":"dcd3046f-d71b-4685-b76f-c23f2507b6e7","Type":"ContainerDied","Data":"5164453230fa3acaa29ab21a799a32a77a2313f9e5f6a6f0a80071b29c83b796"} Nov 25 08:43:48 crc kubenswrapper[5043]: I1125 08:43:48.102664 5043 scope.go:117] "RemoveContainer" containerID="34ecbdaab359095493af2f57bb396e81c07aa0fae5dd5134c3f05a0d4c5bad97" Nov 25 08:43:48 crc kubenswrapper[5043]: I1125 08:43:48.101619 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8xlg2" Nov 25 08:43:48 crc kubenswrapper[5043]: I1125 08:43:48.135773 5043 scope.go:117] "RemoveContainer" containerID="0f2bebca9a709a3ea77f056cc0d0c114df2e2cfb9713f9517a28271863e11995" Nov 25 08:43:48 crc kubenswrapper[5043]: I1125 08:43:48.144987 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8xlg2"] Nov 25 08:43:48 crc kubenswrapper[5043]: I1125 08:43:48.156555 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8xlg2"] Nov 25 08:43:48 crc kubenswrapper[5043]: I1125 08:43:48.376959 5043 scope.go:117] "RemoveContainer" containerID="e2a52f80dbf06b61fbdb689119734d0c77f75052f86b1da34c5cd1c5e183d041" Nov 25 08:43:48 crc kubenswrapper[5043]: I1125 08:43:48.973409 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd3046f-d71b-4685-b76f-c23f2507b6e7" path="/var/lib/kubelet/pods/dcd3046f-d71b-4685-b76f-c23f2507b6e7/volumes" Nov 25 08:43:49 crc kubenswrapper[5043]: I1125 08:43:49.962875 5043 scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1" Nov 25 08:43:51 crc kubenswrapper[5043]: I1125 08:43:51.137368 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"745d5b04de7afbcb0082f79a586522f585ce0c903de97daa463145606eb3a578"} Nov 25 08:43:56 crc kubenswrapper[5043]: I1125 08:43:56.671404 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6sqfs" Nov 25 08:43:56 crc kubenswrapper[5043]: I1125 08:43:56.729169 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6sqfs" Nov 25 08:43:56 crc kubenswrapper[5043]: I1125 08:43:56.795577 5043 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6sqfs"] Nov 25 08:43:56 crc kubenswrapper[5043]: I1125 08:43:56.912986 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w2pjq"] Nov 25 08:43:56 crc kubenswrapper[5043]: I1125 08:43:56.913245 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w2pjq" podUID="8cac4e27-3572-4c74-86e1-4203f9404939" containerName="registry-server" containerID="cri-o://a61d58d499cf509491abd91451e874c60106243593e9542711bc6d563eece427" gracePeriod=2 Nov 25 08:43:57 crc kubenswrapper[5043]: I1125 08:43:57.211958 5043 generic.go:334] "Generic (PLEG): container finished" podID="8cac4e27-3572-4c74-86e1-4203f9404939" containerID="a61d58d499cf509491abd91451e874c60106243593e9542711bc6d563eece427" exitCode=0 Nov 25 08:43:57 crc kubenswrapper[5043]: I1125 08:43:57.212030 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2pjq" event={"ID":"8cac4e27-3572-4c74-86e1-4203f9404939","Type":"ContainerDied","Data":"a61d58d499cf509491abd91451e874c60106243593e9542711bc6d563eece427"} Nov 25 08:43:57 crc kubenswrapper[5043]: I1125 08:43:57.432745 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w2pjq" Nov 25 08:43:57 crc kubenswrapper[5043]: I1125 08:43:57.623747 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cac4e27-3572-4c74-86e1-4203f9404939-utilities\") pod \"8cac4e27-3572-4c74-86e1-4203f9404939\" (UID: \"8cac4e27-3572-4c74-86e1-4203f9404939\") " Nov 25 08:43:57 crc kubenswrapper[5043]: I1125 08:43:57.623939 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cac4e27-3572-4c74-86e1-4203f9404939-catalog-content\") pod \"8cac4e27-3572-4c74-86e1-4203f9404939\" (UID: \"8cac4e27-3572-4c74-86e1-4203f9404939\") " Nov 25 08:43:57 crc kubenswrapper[5043]: I1125 08:43:57.624027 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgtwv\" (UniqueName: \"kubernetes.io/projected/8cac4e27-3572-4c74-86e1-4203f9404939-kube-api-access-vgtwv\") pod \"8cac4e27-3572-4c74-86e1-4203f9404939\" (UID: \"8cac4e27-3572-4c74-86e1-4203f9404939\") " Nov 25 08:43:57 crc kubenswrapper[5043]: I1125 08:43:57.627025 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cac4e27-3572-4c74-86e1-4203f9404939-utilities" (OuterVolumeSpecName: "utilities") pod "8cac4e27-3572-4c74-86e1-4203f9404939" (UID: "8cac4e27-3572-4c74-86e1-4203f9404939"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:43:57 crc kubenswrapper[5043]: I1125 08:43:57.631368 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cac4e27-3572-4c74-86e1-4203f9404939-kube-api-access-vgtwv" (OuterVolumeSpecName: "kube-api-access-vgtwv") pod "8cac4e27-3572-4c74-86e1-4203f9404939" (UID: "8cac4e27-3572-4c74-86e1-4203f9404939"). InnerVolumeSpecName "kube-api-access-vgtwv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:43:57 crc kubenswrapper[5043]: I1125 08:43:57.681375 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cac4e27-3572-4c74-86e1-4203f9404939-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8cac4e27-3572-4c74-86e1-4203f9404939" (UID: "8cac4e27-3572-4c74-86e1-4203f9404939"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:43:57 crc kubenswrapper[5043]: I1125 08:43:57.726453 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cac4e27-3572-4c74-86e1-4203f9404939-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:43:57 crc kubenswrapper[5043]: I1125 08:43:57.726728 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cac4e27-3572-4c74-86e1-4203f9404939-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:43:57 crc kubenswrapper[5043]: I1125 08:43:57.726741 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgtwv\" (UniqueName: \"kubernetes.io/projected/8cac4e27-3572-4c74-86e1-4203f9404939-kube-api-access-vgtwv\") on node \"crc\" DevicePath \"\"" Nov 25 08:43:58 crc kubenswrapper[5043]: I1125 08:43:58.226504 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2pjq" event={"ID":"8cac4e27-3572-4c74-86e1-4203f9404939","Type":"ContainerDied","Data":"8a02059a8a5cb921c275910cd2923bea04066841d605aae14c90d7ecf8ab34d5"} Nov 25 08:43:58 crc kubenswrapper[5043]: I1125 08:43:58.226541 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w2pjq" Nov 25 08:43:58 crc kubenswrapper[5043]: I1125 08:43:58.226568 5043 scope.go:117] "RemoveContainer" containerID="a61d58d499cf509491abd91451e874c60106243593e9542711bc6d563eece427" Nov 25 08:43:58 crc kubenswrapper[5043]: I1125 08:43:58.262132 5043 scope.go:117] "RemoveContainer" containerID="f83f872dcf6b2cd158d9f1524feb38d6ba022db6e5b415430aa04c0b0a330cc0" Nov 25 08:43:58 crc kubenswrapper[5043]: I1125 08:43:58.274582 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w2pjq"] Nov 25 08:43:58 crc kubenswrapper[5043]: I1125 08:43:58.287476 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w2pjq"] Nov 25 08:43:58 crc kubenswrapper[5043]: I1125 08:43:58.321756 5043 scope.go:117] "RemoveContainer" containerID="4f3385fc9bd76c039c957473d3012d53324ad11fb188f489c898124477b9ff59" Nov 25 08:43:58 crc kubenswrapper[5043]: I1125 08:43:58.976579 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cac4e27-3572-4c74-86e1-4203f9404939" path="/var/lib/kubelet/pods/8cac4e27-3572-4c74-86e1-4203f9404939/volumes" Nov 25 08:45:00 crc kubenswrapper[5043]: I1125 08:45:00.181921 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401005-dq7xj"] Nov 25 08:45:00 crc kubenswrapper[5043]: E1125 08:45:00.183187 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd3046f-d71b-4685-b76f-c23f2507b6e7" containerName="extract-utilities" Nov 25 08:45:00 crc kubenswrapper[5043]: I1125 08:45:00.183213 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd3046f-d71b-4685-b76f-c23f2507b6e7" containerName="extract-utilities" Nov 25 08:45:00 crc kubenswrapper[5043]: E1125 08:45:00.183231 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cac4e27-3572-4c74-86e1-4203f9404939" 
containerName="extract-content" Nov 25 08:45:00 crc kubenswrapper[5043]: I1125 08:45:00.183244 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cac4e27-3572-4c74-86e1-4203f9404939" containerName="extract-content" Nov 25 08:45:00 crc kubenswrapper[5043]: E1125 08:45:00.183277 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd3046f-d71b-4685-b76f-c23f2507b6e7" containerName="extract-content" Nov 25 08:45:00 crc kubenswrapper[5043]: I1125 08:45:00.183291 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd3046f-d71b-4685-b76f-c23f2507b6e7" containerName="extract-content" Nov 25 08:45:00 crc kubenswrapper[5043]: E1125 08:45:00.183314 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd3046f-d71b-4685-b76f-c23f2507b6e7" containerName="registry-server" Nov 25 08:45:00 crc kubenswrapper[5043]: I1125 08:45:00.183327 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd3046f-d71b-4685-b76f-c23f2507b6e7" containerName="registry-server" Nov 25 08:45:00 crc kubenswrapper[5043]: E1125 08:45:00.183364 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cac4e27-3572-4c74-86e1-4203f9404939" containerName="registry-server" Nov 25 08:45:00 crc kubenswrapper[5043]: I1125 08:45:00.183378 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cac4e27-3572-4c74-86e1-4203f9404939" containerName="registry-server" Nov 25 08:45:00 crc kubenswrapper[5043]: E1125 08:45:00.183407 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cac4e27-3572-4c74-86e1-4203f9404939" containerName="extract-utilities" Nov 25 08:45:00 crc kubenswrapper[5043]: I1125 08:45:00.183421 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cac4e27-3572-4c74-86e1-4203f9404939" containerName="extract-utilities" Nov 25 08:45:00 crc kubenswrapper[5043]: I1125 08:45:00.183816 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cac4e27-3572-4c74-86e1-4203f9404939" 
containerName="registry-server" Nov 25 08:45:00 crc kubenswrapper[5043]: I1125 08:45:00.183866 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcd3046f-d71b-4685-b76f-c23f2507b6e7" containerName="registry-server" Nov 25 08:45:00 crc kubenswrapper[5043]: I1125 08:45:00.184994 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-dq7xj" Nov 25 08:45:00 crc kubenswrapper[5043]: I1125 08:45:00.187975 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 08:45:00 crc kubenswrapper[5043]: I1125 08:45:00.190221 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 08:45:00 crc kubenswrapper[5043]: I1125 08:45:00.211425 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401005-dq7xj"] Nov 25 08:45:00 crc kubenswrapper[5043]: I1125 08:45:00.282791 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3101bcb6-71b1-411f-bbc1-59562080339c-secret-volume\") pod \"collect-profiles-29401005-dq7xj\" (UID: \"3101bcb6-71b1-411f-bbc1-59562080339c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-dq7xj" Nov 25 08:45:00 crc kubenswrapper[5043]: I1125 08:45:00.282887 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4m2b\" (UniqueName: \"kubernetes.io/projected/3101bcb6-71b1-411f-bbc1-59562080339c-kube-api-access-x4m2b\") pod \"collect-profiles-29401005-dq7xj\" (UID: \"3101bcb6-71b1-411f-bbc1-59562080339c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-dq7xj" Nov 25 08:45:00 crc kubenswrapper[5043]: I1125 08:45:00.282967 
5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3101bcb6-71b1-411f-bbc1-59562080339c-config-volume\") pod \"collect-profiles-29401005-dq7xj\" (UID: \"3101bcb6-71b1-411f-bbc1-59562080339c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-dq7xj" Nov 25 08:45:00 crc kubenswrapper[5043]: I1125 08:45:00.384946 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3101bcb6-71b1-411f-bbc1-59562080339c-config-volume\") pod \"collect-profiles-29401005-dq7xj\" (UID: \"3101bcb6-71b1-411f-bbc1-59562080339c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-dq7xj" Nov 25 08:45:00 crc kubenswrapper[5043]: I1125 08:45:00.385339 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3101bcb6-71b1-411f-bbc1-59562080339c-secret-volume\") pod \"collect-profiles-29401005-dq7xj\" (UID: \"3101bcb6-71b1-411f-bbc1-59562080339c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-dq7xj" Nov 25 08:45:00 crc kubenswrapper[5043]: I1125 08:45:00.385405 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4m2b\" (UniqueName: \"kubernetes.io/projected/3101bcb6-71b1-411f-bbc1-59562080339c-kube-api-access-x4m2b\") pod \"collect-profiles-29401005-dq7xj\" (UID: \"3101bcb6-71b1-411f-bbc1-59562080339c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-dq7xj" Nov 25 08:45:00 crc kubenswrapper[5043]: I1125 08:45:00.385995 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3101bcb6-71b1-411f-bbc1-59562080339c-config-volume\") pod \"collect-profiles-29401005-dq7xj\" (UID: \"3101bcb6-71b1-411f-bbc1-59562080339c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-dq7xj" Nov 25 08:45:00 crc kubenswrapper[5043]: I1125 08:45:00.391854 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3101bcb6-71b1-411f-bbc1-59562080339c-secret-volume\") pod \"collect-profiles-29401005-dq7xj\" (UID: \"3101bcb6-71b1-411f-bbc1-59562080339c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-dq7xj" Nov 25 08:45:00 crc kubenswrapper[5043]: I1125 08:45:00.404210 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4m2b\" (UniqueName: \"kubernetes.io/projected/3101bcb6-71b1-411f-bbc1-59562080339c-kube-api-access-x4m2b\") pod \"collect-profiles-29401005-dq7xj\" (UID: \"3101bcb6-71b1-411f-bbc1-59562080339c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-dq7xj" Nov 25 08:45:00 crc kubenswrapper[5043]: I1125 08:45:00.511116 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-dq7xj" Nov 25 08:45:00 crc kubenswrapper[5043]: I1125 08:45:00.980229 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401005-dq7xj"] Nov 25 08:45:01 crc kubenswrapper[5043]: I1125 08:45:01.880347 5043 generic.go:334] "Generic (PLEG): container finished" podID="3101bcb6-71b1-411f-bbc1-59562080339c" containerID="4accfb17f4231570bacc5e7b8893336131c68bdc349dec8722a00f6ec8b5f41d" exitCode=0 Nov 25 08:45:01 crc kubenswrapper[5043]: I1125 08:45:01.880470 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-dq7xj" event={"ID":"3101bcb6-71b1-411f-bbc1-59562080339c","Type":"ContainerDied","Data":"4accfb17f4231570bacc5e7b8893336131c68bdc349dec8722a00f6ec8b5f41d"} Nov 25 08:45:01 crc kubenswrapper[5043]: I1125 08:45:01.882242 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-dq7xj" event={"ID":"3101bcb6-71b1-411f-bbc1-59562080339c","Type":"ContainerStarted","Data":"2026f6c3e1f03b77ab2ca413ccbe579edd231f31202e3044b811d4d5c9cdfd64"} Nov 25 08:45:03 crc kubenswrapper[5043]: I1125 08:45:03.322562 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-dq7xj" Nov 25 08:45:03 crc kubenswrapper[5043]: I1125 08:45:03.455158 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4m2b\" (UniqueName: \"kubernetes.io/projected/3101bcb6-71b1-411f-bbc1-59562080339c-kube-api-access-x4m2b\") pod \"3101bcb6-71b1-411f-bbc1-59562080339c\" (UID: \"3101bcb6-71b1-411f-bbc1-59562080339c\") " Nov 25 08:45:03 crc kubenswrapper[5043]: I1125 08:45:03.455698 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3101bcb6-71b1-411f-bbc1-59562080339c-config-volume\") pod \"3101bcb6-71b1-411f-bbc1-59562080339c\" (UID: \"3101bcb6-71b1-411f-bbc1-59562080339c\") " Nov 25 08:45:03 crc kubenswrapper[5043]: I1125 08:45:03.455969 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3101bcb6-71b1-411f-bbc1-59562080339c-secret-volume\") pod \"3101bcb6-71b1-411f-bbc1-59562080339c\" (UID: \"3101bcb6-71b1-411f-bbc1-59562080339c\") " Nov 25 08:45:03 crc kubenswrapper[5043]: I1125 08:45:03.456302 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3101bcb6-71b1-411f-bbc1-59562080339c-config-volume" (OuterVolumeSpecName: "config-volume") pod "3101bcb6-71b1-411f-bbc1-59562080339c" (UID: "3101bcb6-71b1-411f-bbc1-59562080339c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 08:45:03 crc kubenswrapper[5043]: I1125 08:45:03.456597 5043 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3101bcb6-71b1-411f-bbc1-59562080339c-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 08:45:03 crc kubenswrapper[5043]: I1125 08:45:03.464806 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3101bcb6-71b1-411f-bbc1-59562080339c-kube-api-access-x4m2b" (OuterVolumeSpecName: "kube-api-access-x4m2b") pod "3101bcb6-71b1-411f-bbc1-59562080339c" (UID: "3101bcb6-71b1-411f-bbc1-59562080339c"). InnerVolumeSpecName "kube-api-access-x4m2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:45:03 crc kubenswrapper[5043]: I1125 08:45:03.468155 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3101bcb6-71b1-411f-bbc1-59562080339c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3101bcb6-71b1-411f-bbc1-59562080339c" (UID: "3101bcb6-71b1-411f-bbc1-59562080339c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 08:45:03 crc kubenswrapper[5043]: I1125 08:45:03.558329 5043 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3101bcb6-71b1-411f-bbc1-59562080339c-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 08:45:03 crc kubenswrapper[5043]: I1125 08:45:03.558376 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4m2b\" (UniqueName: \"kubernetes.io/projected/3101bcb6-71b1-411f-bbc1-59562080339c-kube-api-access-x4m2b\") on node \"crc\" DevicePath \"\"" Nov 25 08:45:03 crc kubenswrapper[5043]: I1125 08:45:03.907631 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-dq7xj" event={"ID":"3101bcb6-71b1-411f-bbc1-59562080339c","Type":"ContainerDied","Data":"2026f6c3e1f03b77ab2ca413ccbe579edd231f31202e3044b811d4d5c9cdfd64"} Nov 25 08:45:03 crc kubenswrapper[5043]: I1125 08:45:03.907677 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2026f6c3e1f03b77ab2ca413ccbe579edd231f31202e3044b811d4d5c9cdfd64" Nov 25 08:45:03 crc kubenswrapper[5043]: I1125 08:45:03.907757 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401005-dq7xj" Nov 25 08:45:04 crc kubenswrapper[5043]: I1125 08:45:04.417429 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400960-lhszw"] Nov 25 08:45:04 crc kubenswrapper[5043]: I1125 08:45:04.430040 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400960-lhszw"] Nov 25 08:45:04 crc kubenswrapper[5043]: I1125 08:45:04.985879 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a" path="/var/lib/kubelet/pods/ae6fdac8-8f1b-452a-b3c3-cb7b3e554f5a/volumes" Nov 25 08:45:54 crc kubenswrapper[5043]: I1125 08:45:54.099631 5043 scope.go:117] "RemoveContainer" containerID="414d33a6246e703eeeb2cdcb402b010cc39a8bd06ee3586116c3ddcc34b6e5b1" Nov 25 08:46:17 crc kubenswrapper[5043]: I1125 08:46:17.276128 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:46:17 crc kubenswrapper[5043]: I1125 08:46:17.276506 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:46:47 crc kubenswrapper[5043]: I1125 08:46:47.276441 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Nov 25 08:46:47 crc kubenswrapper[5043]: I1125 08:46:47.276992 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:47:17 crc kubenswrapper[5043]: I1125 08:47:17.276397 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:47:17 crc kubenswrapper[5043]: I1125 08:47:17.276963 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:47:17 crc kubenswrapper[5043]: I1125 08:47:17.277006 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 08:47:17 crc kubenswrapper[5043]: I1125 08:47:17.277791 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"745d5b04de7afbcb0082f79a586522f585ce0c903de97daa463145606eb3a578"} pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:47:17 crc kubenswrapper[5043]: I1125 08:47:17.277851 5043 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://745d5b04de7afbcb0082f79a586522f585ce0c903de97daa463145606eb3a578" gracePeriod=600 Nov 25 08:47:18 crc kubenswrapper[5043]: I1125 08:47:18.165559 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="745d5b04de7afbcb0082f79a586522f585ce0c903de97daa463145606eb3a578" exitCode=0 Nov 25 08:47:18 crc kubenswrapper[5043]: I1125 08:47:18.166237 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"745d5b04de7afbcb0082f79a586522f585ce0c903de97daa463145606eb3a578"} Nov 25 08:47:18 crc kubenswrapper[5043]: I1125 08:47:18.166273 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162"} Nov 25 08:47:18 crc kubenswrapper[5043]: I1125 08:47:18.166296 5043 scope.go:117] "RemoveContainer" containerID="9b7ec8836851ae7f0593d8c03bb57fb4f16a8cf0ec2635f6bebed9f5e3c3eae1" Nov 25 08:47:54 crc kubenswrapper[5043]: I1125 08:47:54.197003 5043 scope.go:117] "RemoveContainer" containerID="6875ffc6df220f25ea6908250b67325a19ec8b25fd376d971ef8be515a614bc7" Nov 25 08:47:54 crc kubenswrapper[5043]: I1125 08:47:54.220743 5043 scope.go:117] "RemoveContainer" containerID="34f467f77d7755fe897720cb95e9d8f527bb1e56f7bdb98174419e2b6457b861" Nov 25 08:47:54 crc kubenswrapper[5043]: I1125 08:47:54.277333 5043 scope.go:117] "RemoveContainer" containerID="20177aee51bab440af01f427bedde0340edf325e777e11790326776c9eb7edd6" Nov 25 08:48:01 crc kubenswrapper[5043]: I1125 08:48:01.749240 5043 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-marketplace-6gbm9"] Nov 25 08:48:01 crc kubenswrapper[5043]: E1125 08:48:01.750802 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3101bcb6-71b1-411f-bbc1-59562080339c" containerName="collect-profiles" Nov 25 08:48:01 crc kubenswrapper[5043]: I1125 08:48:01.750883 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="3101bcb6-71b1-411f-bbc1-59562080339c" containerName="collect-profiles" Nov 25 08:48:01 crc kubenswrapper[5043]: I1125 08:48:01.751144 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="3101bcb6-71b1-411f-bbc1-59562080339c" containerName="collect-profiles" Nov 25 08:48:01 crc kubenswrapper[5043]: I1125 08:48:01.752499 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gbm9" Nov 25 08:48:01 crc kubenswrapper[5043]: I1125 08:48:01.767095 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gbm9"] Nov 25 08:48:01 crc kubenswrapper[5043]: I1125 08:48:01.841259 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60-utilities\") pod \"redhat-marketplace-6gbm9\" (UID: \"c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60\") " pod="openshift-marketplace/redhat-marketplace-6gbm9" Nov 25 08:48:01 crc kubenswrapper[5043]: I1125 08:48:01.841358 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nlqh\" (UniqueName: \"kubernetes.io/projected/c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60-kube-api-access-4nlqh\") pod \"redhat-marketplace-6gbm9\" (UID: \"c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60\") " pod="openshift-marketplace/redhat-marketplace-6gbm9" Nov 25 08:48:01 crc kubenswrapper[5043]: I1125 08:48:01.841485 5043 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60-catalog-content\") pod \"redhat-marketplace-6gbm9\" (UID: \"c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60\") " pod="openshift-marketplace/redhat-marketplace-6gbm9" Nov 25 08:48:01 crc kubenswrapper[5043]: I1125 08:48:01.943494 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60-utilities\") pod \"redhat-marketplace-6gbm9\" (UID: \"c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60\") " pod="openshift-marketplace/redhat-marketplace-6gbm9" Nov 25 08:48:01 crc kubenswrapper[5043]: I1125 08:48:01.943541 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nlqh\" (UniqueName: \"kubernetes.io/projected/c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60-kube-api-access-4nlqh\") pod \"redhat-marketplace-6gbm9\" (UID: \"c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60\") " pod="openshift-marketplace/redhat-marketplace-6gbm9" Nov 25 08:48:01 crc kubenswrapper[5043]: I1125 08:48:01.943626 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60-catalog-content\") pod \"redhat-marketplace-6gbm9\" (UID: \"c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60\") " pod="openshift-marketplace/redhat-marketplace-6gbm9" Nov 25 08:48:01 crc kubenswrapper[5043]: I1125 08:48:01.944182 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60-catalog-content\") pod \"redhat-marketplace-6gbm9\" (UID: \"c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60\") " pod="openshift-marketplace/redhat-marketplace-6gbm9" Nov 25 08:48:01 crc kubenswrapper[5043]: I1125 08:48:01.944177 5043 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60-utilities\") pod \"redhat-marketplace-6gbm9\" (UID: \"c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60\") " pod="openshift-marketplace/redhat-marketplace-6gbm9" Nov 25 08:48:01 crc kubenswrapper[5043]: I1125 08:48:01.963244 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nlqh\" (UniqueName: \"kubernetes.io/projected/c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60-kube-api-access-4nlqh\") pod \"redhat-marketplace-6gbm9\" (UID: \"c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60\") " pod="openshift-marketplace/redhat-marketplace-6gbm9" Nov 25 08:48:02 crc kubenswrapper[5043]: I1125 08:48:02.084800 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gbm9" Nov 25 08:48:02 crc kubenswrapper[5043]: I1125 08:48:02.598581 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gbm9"] Nov 25 08:48:02 crc kubenswrapper[5043]: I1125 08:48:02.617883 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gbm9" event={"ID":"c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60","Type":"ContainerStarted","Data":"d323ed5a2fa264e110b51e0cc25a7c01f17ad5eeaf61d8193d768f035eac8259"} Nov 25 08:48:03 crc kubenswrapper[5043]: I1125 08:48:03.634850 5043 generic.go:334] "Generic (PLEG): container finished" podID="c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60" containerID="4fda3e3e96a5214531207b60bd2be9bc935f0f7a6dfdc3cd5e55b4bd8142f76c" exitCode=0 Nov 25 08:48:03 crc kubenswrapper[5043]: I1125 08:48:03.634927 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gbm9" event={"ID":"c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60","Type":"ContainerDied","Data":"4fda3e3e96a5214531207b60bd2be9bc935f0f7a6dfdc3cd5e55b4bd8142f76c"} Nov 25 08:48:05 crc kubenswrapper[5043]: I1125 
08:48:05.659467 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gbm9" event={"ID":"c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60","Type":"ContainerStarted","Data":"38126c1ede8e53be79338c967bf6740542d2b2af81d4b63b5e1bf8478db74d6d"} Nov 25 08:48:06 crc kubenswrapper[5043]: I1125 08:48:06.674505 5043 generic.go:334] "Generic (PLEG): container finished" podID="c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60" containerID="38126c1ede8e53be79338c967bf6740542d2b2af81d4b63b5e1bf8478db74d6d" exitCode=0 Nov 25 08:48:06 crc kubenswrapper[5043]: I1125 08:48:06.674787 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gbm9" event={"ID":"c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60","Type":"ContainerDied","Data":"38126c1ede8e53be79338c967bf6740542d2b2af81d4b63b5e1bf8478db74d6d"} Nov 25 08:48:07 crc kubenswrapper[5043]: I1125 08:48:07.688748 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gbm9" event={"ID":"c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60","Type":"ContainerStarted","Data":"ab758423ad3ab2f04653d7ea85ef9a9a76e1e69e5cc98adf74cf0c5aba00d51c"} Nov 25 08:48:07 crc kubenswrapper[5043]: I1125 08:48:07.713154 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6gbm9" podStartSLOduration=3.223839144 podStartE2EDuration="6.713134718s" podCreationTimestamp="2025-11-25 08:48:01 +0000 UTC" firstStartedPulling="2025-11-25 08:48:03.640039341 +0000 UTC m=+5547.808235072" lastFinishedPulling="2025-11-25 08:48:07.129334925 +0000 UTC m=+5551.297530646" observedRunningTime="2025-11-25 08:48:07.706916999 +0000 UTC m=+5551.875112720" watchObservedRunningTime="2025-11-25 08:48:07.713134718 +0000 UTC m=+5551.881330429" Nov 25 08:48:12 crc kubenswrapper[5043]: I1125 08:48:12.085501 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6gbm9" Nov 
25 08:48:12 crc kubenswrapper[5043]: I1125 08:48:12.086081 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6gbm9" Nov 25 08:48:12 crc kubenswrapper[5043]: I1125 08:48:12.143289 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6gbm9" Nov 25 08:48:12 crc kubenswrapper[5043]: I1125 08:48:12.794330 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6gbm9" Nov 25 08:48:12 crc kubenswrapper[5043]: I1125 08:48:12.847529 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gbm9"] Nov 25 08:48:14 crc kubenswrapper[5043]: I1125 08:48:14.755912 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6gbm9" podUID="c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60" containerName="registry-server" containerID="cri-o://ab758423ad3ab2f04653d7ea85ef9a9a76e1e69e5cc98adf74cf0c5aba00d51c" gracePeriod=2 Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.420720 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gbm9" Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.516753 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nlqh\" (UniqueName: \"kubernetes.io/projected/c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60-kube-api-access-4nlqh\") pod \"c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60\" (UID: \"c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60\") " Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.516884 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60-catalog-content\") pod \"c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60\" (UID: \"c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60\") " Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.516950 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60-utilities\") pod \"c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60\" (UID: \"c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60\") " Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.518563 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60-utilities" (OuterVolumeSpecName: "utilities") pod "c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60" (UID: "c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.538760 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60-kube-api-access-4nlqh" (OuterVolumeSpecName: "kube-api-access-4nlqh") pod "c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60" (UID: "c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60"). InnerVolumeSpecName "kube-api-access-4nlqh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.540596 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60" (UID: "c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.619785 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nlqh\" (UniqueName: \"kubernetes.io/projected/c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60-kube-api-access-4nlqh\") on node \"crc\" DevicePath \"\"" Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.619860 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.619871 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.770442 5043 generic.go:334] "Generic (PLEG): container finished" podID="c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60" containerID="ab758423ad3ab2f04653d7ea85ef9a9a76e1e69e5cc98adf74cf0c5aba00d51c" exitCode=0 Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.770497 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gbm9" event={"ID":"c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60","Type":"ContainerDied","Data":"ab758423ad3ab2f04653d7ea85ef9a9a76e1e69e5cc98adf74cf0c5aba00d51c"} Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.770521 5043 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-6gbm9" event={"ID":"c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60","Type":"ContainerDied","Data":"d323ed5a2fa264e110b51e0cc25a7c01f17ad5eeaf61d8193d768f035eac8259"} Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.770543 5043 scope.go:117] "RemoveContainer" containerID="ab758423ad3ab2f04653d7ea85ef9a9a76e1e69e5cc98adf74cf0c5aba00d51c" Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.770891 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gbm9" Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.810632 5043 scope.go:117] "RemoveContainer" containerID="38126c1ede8e53be79338c967bf6740542d2b2af81d4b63b5e1bf8478db74d6d" Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.818119 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gbm9"] Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.847181 5043 scope.go:117] "RemoveContainer" containerID="4fda3e3e96a5214531207b60bd2be9bc935f0f7a6dfdc3cd5e55b4bd8142f76c" Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.852090 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gbm9"] Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.906943 5043 scope.go:117] "RemoveContainer" containerID="ab758423ad3ab2f04653d7ea85ef9a9a76e1e69e5cc98adf74cf0c5aba00d51c" Nov 25 08:48:15 crc kubenswrapper[5043]: E1125 08:48:15.907447 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab758423ad3ab2f04653d7ea85ef9a9a76e1e69e5cc98adf74cf0c5aba00d51c\": container with ID starting with ab758423ad3ab2f04653d7ea85ef9a9a76e1e69e5cc98adf74cf0c5aba00d51c not found: ID does not exist" containerID="ab758423ad3ab2f04653d7ea85ef9a9a76e1e69e5cc98adf74cf0c5aba00d51c" Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.907488 5043 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab758423ad3ab2f04653d7ea85ef9a9a76e1e69e5cc98adf74cf0c5aba00d51c"} err="failed to get container status \"ab758423ad3ab2f04653d7ea85ef9a9a76e1e69e5cc98adf74cf0c5aba00d51c\": rpc error: code = NotFound desc = could not find container \"ab758423ad3ab2f04653d7ea85ef9a9a76e1e69e5cc98adf74cf0c5aba00d51c\": container with ID starting with ab758423ad3ab2f04653d7ea85ef9a9a76e1e69e5cc98adf74cf0c5aba00d51c not found: ID does not exist" Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.907515 5043 scope.go:117] "RemoveContainer" containerID="38126c1ede8e53be79338c967bf6740542d2b2af81d4b63b5e1bf8478db74d6d" Nov 25 08:48:15 crc kubenswrapper[5043]: E1125 08:48:15.908247 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38126c1ede8e53be79338c967bf6740542d2b2af81d4b63b5e1bf8478db74d6d\": container with ID starting with 38126c1ede8e53be79338c967bf6740542d2b2af81d4b63b5e1bf8478db74d6d not found: ID does not exist" containerID="38126c1ede8e53be79338c967bf6740542d2b2af81d4b63b5e1bf8478db74d6d" Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.908282 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38126c1ede8e53be79338c967bf6740542d2b2af81d4b63b5e1bf8478db74d6d"} err="failed to get container status \"38126c1ede8e53be79338c967bf6740542d2b2af81d4b63b5e1bf8478db74d6d\": rpc error: code = NotFound desc = could not find container \"38126c1ede8e53be79338c967bf6740542d2b2af81d4b63b5e1bf8478db74d6d\": container with ID starting with 38126c1ede8e53be79338c967bf6740542d2b2af81d4b63b5e1bf8478db74d6d not found: ID does not exist" Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.908311 5043 scope.go:117] "RemoveContainer" containerID="4fda3e3e96a5214531207b60bd2be9bc935f0f7a6dfdc3cd5e55b4bd8142f76c" Nov 25 08:48:15 crc kubenswrapper[5043]: E1125 
08:48:15.908568 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fda3e3e96a5214531207b60bd2be9bc935f0f7a6dfdc3cd5e55b4bd8142f76c\": container with ID starting with 4fda3e3e96a5214531207b60bd2be9bc935f0f7a6dfdc3cd5e55b4bd8142f76c not found: ID does not exist" containerID="4fda3e3e96a5214531207b60bd2be9bc935f0f7a6dfdc3cd5e55b4bd8142f76c" Nov 25 08:48:15 crc kubenswrapper[5043]: I1125 08:48:15.908612 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fda3e3e96a5214531207b60bd2be9bc935f0f7a6dfdc3cd5e55b4bd8142f76c"} err="failed to get container status \"4fda3e3e96a5214531207b60bd2be9bc935f0f7a6dfdc3cd5e55b4bd8142f76c\": rpc error: code = NotFound desc = could not find container \"4fda3e3e96a5214531207b60bd2be9bc935f0f7a6dfdc3cd5e55b4bd8142f76c\": container with ID starting with 4fda3e3e96a5214531207b60bd2be9bc935f0f7a6dfdc3cd5e55b4bd8142f76c not found: ID does not exist" Nov 25 08:48:17 crc kubenswrapper[5043]: I1125 08:48:17.000073 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60" path="/var/lib/kubelet/pods/c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60/volumes" Nov 25 08:49:17 crc kubenswrapper[5043]: I1125 08:49:17.275895 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:49:17 crc kubenswrapper[5043]: I1125 08:49:17.276563 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 25 08:49:47 crc kubenswrapper[5043]: I1125 08:49:47.276874 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:49:47 crc kubenswrapper[5043]: I1125 08:49:47.277338 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:50:17 crc kubenswrapper[5043]: I1125 08:50:17.276141 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:50:17 crc kubenswrapper[5043]: I1125 08:50:17.277581 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:50:17 crc kubenswrapper[5043]: I1125 08:50:17.277740 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 08:50:17 crc kubenswrapper[5043]: I1125 08:50:17.278544 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162"} 
pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:50:17 crc kubenswrapper[5043]: I1125 08:50:17.278711 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" gracePeriod=600 Nov 25 08:50:17 crc kubenswrapper[5043]: E1125 08:50:17.402690 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:50:17 crc kubenswrapper[5043]: I1125 08:50:17.960106 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" exitCode=0 Nov 25 08:50:17 crc kubenswrapper[5043]: I1125 08:50:17.960146 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162"} Nov 25 08:50:17 crc kubenswrapper[5043]: I1125 08:50:17.960178 5043 scope.go:117] "RemoveContainer" containerID="745d5b04de7afbcb0082f79a586522f585ce0c903de97daa463145606eb3a578" Nov 25 08:50:17 crc kubenswrapper[5043]: I1125 08:50:17.960817 5043 scope.go:117] "RemoveContainer" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 
25 08:50:17 crc kubenswrapper[5043]: E1125 08:50:17.961101 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:50:32 crc kubenswrapper[5043]: I1125 08:50:32.963066 5043 scope.go:117] "RemoveContainer" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 25 08:50:32 crc kubenswrapper[5043]: E1125 08:50:32.964242 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:50:44 crc kubenswrapper[5043]: I1125 08:50:44.963449 5043 scope.go:117] "RemoveContainer" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 25 08:50:44 crc kubenswrapper[5043]: E1125 08:50:44.964302 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:50:58 crc kubenswrapper[5043]: I1125 08:50:58.963260 5043 scope.go:117] "RemoveContainer" 
containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 25 08:50:58 crc kubenswrapper[5043]: E1125 08:50:58.964731 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:51:11 crc kubenswrapper[5043]: I1125 08:51:11.963641 5043 scope.go:117] "RemoveContainer" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 25 08:51:11 crc kubenswrapper[5043]: E1125 08:51:11.965013 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:51:23 crc kubenswrapper[5043]: I1125 08:51:23.963816 5043 scope.go:117] "RemoveContainer" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 25 08:51:23 crc kubenswrapper[5043]: E1125 08:51:23.964813 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:51:36 crc kubenswrapper[5043]: I1125 08:51:36.213792 5043 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nk76d"] Nov 25 08:51:36 crc kubenswrapper[5043]: E1125 08:51:36.214559 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60" containerName="extract-utilities" Nov 25 08:51:36 crc kubenswrapper[5043]: I1125 08:51:36.214572 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60" containerName="extract-utilities" Nov 25 08:51:36 crc kubenswrapper[5043]: E1125 08:51:36.214585 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60" containerName="extract-content" Nov 25 08:51:36 crc kubenswrapper[5043]: I1125 08:51:36.214591 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60" containerName="extract-content" Nov 25 08:51:36 crc kubenswrapper[5043]: E1125 08:51:36.214618 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60" containerName="registry-server" Nov 25 08:51:36 crc kubenswrapper[5043]: I1125 08:51:36.214625 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60" containerName="registry-server" Nov 25 08:51:36 crc kubenswrapper[5043]: I1125 08:51:36.214833 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41b8042-83fb-47e9-8e2f-b3ac0c3e4c60" containerName="registry-server" Nov 25 08:51:36 crc kubenswrapper[5043]: I1125 08:51:36.216097 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nk76d" Nov 25 08:51:36 crc kubenswrapper[5043]: I1125 08:51:36.231369 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nk76d"] Nov 25 08:51:36 crc kubenswrapper[5043]: I1125 08:51:36.370025 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f05c4026-d046-4205-a394-3f555317d4bf-catalog-content\") pod \"certified-operators-nk76d\" (UID: \"f05c4026-d046-4205-a394-3f555317d4bf\") " pod="openshift-marketplace/certified-operators-nk76d" Nov 25 08:51:36 crc kubenswrapper[5043]: I1125 08:51:36.370372 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f05c4026-d046-4205-a394-3f555317d4bf-utilities\") pod \"certified-operators-nk76d\" (UID: \"f05c4026-d046-4205-a394-3f555317d4bf\") " pod="openshift-marketplace/certified-operators-nk76d" Nov 25 08:51:36 crc kubenswrapper[5043]: I1125 08:51:36.370497 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2829b\" (UniqueName: \"kubernetes.io/projected/f05c4026-d046-4205-a394-3f555317d4bf-kube-api-access-2829b\") pod \"certified-operators-nk76d\" (UID: \"f05c4026-d046-4205-a394-3f555317d4bf\") " pod="openshift-marketplace/certified-operators-nk76d" Nov 25 08:51:36 crc kubenswrapper[5043]: I1125 08:51:36.472137 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f05c4026-d046-4205-a394-3f555317d4bf-catalog-content\") pod \"certified-operators-nk76d\" (UID: \"f05c4026-d046-4205-a394-3f555317d4bf\") " pod="openshift-marketplace/certified-operators-nk76d" Nov 25 08:51:36 crc kubenswrapper[5043]: I1125 08:51:36.472184 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f05c4026-d046-4205-a394-3f555317d4bf-utilities\") pod \"certified-operators-nk76d\" (UID: \"f05c4026-d046-4205-a394-3f555317d4bf\") " pod="openshift-marketplace/certified-operators-nk76d" Nov 25 08:51:36 crc kubenswrapper[5043]: I1125 08:51:36.472216 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2829b\" (UniqueName: \"kubernetes.io/projected/f05c4026-d046-4205-a394-3f555317d4bf-kube-api-access-2829b\") pod \"certified-operators-nk76d\" (UID: \"f05c4026-d046-4205-a394-3f555317d4bf\") " pod="openshift-marketplace/certified-operators-nk76d" Nov 25 08:51:36 crc kubenswrapper[5043]: I1125 08:51:36.472568 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f05c4026-d046-4205-a394-3f555317d4bf-catalog-content\") pod \"certified-operators-nk76d\" (UID: \"f05c4026-d046-4205-a394-3f555317d4bf\") " pod="openshift-marketplace/certified-operators-nk76d" Nov 25 08:51:36 crc kubenswrapper[5043]: I1125 08:51:36.473375 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f05c4026-d046-4205-a394-3f555317d4bf-utilities\") pod \"certified-operators-nk76d\" (UID: \"f05c4026-d046-4205-a394-3f555317d4bf\") " pod="openshift-marketplace/certified-operators-nk76d" Nov 25 08:51:36 crc kubenswrapper[5043]: I1125 08:51:36.496916 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2829b\" (UniqueName: \"kubernetes.io/projected/f05c4026-d046-4205-a394-3f555317d4bf-kube-api-access-2829b\") pod \"certified-operators-nk76d\" (UID: \"f05c4026-d046-4205-a394-3f555317d4bf\") " pod="openshift-marketplace/certified-operators-nk76d" Nov 25 08:51:36 crc kubenswrapper[5043]: I1125 08:51:36.543129 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nk76d" Nov 25 08:51:36 crc kubenswrapper[5043]: I1125 08:51:36.968726 5043 scope.go:117] "RemoveContainer" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 25 08:51:36 crc kubenswrapper[5043]: E1125 08:51:36.969548 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:51:37 crc kubenswrapper[5043]: I1125 08:51:37.056592 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nk76d"] Nov 25 08:51:37 crc kubenswrapper[5043]: I1125 08:51:37.763295 5043 generic.go:334] "Generic (PLEG): container finished" podID="f05c4026-d046-4205-a394-3f555317d4bf" containerID="ca797bd17a7eda43265343451ac4c9de546d0da61e954a2791e0704d2d31cf49" exitCode=0 Nov 25 08:51:37 crc kubenswrapper[5043]: I1125 08:51:37.763523 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nk76d" event={"ID":"f05c4026-d046-4205-a394-3f555317d4bf","Type":"ContainerDied","Data":"ca797bd17a7eda43265343451ac4c9de546d0da61e954a2791e0704d2d31cf49"} Nov 25 08:51:37 crc kubenswrapper[5043]: I1125 08:51:37.763572 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nk76d" event={"ID":"f05c4026-d046-4205-a394-3f555317d4bf","Type":"ContainerStarted","Data":"2eabbc962eb45ff66ca0b85c6232caef57b5244fa2d7bcec414df590712a3a6d"} Nov 25 08:51:37 crc kubenswrapper[5043]: I1125 08:51:37.765499 5043 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 
08:51:39 crc kubenswrapper[5043]: E1125 08:51:39.322187 5043 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf05c4026_d046_4205_a394_3f555317d4bf.slice/crio-3823a5a5e4053a45d07d3959449de2504c38bd059d1a7fefd1ac3ef6986efa4f.scope\": RecentStats: unable to find data in memory cache]" Nov 25 08:51:39 crc kubenswrapper[5043]: I1125 08:51:39.787103 5043 generic.go:334] "Generic (PLEG): container finished" podID="f05c4026-d046-4205-a394-3f555317d4bf" containerID="3823a5a5e4053a45d07d3959449de2504c38bd059d1a7fefd1ac3ef6986efa4f" exitCode=0 Nov 25 08:51:39 crc kubenswrapper[5043]: I1125 08:51:39.787224 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nk76d" event={"ID":"f05c4026-d046-4205-a394-3f555317d4bf","Type":"ContainerDied","Data":"3823a5a5e4053a45d07d3959449de2504c38bd059d1a7fefd1ac3ef6986efa4f"} Nov 25 08:51:40 crc kubenswrapper[5043]: I1125 08:51:40.800266 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nk76d" event={"ID":"f05c4026-d046-4205-a394-3f555317d4bf","Type":"ContainerStarted","Data":"f361b64c94d3257e54388ec93a0f65d948a49b0c694770ce3115fc39f723272e"} Nov 25 08:51:40 crc kubenswrapper[5043]: I1125 08:51:40.857754 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nk76d" podStartSLOduration=2.419585358 podStartE2EDuration="4.857727762s" podCreationTimestamp="2025-11-25 08:51:36 +0000 UTC" firstStartedPulling="2025-11-25 08:51:37.76508786 +0000 UTC m=+5761.933283601" lastFinishedPulling="2025-11-25 08:51:40.203230284 +0000 UTC m=+5764.371426005" observedRunningTime="2025-11-25 08:51:40.826038838 +0000 UTC m=+5764.994234559" watchObservedRunningTime="2025-11-25 08:51:40.857727762 +0000 UTC m=+5765.025923483" Nov 25 08:51:46 crc kubenswrapper[5043]: I1125 
08:51:46.543799 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nk76d" Nov 25 08:51:46 crc kubenswrapper[5043]: I1125 08:51:46.544444 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nk76d" Nov 25 08:51:46 crc kubenswrapper[5043]: I1125 08:51:46.606714 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nk76d" Nov 25 08:51:46 crc kubenswrapper[5043]: I1125 08:51:46.912224 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nk76d" Nov 25 08:51:46 crc kubenswrapper[5043]: I1125 08:51:46.975222 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nk76d"] Nov 25 08:51:48 crc kubenswrapper[5043]: I1125 08:51:48.881640 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nk76d" podUID="f05c4026-d046-4205-a394-3f555317d4bf" containerName="registry-server" containerID="cri-o://f361b64c94d3257e54388ec93a0f65d948a49b0c694770ce3115fc39f723272e" gracePeriod=2 Nov 25 08:51:48 crc kubenswrapper[5043]: I1125 08:51:48.963222 5043 scope.go:117] "RemoveContainer" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 25 08:51:48 crc kubenswrapper[5043]: E1125 08:51:48.963541 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:51:49 crc kubenswrapper[5043]: I1125 08:51:49.524528 5043 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nk76d" Nov 25 08:51:49 crc kubenswrapper[5043]: I1125 08:51:49.562799 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f05c4026-d046-4205-a394-3f555317d4bf-utilities\") pod \"f05c4026-d046-4205-a394-3f555317d4bf\" (UID: \"f05c4026-d046-4205-a394-3f555317d4bf\") " Nov 25 08:51:49 crc kubenswrapper[5043]: I1125 08:51:49.562985 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f05c4026-d046-4205-a394-3f555317d4bf-catalog-content\") pod \"f05c4026-d046-4205-a394-3f555317d4bf\" (UID: \"f05c4026-d046-4205-a394-3f555317d4bf\") " Nov 25 08:51:49 crc kubenswrapper[5043]: I1125 08:51:49.563088 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2829b\" (UniqueName: \"kubernetes.io/projected/f05c4026-d046-4205-a394-3f555317d4bf-kube-api-access-2829b\") pod \"f05c4026-d046-4205-a394-3f555317d4bf\" (UID: \"f05c4026-d046-4205-a394-3f555317d4bf\") " Nov 25 08:51:49 crc kubenswrapper[5043]: I1125 08:51:49.580235 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f05c4026-d046-4205-a394-3f555317d4bf-kube-api-access-2829b" (OuterVolumeSpecName: "kube-api-access-2829b") pod "f05c4026-d046-4205-a394-3f555317d4bf" (UID: "f05c4026-d046-4205-a394-3f555317d4bf"). InnerVolumeSpecName "kube-api-access-2829b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:51:49 crc kubenswrapper[5043]: I1125 08:51:49.599821 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f05c4026-d046-4205-a394-3f555317d4bf-utilities" (OuterVolumeSpecName: "utilities") pod "f05c4026-d046-4205-a394-3f555317d4bf" (UID: "f05c4026-d046-4205-a394-3f555317d4bf"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:51:49 crc kubenswrapper[5043]: I1125 08:51:49.661926 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f05c4026-d046-4205-a394-3f555317d4bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f05c4026-d046-4205-a394-3f555317d4bf" (UID: "f05c4026-d046-4205-a394-3f555317d4bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:51:49 crc kubenswrapper[5043]: I1125 08:51:49.672641 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f05c4026-d046-4205-a394-3f555317d4bf-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:51:49 crc kubenswrapper[5043]: I1125 08:51:49.672677 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2829b\" (UniqueName: \"kubernetes.io/projected/f05c4026-d046-4205-a394-3f555317d4bf-kube-api-access-2829b\") on node \"crc\" DevicePath \"\"" Nov 25 08:51:49 crc kubenswrapper[5043]: I1125 08:51:49.672689 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f05c4026-d046-4205-a394-3f555317d4bf-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:51:49 crc kubenswrapper[5043]: I1125 08:51:49.892584 5043 generic.go:334] "Generic (PLEG): container finished" podID="f05c4026-d046-4205-a394-3f555317d4bf" containerID="f361b64c94d3257e54388ec93a0f65d948a49b0c694770ce3115fc39f723272e" exitCode=0 Nov 25 08:51:49 crc kubenswrapper[5043]: I1125 08:51:49.892648 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nk76d" event={"ID":"f05c4026-d046-4205-a394-3f555317d4bf","Type":"ContainerDied","Data":"f361b64c94d3257e54388ec93a0f65d948a49b0c694770ce3115fc39f723272e"} Nov 25 08:51:49 crc kubenswrapper[5043]: I1125 08:51:49.892674 5043 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nk76d" event={"ID":"f05c4026-d046-4205-a394-3f555317d4bf","Type":"ContainerDied","Data":"2eabbc962eb45ff66ca0b85c6232caef57b5244fa2d7bcec414df590712a3a6d"} Nov 25 08:51:49 crc kubenswrapper[5043]: I1125 08:51:49.892693 5043 scope.go:117] "RemoveContainer" containerID="f361b64c94d3257e54388ec93a0f65d948a49b0c694770ce3115fc39f723272e" Nov 25 08:51:49 crc kubenswrapper[5043]: I1125 08:51:49.892717 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nk76d" Nov 25 08:51:49 crc kubenswrapper[5043]: I1125 08:51:49.939631 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nk76d"] Nov 25 08:51:49 crc kubenswrapper[5043]: I1125 08:51:49.941549 5043 scope.go:117] "RemoveContainer" containerID="3823a5a5e4053a45d07d3959449de2504c38bd059d1a7fefd1ac3ef6986efa4f" Nov 25 08:51:49 crc kubenswrapper[5043]: I1125 08:51:49.951299 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nk76d"] Nov 25 08:51:49 crc kubenswrapper[5043]: I1125 08:51:49.962369 5043 scope.go:117] "RemoveContainer" containerID="ca797bd17a7eda43265343451ac4c9de546d0da61e954a2791e0704d2d31cf49" Nov 25 08:51:50 crc kubenswrapper[5043]: I1125 08:51:50.011107 5043 scope.go:117] "RemoveContainer" containerID="f361b64c94d3257e54388ec93a0f65d948a49b0c694770ce3115fc39f723272e" Nov 25 08:51:50 crc kubenswrapper[5043]: E1125 08:51:50.011583 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f361b64c94d3257e54388ec93a0f65d948a49b0c694770ce3115fc39f723272e\": container with ID starting with f361b64c94d3257e54388ec93a0f65d948a49b0c694770ce3115fc39f723272e not found: ID does not exist" containerID="f361b64c94d3257e54388ec93a0f65d948a49b0c694770ce3115fc39f723272e" Nov 25 08:51:50 crc 
kubenswrapper[5043]: I1125 08:51:50.011673 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f361b64c94d3257e54388ec93a0f65d948a49b0c694770ce3115fc39f723272e"} err="failed to get container status \"f361b64c94d3257e54388ec93a0f65d948a49b0c694770ce3115fc39f723272e\": rpc error: code = NotFound desc = could not find container \"f361b64c94d3257e54388ec93a0f65d948a49b0c694770ce3115fc39f723272e\": container with ID starting with f361b64c94d3257e54388ec93a0f65d948a49b0c694770ce3115fc39f723272e not found: ID does not exist" Nov 25 08:51:50 crc kubenswrapper[5043]: I1125 08:51:50.011706 5043 scope.go:117] "RemoveContainer" containerID="3823a5a5e4053a45d07d3959449de2504c38bd059d1a7fefd1ac3ef6986efa4f" Nov 25 08:51:50 crc kubenswrapper[5043]: E1125 08:51:50.012266 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3823a5a5e4053a45d07d3959449de2504c38bd059d1a7fefd1ac3ef6986efa4f\": container with ID starting with 3823a5a5e4053a45d07d3959449de2504c38bd059d1a7fefd1ac3ef6986efa4f not found: ID does not exist" containerID="3823a5a5e4053a45d07d3959449de2504c38bd059d1a7fefd1ac3ef6986efa4f" Nov 25 08:51:50 crc kubenswrapper[5043]: I1125 08:51:50.012313 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3823a5a5e4053a45d07d3959449de2504c38bd059d1a7fefd1ac3ef6986efa4f"} err="failed to get container status \"3823a5a5e4053a45d07d3959449de2504c38bd059d1a7fefd1ac3ef6986efa4f\": rpc error: code = NotFound desc = could not find container \"3823a5a5e4053a45d07d3959449de2504c38bd059d1a7fefd1ac3ef6986efa4f\": container with ID starting with 3823a5a5e4053a45d07d3959449de2504c38bd059d1a7fefd1ac3ef6986efa4f not found: ID does not exist" Nov 25 08:51:50 crc kubenswrapper[5043]: I1125 08:51:50.012345 5043 scope.go:117] "RemoveContainer" containerID="ca797bd17a7eda43265343451ac4c9de546d0da61e954a2791e0704d2d31cf49" Nov 25 
08:51:50 crc kubenswrapper[5043]: E1125 08:51:50.012670 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca797bd17a7eda43265343451ac4c9de546d0da61e954a2791e0704d2d31cf49\": container with ID starting with ca797bd17a7eda43265343451ac4c9de546d0da61e954a2791e0704d2d31cf49 not found: ID does not exist" containerID="ca797bd17a7eda43265343451ac4c9de546d0da61e954a2791e0704d2d31cf49" Nov 25 08:51:50 crc kubenswrapper[5043]: I1125 08:51:50.012700 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca797bd17a7eda43265343451ac4c9de546d0da61e954a2791e0704d2d31cf49"} err="failed to get container status \"ca797bd17a7eda43265343451ac4c9de546d0da61e954a2791e0704d2d31cf49\": rpc error: code = NotFound desc = could not find container \"ca797bd17a7eda43265343451ac4c9de546d0da61e954a2791e0704d2d31cf49\": container with ID starting with ca797bd17a7eda43265343451ac4c9de546d0da61e954a2791e0704d2d31cf49 not found: ID does not exist" Nov 25 08:51:50 crc kubenswrapper[5043]: I1125 08:51:50.973043 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f05c4026-d046-4205-a394-3f555317d4bf" path="/var/lib/kubelet/pods/f05c4026-d046-4205-a394-3f555317d4bf/volumes" Nov 25 08:52:01 crc kubenswrapper[5043]: I1125 08:52:01.962780 5043 scope.go:117] "RemoveContainer" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 25 08:52:01 crc kubenswrapper[5043]: E1125 08:52:01.963787 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:52:16 crc 
kubenswrapper[5043]: I1125 08:52:16.974997 5043 scope.go:117] "RemoveContainer" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 25 08:52:16 crc kubenswrapper[5043]: E1125 08:52:16.976018 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:52:30 crc kubenswrapper[5043]: I1125 08:52:30.962963 5043 scope.go:117] "RemoveContainer" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 25 08:52:30 crc kubenswrapper[5043]: E1125 08:52:30.964065 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:52:44 crc kubenswrapper[5043]: I1125 08:52:44.963239 5043 scope.go:117] "RemoveContainer" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 25 08:52:44 crc kubenswrapper[5043]: E1125 08:52:44.964471 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 
25 08:52:55 crc kubenswrapper[5043]: I1125 08:52:55.963139 5043 scope.go:117] "RemoveContainer" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 25 08:52:55 crc kubenswrapper[5043]: E1125 08:52:55.963890 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:53:08 crc kubenswrapper[5043]: I1125 08:53:08.966292 5043 scope.go:117] "RemoveContainer" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 25 08:53:08 crc kubenswrapper[5043]: E1125 08:53:08.967243 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:53:21 crc kubenswrapper[5043]: I1125 08:53:21.962858 5043 scope.go:117] "RemoveContainer" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 25 08:53:21 crc kubenswrapper[5043]: E1125 08:53:21.963524 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" 
podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:53:29 crc kubenswrapper[5043]: I1125 08:53:29.352213 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2wt55"] Nov 25 08:53:29 crc kubenswrapper[5043]: E1125 08:53:29.353203 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f05c4026-d046-4205-a394-3f555317d4bf" containerName="extract-utilities" Nov 25 08:53:29 crc kubenswrapper[5043]: I1125 08:53:29.353218 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="f05c4026-d046-4205-a394-3f555317d4bf" containerName="extract-utilities" Nov 25 08:53:29 crc kubenswrapper[5043]: E1125 08:53:29.353267 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f05c4026-d046-4205-a394-3f555317d4bf" containerName="extract-content" Nov 25 08:53:29 crc kubenswrapper[5043]: I1125 08:53:29.353275 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="f05c4026-d046-4205-a394-3f555317d4bf" containerName="extract-content" Nov 25 08:53:29 crc kubenswrapper[5043]: E1125 08:53:29.353292 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f05c4026-d046-4205-a394-3f555317d4bf" containerName="registry-server" Nov 25 08:53:29 crc kubenswrapper[5043]: I1125 08:53:29.353299 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="f05c4026-d046-4205-a394-3f555317d4bf" containerName="registry-server" Nov 25 08:53:29 crc kubenswrapper[5043]: I1125 08:53:29.353514 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="f05c4026-d046-4205-a394-3f555317d4bf" containerName="registry-server" Nov 25 08:53:29 crc kubenswrapper[5043]: I1125 08:53:29.355250 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2wt55" Nov 25 08:53:29 crc kubenswrapper[5043]: I1125 08:53:29.376685 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2wt55"] Nov 25 08:53:29 crc kubenswrapper[5043]: I1125 08:53:29.481775 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/053d4c47-49a2-4bc3-8e17-9d097da24fe8-utilities\") pod \"redhat-operators-2wt55\" (UID: \"053d4c47-49a2-4bc3-8e17-9d097da24fe8\") " pod="openshift-marketplace/redhat-operators-2wt55" Nov 25 08:53:29 crc kubenswrapper[5043]: I1125 08:53:29.481846 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l2h6\" (UniqueName: \"kubernetes.io/projected/053d4c47-49a2-4bc3-8e17-9d097da24fe8-kube-api-access-5l2h6\") pod \"redhat-operators-2wt55\" (UID: \"053d4c47-49a2-4bc3-8e17-9d097da24fe8\") " pod="openshift-marketplace/redhat-operators-2wt55" Nov 25 08:53:29 crc kubenswrapper[5043]: I1125 08:53:29.482202 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/053d4c47-49a2-4bc3-8e17-9d097da24fe8-catalog-content\") pod \"redhat-operators-2wt55\" (UID: \"053d4c47-49a2-4bc3-8e17-9d097da24fe8\") " pod="openshift-marketplace/redhat-operators-2wt55" Nov 25 08:53:29 crc kubenswrapper[5043]: I1125 08:53:29.583540 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/053d4c47-49a2-4bc3-8e17-9d097da24fe8-utilities\") pod \"redhat-operators-2wt55\" (UID: \"053d4c47-49a2-4bc3-8e17-9d097da24fe8\") " pod="openshift-marketplace/redhat-operators-2wt55" Nov 25 08:53:29 crc kubenswrapper[5043]: I1125 08:53:29.583618 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5l2h6\" (UniqueName: \"kubernetes.io/projected/053d4c47-49a2-4bc3-8e17-9d097da24fe8-kube-api-access-5l2h6\") pod \"redhat-operators-2wt55\" (UID: \"053d4c47-49a2-4bc3-8e17-9d097da24fe8\") " pod="openshift-marketplace/redhat-operators-2wt55" Nov 25 08:53:29 crc kubenswrapper[5043]: I1125 08:53:29.583710 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/053d4c47-49a2-4bc3-8e17-9d097da24fe8-catalog-content\") pod \"redhat-operators-2wt55\" (UID: \"053d4c47-49a2-4bc3-8e17-9d097da24fe8\") " pod="openshift-marketplace/redhat-operators-2wt55" Nov 25 08:53:29 crc kubenswrapper[5043]: I1125 08:53:29.584087 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/053d4c47-49a2-4bc3-8e17-9d097da24fe8-catalog-content\") pod \"redhat-operators-2wt55\" (UID: \"053d4c47-49a2-4bc3-8e17-9d097da24fe8\") " pod="openshift-marketplace/redhat-operators-2wt55" Nov 25 08:53:29 crc kubenswrapper[5043]: I1125 08:53:29.584088 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/053d4c47-49a2-4bc3-8e17-9d097da24fe8-utilities\") pod \"redhat-operators-2wt55\" (UID: \"053d4c47-49a2-4bc3-8e17-9d097da24fe8\") " pod="openshift-marketplace/redhat-operators-2wt55" Nov 25 08:53:29 crc kubenswrapper[5043]: I1125 08:53:29.610936 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l2h6\" (UniqueName: \"kubernetes.io/projected/053d4c47-49a2-4bc3-8e17-9d097da24fe8-kube-api-access-5l2h6\") pod \"redhat-operators-2wt55\" (UID: \"053d4c47-49a2-4bc3-8e17-9d097da24fe8\") " pod="openshift-marketplace/redhat-operators-2wt55" Nov 25 08:53:29 crc kubenswrapper[5043]: I1125 08:53:29.703154 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2wt55" Nov 25 08:53:30 crc kubenswrapper[5043]: I1125 08:53:30.175058 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2wt55"] Nov 25 08:53:30 crc kubenswrapper[5043]: I1125 08:53:30.729876 5043 generic.go:334] "Generic (PLEG): container finished" podID="053d4c47-49a2-4bc3-8e17-9d097da24fe8" containerID="ea00e582769c6b71eee1db5078b1347438e6b21a262c46d8d9c8c18573667c74" exitCode=0 Nov 25 08:53:30 crc kubenswrapper[5043]: I1125 08:53:30.729937 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wt55" event={"ID":"053d4c47-49a2-4bc3-8e17-9d097da24fe8","Type":"ContainerDied","Data":"ea00e582769c6b71eee1db5078b1347438e6b21a262c46d8d9c8c18573667c74"} Nov 25 08:53:30 crc kubenswrapper[5043]: I1125 08:53:30.730139 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wt55" event={"ID":"053d4c47-49a2-4bc3-8e17-9d097da24fe8","Type":"ContainerStarted","Data":"d6e9877e54b0ec9b54d36fd228ba5623b1e2fcc0cb326d624b35c30edbbbe9e1"} Nov 25 08:53:32 crc kubenswrapper[5043]: I1125 08:53:32.753027 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wt55" event={"ID":"053d4c47-49a2-4bc3-8e17-9d097da24fe8","Type":"ContainerStarted","Data":"c4931d3fa48c8fafdbf6115a451a04c8c4558227bb2d83aaaf95438af73360a8"} Nov 25 08:53:34 crc kubenswrapper[5043]: I1125 08:53:34.962974 5043 scope.go:117] "RemoveContainer" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 25 08:53:34 crc kubenswrapper[5043]: E1125 08:53:34.963757 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:53:38 crc kubenswrapper[5043]: I1125 08:53:38.815208 5043 generic.go:334] "Generic (PLEG): container finished" podID="053d4c47-49a2-4bc3-8e17-9d097da24fe8" containerID="c4931d3fa48c8fafdbf6115a451a04c8c4558227bb2d83aaaf95438af73360a8" exitCode=0 Nov 25 08:53:38 crc kubenswrapper[5043]: I1125 08:53:38.815976 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wt55" event={"ID":"053d4c47-49a2-4bc3-8e17-9d097da24fe8","Type":"ContainerDied","Data":"c4931d3fa48c8fafdbf6115a451a04c8c4558227bb2d83aaaf95438af73360a8"} Nov 25 08:53:39 crc kubenswrapper[5043]: I1125 08:53:39.828084 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wt55" event={"ID":"053d4c47-49a2-4bc3-8e17-9d097da24fe8","Type":"ContainerStarted","Data":"3a72b4465836316331652e738beb0391797ac7ef6801578f962954fd22468cce"} Nov 25 08:53:39 crc kubenswrapper[5043]: I1125 08:53:39.853764 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2wt55" podStartSLOduration=2.332416143 podStartE2EDuration="10.853742557s" podCreationTimestamp="2025-11-25 08:53:29 +0000 UTC" firstStartedPulling="2025-11-25 08:53:30.732768623 +0000 UTC m=+5874.900964344" lastFinishedPulling="2025-11-25 08:53:39.254095037 +0000 UTC m=+5883.422290758" observedRunningTime="2025-11-25 08:53:39.849300277 +0000 UTC m=+5884.017496098" watchObservedRunningTime="2025-11-25 08:53:39.853742557 +0000 UTC m=+5884.021938278" Nov 25 08:53:49 crc kubenswrapper[5043]: I1125 08:53:49.704413 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2wt55" Nov 25 08:53:49 crc kubenswrapper[5043]: I1125 
08:53:49.704891 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2wt55" Nov 25 08:53:49 crc kubenswrapper[5043]: I1125 08:53:49.963083 5043 scope.go:117] "RemoveContainer" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 25 08:53:49 crc kubenswrapper[5043]: E1125 08:53:49.963566 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:53:50 crc kubenswrapper[5043]: I1125 08:53:50.754416 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2wt55" podUID="053d4c47-49a2-4bc3-8e17-9d097da24fe8" containerName="registry-server" probeResult="failure" output=< Nov 25 08:53:50 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s Nov 25 08:53:50 crc kubenswrapper[5043]: > Nov 25 08:53:55 crc kubenswrapper[5043]: I1125 08:53:55.086330 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bgsxn"] Nov 25 08:53:55 crc kubenswrapper[5043]: I1125 08:53:55.089990 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bgsxn" Nov 25 08:53:55 crc kubenswrapper[5043]: I1125 08:53:55.104918 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bgsxn"] Nov 25 08:53:55 crc kubenswrapper[5043]: I1125 08:53:55.251084 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbb5a98-b06a-4b2f-9bb4-6a347e19995a-catalog-content\") pod \"community-operators-bgsxn\" (UID: \"8bbb5a98-b06a-4b2f-9bb4-6a347e19995a\") " pod="openshift-marketplace/community-operators-bgsxn" Nov 25 08:53:55 crc kubenswrapper[5043]: I1125 08:53:55.251268 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbb5a98-b06a-4b2f-9bb4-6a347e19995a-utilities\") pod \"community-operators-bgsxn\" (UID: \"8bbb5a98-b06a-4b2f-9bb4-6a347e19995a\") " pod="openshift-marketplace/community-operators-bgsxn" Nov 25 08:53:55 crc kubenswrapper[5043]: I1125 08:53:55.251308 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vchz\" (UniqueName: \"kubernetes.io/projected/8bbb5a98-b06a-4b2f-9bb4-6a347e19995a-kube-api-access-2vchz\") pod \"community-operators-bgsxn\" (UID: \"8bbb5a98-b06a-4b2f-9bb4-6a347e19995a\") " pod="openshift-marketplace/community-operators-bgsxn" Nov 25 08:53:55 crc kubenswrapper[5043]: I1125 08:53:55.353238 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbb5a98-b06a-4b2f-9bb4-6a347e19995a-utilities\") pod \"community-operators-bgsxn\" (UID: \"8bbb5a98-b06a-4b2f-9bb4-6a347e19995a\") " pod="openshift-marketplace/community-operators-bgsxn" Nov 25 08:53:55 crc kubenswrapper[5043]: I1125 08:53:55.353307 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2vchz\" (UniqueName: \"kubernetes.io/projected/8bbb5a98-b06a-4b2f-9bb4-6a347e19995a-kube-api-access-2vchz\") pod \"community-operators-bgsxn\" (UID: \"8bbb5a98-b06a-4b2f-9bb4-6a347e19995a\") " pod="openshift-marketplace/community-operators-bgsxn" Nov 25 08:53:55 crc kubenswrapper[5043]: I1125 08:53:55.353379 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbb5a98-b06a-4b2f-9bb4-6a347e19995a-catalog-content\") pod \"community-operators-bgsxn\" (UID: \"8bbb5a98-b06a-4b2f-9bb4-6a347e19995a\") " pod="openshift-marketplace/community-operators-bgsxn" Nov 25 08:53:55 crc kubenswrapper[5043]: I1125 08:53:55.354076 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbb5a98-b06a-4b2f-9bb4-6a347e19995a-catalog-content\") pod \"community-operators-bgsxn\" (UID: \"8bbb5a98-b06a-4b2f-9bb4-6a347e19995a\") " pod="openshift-marketplace/community-operators-bgsxn" Nov 25 08:53:55 crc kubenswrapper[5043]: I1125 08:53:55.354224 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbb5a98-b06a-4b2f-9bb4-6a347e19995a-utilities\") pod \"community-operators-bgsxn\" (UID: \"8bbb5a98-b06a-4b2f-9bb4-6a347e19995a\") " pod="openshift-marketplace/community-operators-bgsxn" Nov 25 08:53:55 crc kubenswrapper[5043]: I1125 08:53:55.384499 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vchz\" (UniqueName: \"kubernetes.io/projected/8bbb5a98-b06a-4b2f-9bb4-6a347e19995a-kube-api-access-2vchz\") pod \"community-operators-bgsxn\" (UID: \"8bbb5a98-b06a-4b2f-9bb4-6a347e19995a\") " pod="openshift-marketplace/community-operators-bgsxn" Nov 25 08:53:55 crc kubenswrapper[5043]: I1125 08:53:55.473538 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bgsxn" Nov 25 08:53:55 crc kubenswrapper[5043]: I1125 08:53:55.995887 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bgsxn"] Nov 25 08:53:57 crc kubenswrapper[5043]: I1125 08:53:57.011215 5043 generic.go:334] "Generic (PLEG): container finished" podID="8bbb5a98-b06a-4b2f-9bb4-6a347e19995a" containerID="6b743c41071b9f45ce34ae05b5b79cddf66442a26b244436b16fef938be6e6fe" exitCode=0 Nov 25 08:53:57 crc kubenswrapper[5043]: I1125 08:53:57.011640 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgsxn" event={"ID":"8bbb5a98-b06a-4b2f-9bb4-6a347e19995a","Type":"ContainerDied","Data":"6b743c41071b9f45ce34ae05b5b79cddf66442a26b244436b16fef938be6e6fe"} Nov 25 08:53:57 crc kubenswrapper[5043]: I1125 08:53:57.011679 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgsxn" event={"ID":"8bbb5a98-b06a-4b2f-9bb4-6a347e19995a","Type":"ContainerStarted","Data":"6586fce08a2058620fc50d79f4b6693495284662380fd1ebce28d1e3153655bf"} Nov 25 08:53:58 crc kubenswrapper[5043]: I1125 08:53:58.022820 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgsxn" event={"ID":"8bbb5a98-b06a-4b2f-9bb4-6a347e19995a","Type":"ContainerStarted","Data":"8954071a81ae7b6dce0876659b5595ac13d6440b52ee3512470ad5ef3ee05377"} Nov 25 08:53:59 crc kubenswrapper[5043]: I1125 08:53:59.038410 5043 generic.go:334] "Generic (PLEG): container finished" podID="8bbb5a98-b06a-4b2f-9bb4-6a347e19995a" containerID="8954071a81ae7b6dce0876659b5595ac13d6440b52ee3512470ad5ef3ee05377" exitCode=0 Nov 25 08:53:59 crc kubenswrapper[5043]: I1125 08:53:59.038495 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgsxn" 
event={"ID":"8bbb5a98-b06a-4b2f-9bb4-6a347e19995a","Type":"ContainerDied","Data":"8954071a81ae7b6dce0876659b5595ac13d6440b52ee3512470ad5ef3ee05377"} Nov 25 08:54:00 crc kubenswrapper[5043]: I1125 08:54:00.053213 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgsxn" event={"ID":"8bbb5a98-b06a-4b2f-9bb4-6a347e19995a","Type":"ContainerStarted","Data":"01124b2cc72a1f8d248a8fa20bcc07ddfea4328ce6af8aaa330eb6cf1fa2d4a8"} Nov 25 08:54:00 crc kubenswrapper[5043]: I1125 08:54:00.103935 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bgsxn" podStartSLOduration=2.607046509 podStartE2EDuration="5.103911367s" podCreationTimestamp="2025-11-25 08:53:55 +0000 UTC" firstStartedPulling="2025-11-25 08:53:57.014270264 +0000 UTC m=+5901.182465995" lastFinishedPulling="2025-11-25 08:53:59.511135122 +0000 UTC m=+5903.679330853" observedRunningTime="2025-11-25 08:54:00.087345852 +0000 UTC m=+5904.255541573" watchObservedRunningTime="2025-11-25 08:54:00.103911367 +0000 UTC m=+5904.272107108" Nov 25 08:54:00 crc kubenswrapper[5043]: I1125 08:54:00.762102 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2wt55" podUID="053d4c47-49a2-4bc3-8e17-9d097da24fe8" containerName="registry-server" probeResult="failure" output=< Nov 25 08:54:00 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s Nov 25 08:54:00 crc kubenswrapper[5043]: > Nov 25 08:54:03 crc kubenswrapper[5043]: I1125 08:54:03.962936 5043 scope.go:117] "RemoveContainer" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 25 08:54:03 crc kubenswrapper[5043]: E1125 08:54:03.963738 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:54:05 crc kubenswrapper[5043]: I1125 08:54:05.474334 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bgsxn" Nov 25 08:54:05 crc kubenswrapper[5043]: I1125 08:54:05.474979 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bgsxn" Nov 25 08:54:06 crc kubenswrapper[5043]: I1125 08:54:06.527241 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-bgsxn" podUID="8bbb5a98-b06a-4b2f-9bb4-6a347e19995a" containerName="registry-server" probeResult="failure" output=< Nov 25 08:54:06 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s Nov 25 08:54:06 crc kubenswrapper[5043]: > Nov 25 08:54:10 crc kubenswrapper[5043]: I1125 08:54:10.771225 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2wt55" podUID="053d4c47-49a2-4bc3-8e17-9d097da24fe8" containerName="registry-server" probeResult="failure" output=< Nov 25 08:54:10 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s Nov 25 08:54:10 crc kubenswrapper[5043]: > Nov 25 08:54:15 crc kubenswrapper[5043]: I1125 08:54:15.546481 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bgsxn" Nov 25 08:54:15 crc kubenswrapper[5043]: I1125 08:54:15.622403 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bgsxn" Nov 25 08:54:15 crc kubenswrapper[5043]: I1125 08:54:15.797474 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bgsxn"] Nov 25 
08:54:15 crc kubenswrapper[5043]: I1125 08:54:15.963865 5043 scope.go:117] "RemoveContainer" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 25 08:54:15 crc kubenswrapper[5043]: E1125 08:54:15.964234 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:54:17 crc kubenswrapper[5043]: I1125 08:54:17.238828 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bgsxn" podUID="8bbb5a98-b06a-4b2f-9bb4-6a347e19995a" containerName="registry-server" containerID="cri-o://01124b2cc72a1f8d248a8fa20bcc07ddfea4328ce6af8aaa330eb6cf1fa2d4a8" gracePeriod=2 Nov 25 08:54:17 crc kubenswrapper[5043]: I1125 08:54:17.808900 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bgsxn" Nov 25 08:54:17 crc kubenswrapper[5043]: I1125 08:54:17.955207 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbb5a98-b06a-4b2f-9bb4-6a347e19995a-catalog-content\") pod \"8bbb5a98-b06a-4b2f-9bb4-6a347e19995a\" (UID: \"8bbb5a98-b06a-4b2f-9bb4-6a347e19995a\") " Nov 25 08:54:17 crc kubenswrapper[5043]: I1125 08:54:17.955325 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vchz\" (UniqueName: \"kubernetes.io/projected/8bbb5a98-b06a-4b2f-9bb4-6a347e19995a-kube-api-access-2vchz\") pod \"8bbb5a98-b06a-4b2f-9bb4-6a347e19995a\" (UID: \"8bbb5a98-b06a-4b2f-9bb4-6a347e19995a\") " Nov 25 08:54:17 crc kubenswrapper[5043]: I1125 08:54:17.955400 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbb5a98-b06a-4b2f-9bb4-6a347e19995a-utilities\") pod \"8bbb5a98-b06a-4b2f-9bb4-6a347e19995a\" (UID: \"8bbb5a98-b06a-4b2f-9bb4-6a347e19995a\") " Nov 25 08:54:17 crc kubenswrapper[5043]: I1125 08:54:17.956120 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bbb5a98-b06a-4b2f-9bb4-6a347e19995a-utilities" (OuterVolumeSpecName: "utilities") pod "8bbb5a98-b06a-4b2f-9bb4-6a347e19995a" (UID: "8bbb5a98-b06a-4b2f-9bb4-6a347e19995a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:54:17 crc kubenswrapper[5043]: I1125 08:54:17.956418 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbb5a98-b06a-4b2f-9bb4-6a347e19995a-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:54:17 crc kubenswrapper[5043]: I1125 08:54:17.964307 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bbb5a98-b06a-4b2f-9bb4-6a347e19995a-kube-api-access-2vchz" (OuterVolumeSpecName: "kube-api-access-2vchz") pod "8bbb5a98-b06a-4b2f-9bb4-6a347e19995a" (UID: "8bbb5a98-b06a-4b2f-9bb4-6a347e19995a"). InnerVolumeSpecName "kube-api-access-2vchz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:54:18 crc kubenswrapper[5043]: I1125 08:54:18.017388 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bbb5a98-b06a-4b2f-9bb4-6a347e19995a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bbb5a98-b06a-4b2f-9bb4-6a347e19995a" (UID: "8bbb5a98-b06a-4b2f-9bb4-6a347e19995a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:54:18 crc kubenswrapper[5043]: I1125 08:54:18.058915 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbb5a98-b06a-4b2f-9bb4-6a347e19995a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:54:18 crc kubenswrapper[5043]: I1125 08:54:18.058971 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vchz\" (UniqueName: \"kubernetes.io/projected/8bbb5a98-b06a-4b2f-9bb4-6a347e19995a-kube-api-access-2vchz\") on node \"crc\" DevicePath \"\"" Nov 25 08:54:18 crc kubenswrapper[5043]: I1125 08:54:18.247649 5043 generic.go:334] "Generic (PLEG): container finished" podID="8bbb5a98-b06a-4b2f-9bb4-6a347e19995a" containerID="01124b2cc72a1f8d248a8fa20bcc07ddfea4328ce6af8aaa330eb6cf1fa2d4a8" exitCode=0 Nov 25 08:54:18 crc kubenswrapper[5043]: I1125 08:54:18.247689 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgsxn" event={"ID":"8bbb5a98-b06a-4b2f-9bb4-6a347e19995a","Type":"ContainerDied","Data":"01124b2cc72a1f8d248a8fa20bcc07ddfea4328ce6af8aaa330eb6cf1fa2d4a8"} Nov 25 08:54:18 crc kubenswrapper[5043]: I1125 08:54:18.247713 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgsxn" event={"ID":"8bbb5a98-b06a-4b2f-9bb4-6a347e19995a","Type":"ContainerDied","Data":"6586fce08a2058620fc50d79f4b6693495284662380fd1ebce28d1e3153655bf"} Nov 25 08:54:18 crc kubenswrapper[5043]: I1125 08:54:18.247731 5043 scope.go:117] "RemoveContainer" containerID="01124b2cc72a1f8d248a8fa20bcc07ddfea4328ce6af8aaa330eb6cf1fa2d4a8" Nov 25 08:54:18 crc kubenswrapper[5043]: I1125 08:54:18.249091 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bgsxn" Nov 25 08:54:18 crc kubenswrapper[5043]: I1125 08:54:18.270053 5043 scope.go:117] "RemoveContainer" containerID="8954071a81ae7b6dce0876659b5595ac13d6440b52ee3512470ad5ef3ee05377" Nov 25 08:54:18 crc kubenswrapper[5043]: I1125 08:54:18.310802 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bgsxn"] Nov 25 08:54:18 crc kubenswrapper[5043]: I1125 08:54:18.323916 5043 scope.go:117] "RemoveContainer" containerID="6b743c41071b9f45ce34ae05b5b79cddf66442a26b244436b16fef938be6e6fe" Nov 25 08:54:18 crc kubenswrapper[5043]: I1125 08:54:18.324664 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bgsxn"] Nov 25 08:54:18 crc kubenswrapper[5043]: I1125 08:54:18.371345 5043 scope.go:117] "RemoveContainer" containerID="01124b2cc72a1f8d248a8fa20bcc07ddfea4328ce6af8aaa330eb6cf1fa2d4a8" Nov 25 08:54:18 crc kubenswrapper[5043]: E1125 08:54:18.371935 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01124b2cc72a1f8d248a8fa20bcc07ddfea4328ce6af8aaa330eb6cf1fa2d4a8\": container with ID starting with 01124b2cc72a1f8d248a8fa20bcc07ddfea4328ce6af8aaa330eb6cf1fa2d4a8 not found: ID does not exist" containerID="01124b2cc72a1f8d248a8fa20bcc07ddfea4328ce6af8aaa330eb6cf1fa2d4a8" Nov 25 08:54:18 crc kubenswrapper[5043]: I1125 08:54:18.372063 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01124b2cc72a1f8d248a8fa20bcc07ddfea4328ce6af8aaa330eb6cf1fa2d4a8"} err="failed to get container status \"01124b2cc72a1f8d248a8fa20bcc07ddfea4328ce6af8aaa330eb6cf1fa2d4a8\": rpc error: code = NotFound desc = could not find container \"01124b2cc72a1f8d248a8fa20bcc07ddfea4328ce6af8aaa330eb6cf1fa2d4a8\": container with ID starting with 01124b2cc72a1f8d248a8fa20bcc07ddfea4328ce6af8aaa330eb6cf1fa2d4a8 not 
found: ID does not exist" Nov 25 08:54:18 crc kubenswrapper[5043]: I1125 08:54:18.372178 5043 scope.go:117] "RemoveContainer" containerID="8954071a81ae7b6dce0876659b5595ac13d6440b52ee3512470ad5ef3ee05377" Nov 25 08:54:18 crc kubenswrapper[5043]: E1125 08:54:18.372516 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8954071a81ae7b6dce0876659b5595ac13d6440b52ee3512470ad5ef3ee05377\": container with ID starting with 8954071a81ae7b6dce0876659b5595ac13d6440b52ee3512470ad5ef3ee05377 not found: ID does not exist" containerID="8954071a81ae7b6dce0876659b5595ac13d6440b52ee3512470ad5ef3ee05377" Nov 25 08:54:18 crc kubenswrapper[5043]: I1125 08:54:18.372546 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8954071a81ae7b6dce0876659b5595ac13d6440b52ee3512470ad5ef3ee05377"} err="failed to get container status \"8954071a81ae7b6dce0876659b5595ac13d6440b52ee3512470ad5ef3ee05377\": rpc error: code = NotFound desc = could not find container \"8954071a81ae7b6dce0876659b5595ac13d6440b52ee3512470ad5ef3ee05377\": container with ID starting with 8954071a81ae7b6dce0876659b5595ac13d6440b52ee3512470ad5ef3ee05377 not found: ID does not exist" Nov 25 08:54:18 crc kubenswrapper[5043]: I1125 08:54:18.372564 5043 scope.go:117] "RemoveContainer" containerID="6b743c41071b9f45ce34ae05b5b79cddf66442a26b244436b16fef938be6e6fe" Nov 25 08:54:18 crc kubenswrapper[5043]: E1125 08:54:18.372768 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b743c41071b9f45ce34ae05b5b79cddf66442a26b244436b16fef938be6e6fe\": container with ID starting with 6b743c41071b9f45ce34ae05b5b79cddf66442a26b244436b16fef938be6e6fe not found: ID does not exist" containerID="6b743c41071b9f45ce34ae05b5b79cddf66442a26b244436b16fef938be6e6fe" Nov 25 08:54:18 crc kubenswrapper[5043]: I1125 08:54:18.372792 5043 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b743c41071b9f45ce34ae05b5b79cddf66442a26b244436b16fef938be6e6fe"} err="failed to get container status \"6b743c41071b9f45ce34ae05b5b79cddf66442a26b244436b16fef938be6e6fe\": rpc error: code = NotFound desc = could not find container \"6b743c41071b9f45ce34ae05b5b79cddf66442a26b244436b16fef938be6e6fe\": container with ID starting with 6b743c41071b9f45ce34ae05b5b79cddf66442a26b244436b16fef938be6e6fe not found: ID does not exist" Nov 25 08:54:18 crc kubenswrapper[5043]: I1125 08:54:18.979818 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bbb5a98-b06a-4b2f-9bb4-6a347e19995a" path="/var/lib/kubelet/pods/8bbb5a98-b06a-4b2f-9bb4-6a347e19995a/volumes" Nov 25 08:54:19 crc kubenswrapper[5043]: I1125 08:54:19.758061 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2wt55" Nov 25 08:54:19 crc kubenswrapper[5043]: I1125 08:54:19.811103 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2wt55" Nov 25 08:54:20 crc kubenswrapper[5043]: I1125 08:54:20.191184 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2wt55"] Nov 25 08:54:21 crc kubenswrapper[5043]: I1125 08:54:21.277091 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2wt55" podUID="053d4c47-49a2-4bc3-8e17-9d097da24fe8" containerName="registry-server" containerID="cri-o://3a72b4465836316331652e738beb0391797ac7ef6801578f962954fd22468cce" gracePeriod=2 Nov 25 08:54:21 crc kubenswrapper[5043]: I1125 08:54:21.876962 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2wt55" Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.045982 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l2h6\" (UniqueName: \"kubernetes.io/projected/053d4c47-49a2-4bc3-8e17-9d097da24fe8-kube-api-access-5l2h6\") pod \"053d4c47-49a2-4bc3-8e17-9d097da24fe8\" (UID: \"053d4c47-49a2-4bc3-8e17-9d097da24fe8\") " Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.046099 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/053d4c47-49a2-4bc3-8e17-9d097da24fe8-catalog-content\") pod \"053d4c47-49a2-4bc3-8e17-9d097da24fe8\" (UID: \"053d4c47-49a2-4bc3-8e17-9d097da24fe8\") " Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.046932 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/053d4c47-49a2-4bc3-8e17-9d097da24fe8-utilities\") pod \"053d4c47-49a2-4bc3-8e17-9d097da24fe8\" (UID: \"053d4c47-49a2-4bc3-8e17-9d097da24fe8\") " Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.048292 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/053d4c47-49a2-4bc3-8e17-9d097da24fe8-utilities" (OuterVolumeSpecName: "utilities") pod "053d4c47-49a2-4bc3-8e17-9d097da24fe8" (UID: "053d4c47-49a2-4bc3-8e17-9d097da24fe8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.052357 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/053d4c47-49a2-4bc3-8e17-9d097da24fe8-kube-api-access-5l2h6" (OuterVolumeSpecName: "kube-api-access-5l2h6") pod "053d4c47-49a2-4bc3-8e17-9d097da24fe8" (UID: "053d4c47-49a2-4bc3-8e17-9d097da24fe8"). InnerVolumeSpecName "kube-api-access-5l2h6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.141164 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/053d4c47-49a2-4bc3-8e17-9d097da24fe8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "053d4c47-49a2-4bc3-8e17-9d097da24fe8" (UID: "053d4c47-49a2-4bc3-8e17-9d097da24fe8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.150357 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l2h6\" (UniqueName: \"kubernetes.io/projected/053d4c47-49a2-4bc3-8e17-9d097da24fe8-kube-api-access-5l2h6\") on node \"crc\" DevicePath \"\"" Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.150447 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/053d4c47-49a2-4bc3-8e17-9d097da24fe8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.150459 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/053d4c47-49a2-4bc3-8e17-9d097da24fe8-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.288241 5043 generic.go:334] "Generic (PLEG): container finished" podID="053d4c47-49a2-4bc3-8e17-9d097da24fe8" containerID="3a72b4465836316331652e738beb0391797ac7ef6801578f962954fd22468cce" exitCode=0 Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.288281 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wt55" event={"ID":"053d4c47-49a2-4bc3-8e17-9d097da24fe8","Type":"ContainerDied","Data":"3a72b4465836316331652e738beb0391797ac7ef6801578f962954fd22468cce"} Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.288306 5043 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-2wt55" event={"ID":"053d4c47-49a2-4bc3-8e17-9d097da24fe8","Type":"ContainerDied","Data":"d6e9877e54b0ec9b54d36fd228ba5623b1e2fcc0cb326d624b35c30edbbbe9e1"} Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.288322 5043 scope.go:117] "RemoveContainer" containerID="3a72b4465836316331652e738beb0391797ac7ef6801578f962954fd22468cce" Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.288468 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2wt55" Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.322641 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2wt55"] Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.323340 5043 scope.go:117] "RemoveContainer" containerID="c4931d3fa48c8fafdbf6115a451a04c8c4558227bb2d83aaaf95438af73360a8" Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.342201 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2wt55"] Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.348487 5043 scope.go:117] "RemoveContainer" containerID="ea00e582769c6b71eee1db5078b1347438e6b21a262c46d8d9c8c18573667c74" Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.393595 5043 scope.go:117] "RemoveContainer" containerID="3a72b4465836316331652e738beb0391797ac7ef6801578f962954fd22468cce" Nov 25 08:54:22 crc kubenswrapper[5043]: E1125 08:54:22.394141 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a72b4465836316331652e738beb0391797ac7ef6801578f962954fd22468cce\": container with ID starting with 3a72b4465836316331652e738beb0391797ac7ef6801578f962954fd22468cce not found: ID does not exist" containerID="3a72b4465836316331652e738beb0391797ac7ef6801578f962954fd22468cce" Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.394177 5043 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a72b4465836316331652e738beb0391797ac7ef6801578f962954fd22468cce"} err="failed to get container status \"3a72b4465836316331652e738beb0391797ac7ef6801578f962954fd22468cce\": rpc error: code = NotFound desc = could not find container \"3a72b4465836316331652e738beb0391797ac7ef6801578f962954fd22468cce\": container with ID starting with 3a72b4465836316331652e738beb0391797ac7ef6801578f962954fd22468cce not found: ID does not exist" Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.394200 5043 scope.go:117] "RemoveContainer" containerID="c4931d3fa48c8fafdbf6115a451a04c8c4558227bb2d83aaaf95438af73360a8" Nov 25 08:54:22 crc kubenswrapper[5043]: E1125 08:54:22.394561 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4931d3fa48c8fafdbf6115a451a04c8c4558227bb2d83aaaf95438af73360a8\": container with ID starting with c4931d3fa48c8fafdbf6115a451a04c8c4558227bb2d83aaaf95438af73360a8 not found: ID does not exist" containerID="c4931d3fa48c8fafdbf6115a451a04c8c4558227bb2d83aaaf95438af73360a8" Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.394583 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4931d3fa48c8fafdbf6115a451a04c8c4558227bb2d83aaaf95438af73360a8"} err="failed to get container status \"c4931d3fa48c8fafdbf6115a451a04c8c4558227bb2d83aaaf95438af73360a8\": rpc error: code = NotFound desc = could not find container \"c4931d3fa48c8fafdbf6115a451a04c8c4558227bb2d83aaaf95438af73360a8\": container with ID starting with c4931d3fa48c8fafdbf6115a451a04c8c4558227bb2d83aaaf95438af73360a8 not found: ID does not exist" Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.394617 5043 scope.go:117] "RemoveContainer" containerID="ea00e582769c6b71eee1db5078b1347438e6b21a262c46d8d9c8c18573667c74" Nov 25 08:54:22 crc kubenswrapper[5043]: E1125 
08:54:22.394887 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea00e582769c6b71eee1db5078b1347438e6b21a262c46d8d9c8c18573667c74\": container with ID starting with ea00e582769c6b71eee1db5078b1347438e6b21a262c46d8d9c8c18573667c74 not found: ID does not exist" containerID="ea00e582769c6b71eee1db5078b1347438e6b21a262c46d8d9c8c18573667c74" Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.394908 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea00e582769c6b71eee1db5078b1347438e6b21a262c46d8d9c8c18573667c74"} err="failed to get container status \"ea00e582769c6b71eee1db5078b1347438e6b21a262c46d8d9c8c18573667c74\": rpc error: code = NotFound desc = could not find container \"ea00e582769c6b71eee1db5078b1347438e6b21a262c46d8d9c8c18573667c74\": container with ID starting with ea00e582769c6b71eee1db5078b1347438e6b21a262c46d8d9c8c18573667c74 not found: ID does not exist" Nov 25 08:54:22 crc kubenswrapper[5043]: I1125 08:54:22.975866 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="053d4c47-49a2-4bc3-8e17-9d097da24fe8" path="/var/lib/kubelet/pods/053d4c47-49a2-4bc3-8e17-9d097da24fe8/volumes" Nov 25 08:54:27 crc kubenswrapper[5043]: I1125 08:54:27.964080 5043 scope.go:117] "RemoveContainer" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 25 08:54:27 crc kubenswrapper[5043]: E1125 08:54:27.965236 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:54:40 crc kubenswrapper[5043]: I1125 08:54:40.962981 
5043 scope.go:117] "RemoveContainer" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 25 08:54:40 crc kubenswrapper[5043]: E1125 08:54:40.963971 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:54:52 crc kubenswrapper[5043]: I1125 08:54:52.963634 5043 scope.go:117] "RemoveContainer" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 25 08:54:52 crc kubenswrapper[5043]: E1125 08:54:52.965086 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:55:03 crc kubenswrapper[5043]: I1125 08:55:03.962643 5043 scope.go:117] "RemoveContainer" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 25 08:55:03 crc kubenswrapper[5043]: E1125 08:55:03.963564 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 08:55:18 crc kubenswrapper[5043]: I1125 
08:55:18.120570 5043 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-vl25g" podUID="d14cb4f9-dc65-4999-833a-475d3f735715" containerName="registry-server" probeResult="failure" output=< Nov 25 08:55:18 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s Nov 25 08:55:18 crc kubenswrapper[5043]: > Nov 25 08:55:18 crc kubenswrapper[5043]: I1125 08:55:18.963546 5043 scope.go:117] "RemoveContainer" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 25 08:55:19 crc kubenswrapper[5043]: I1125 08:55:19.730217 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"b91dcbdf58dc44451ef7983d0342c37bd2fb59647c4740555fd701bf9df87e89"} Nov 25 08:57:47 crc kubenswrapper[5043]: I1125 08:57:47.275770 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:57:47 crc kubenswrapper[5043]: I1125 08:57:47.276269 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:58:16 crc kubenswrapper[5043]: I1125 08:58:16.657871 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rcrj9"] Nov 25 08:58:16 crc kubenswrapper[5043]: E1125 08:58:16.659181 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="053d4c47-49a2-4bc3-8e17-9d097da24fe8" 
containerName="extract-content" Nov 25 08:58:16 crc kubenswrapper[5043]: I1125 08:58:16.659206 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="053d4c47-49a2-4bc3-8e17-9d097da24fe8" containerName="extract-content" Nov 25 08:58:16 crc kubenswrapper[5043]: E1125 08:58:16.659244 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bbb5a98-b06a-4b2f-9bb4-6a347e19995a" containerName="extract-content" Nov 25 08:58:16 crc kubenswrapper[5043]: I1125 08:58:16.659257 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bbb5a98-b06a-4b2f-9bb4-6a347e19995a" containerName="extract-content" Nov 25 08:58:16 crc kubenswrapper[5043]: E1125 08:58:16.659290 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bbb5a98-b06a-4b2f-9bb4-6a347e19995a" containerName="registry-server" Nov 25 08:58:16 crc kubenswrapper[5043]: I1125 08:58:16.659301 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bbb5a98-b06a-4b2f-9bb4-6a347e19995a" containerName="registry-server" Nov 25 08:58:16 crc kubenswrapper[5043]: E1125 08:58:16.659322 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="053d4c47-49a2-4bc3-8e17-9d097da24fe8" containerName="registry-server" Nov 25 08:58:16 crc kubenswrapper[5043]: I1125 08:58:16.659334 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="053d4c47-49a2-4bc3-8e17-9d097da24fe8" containerName="registry-server" Nov 25 08:58:16 crc kubenswrapper[5043]: E1125 08:58:16.659357 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bbb5a98-b06a-4b2f-9bb4-6a347e19995a" containerName="extract-utilities" Nov 25 08:58:16 crc kubenswrapper[5043]: I1125 08:58:16.659369 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bbb5a98-b06a-4b2f-9bb4-6a347e19995a" containerName="extract-utilities" Nov 25 08:58:16 crc kubenswrapper[5043]: E1125 08:58:16.659400 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="053d4c47-49a2-4bc3-8e17-9d097da24fe8" 
containerName="extract-utilities" Nov 25 08:58:16 crc kubenswrapper[5043]: I1125 08:58:16.659412 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="053d4c47-49a2-4bc3-8e17-9d097da24fe8" containerName="extract-utilities" Nov 25 08:58:16 crc kubenswrapper[5043]: I1125 08:58:16.659788 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bbb5a98-b06a-4b2f-9bb4-6a347e19995a" containerName="registry-server" Nov 25 08:58:16 crc kubenswrapper[5043]: I1125 08:58:16.659819 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="053d4c47-49a2-4bc3-8e17-9d097da24fe8" containerName="registry-server" Nov 25 08:58:16 crc kubenswrapper[5043]: I1125 08:58:16.664797 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rcrj9" Nov 25 08:58:16 crc kubenswrapper[5043]: I1125 08:58:16.676933 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rcrj9"] Nov 25 08:58:16 crc kubenswrapper[5043]: I1125 08:58:16.769860 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4a4385e-d131-4535-ad5b-cbb4c0458cc5-catalog-content\") pod \"redhat-marketplace-rcrj9\" (UID: \"d4a4385e-d131-4535-ad5b-cbb4c0458cc5\") " pod="openshift-marketplace/redhat-marketplace-rcrj9" Nov 25 08:58:16 crc kubenswrapper[5043]: I1125 08:58:16.770341 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5q59\" (UniqueName: \"kubernetes.io/projected/d4a4385e-d131-4535-ad5b-cbb4c0458cc5-kube-api-access-w5q59\") pod \"redhat-marketplace-rcrj9\" (UID: \"d4a4385e-d131-4535-ad5b-cbb4c0458cc5\") " pod="openshift-marketplace/redhat-marketplace-rcrj9" Nov 25 08:58:16 crc kubenswrapper[5043]: I1125 08:58:16.770469 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a4385e-d131-4535-ad5b-cbb4c0458cc5-utilities\") pod \"redhat-marketplace-rcrj9\" (UID: \"d4a4385e-d131-4535-ad5b-cbb4c0458cc5\") " pod="openshift-marketplace/redhat-marketplace-rcrj9" Nov 25 08:58:16 crc kubenswrapper[5043]: I1125 08:58:16.872634 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5q59\" (UniqueName: \"kubernetes.io/projected/d4a4385e-d131-4535-ad5b-cbb4c0458cc5-kube-api-access-w5q59\") pod \"redhat-marketplace-rcrj9\" (UID: \"d4a4385e-d131-4535-ad5b-cbb4c0458cc5\") " pod="openshift-marketplace/redhat-marketplace-rcrj9" Nov 25 08:58:16 crc kubenswrapper[5043]: I1125 08:58:16.872981 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a4385e-d131-4535-ad5b-cbb4c0458cc5-utilities\") pod \"redhat-marketplace-rcrj9\" (UID: \"d4a4385e-d131-4535-ad5b-cbb4c0458cc5\") " pod="openshift-marketplace/redhat-marketplace-rcrj9" Nov 25 08:58:16 crc kubenswrapper[5043]: I1125 08:58:16.873049 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4a4385e-d131-4535-ad5b-cbb4c0458cc5-catalog-content\") pod \"redhat-marketplace-rcrj9\" (UID: \"d4a4385e-d131-4535-ad5b-cbb4c0458cc5\") " pod="openshift-marketplace/redhat-marketplace-rcrj9" Nov 25 08:58:16 crc kubenswrapper[5043]: I1125 08:58:16.873757 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a4385e-d131-4535-ad5b-cbb4c0458cc5-utilities\") pod \"redhat-marketplace-rcrj9\" (UID: \"d4a4385e-d131-4535-ad5b-cbb4c0458cc5\") " pod="openshift-marketplace/redhat-marketplace-rcrj9" Nov 25 08:58:16 crc kubenswrapper[5043]: I1125 08:58:16.873785 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d4a4385e-d131-4535-ad5b-cbb4c0458cc5-catalog-content\") pod \"redhat-marketplace-rcrj9\" (UID: \"d4a4385e-d131-4535-ad5b-cbb4c0458cc5\") " pod="openshift-marketplace/redhat-marketplace-rcrj9" Nov 25 08:58:16 crc kubenswrapper[5043]: I1125 08:58:16.890535 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5q59\" (UniqueName: \"kubernetes.io/projected/d4a4385e-d131-4535-ad5b-cbb4c0458cc5-kube-api-access-w5q59\") pod \"redhat-marketplace-rcrj9\" (UID: \"d4a4385e-d131-4535-ad5b-cbb4c0458cc5\") " pod="openshift-marketplace/redhat-marketplace-rcrj9" Nov 25 08:58:17 crc kubenswrapper[5043]: I1125 08:58:17.094792 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rcrj9" Nov 25 08:58:17 crc kubenswrapper[5043]: I1125 08:58:17.276666 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:58:17 crc kubenswrapper[5043]: I1125 08:58:17.276839 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:58:17 crc kubenswrapper[5043]: I1125 08:58:17.588969 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rcrj9"] Nov 25 08:58:17 crc kubenswrapper[5043]: I1125 08:58:17.798790 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcrj9" 
event={"ID":"d4a4385e-d131-4535-ad5b-cbb4c0458cc5","Type":"ContainerStarted","Data":"5d2f2b41386737693591ed9f67a2dcc0b20d9522c9bc3898a646bcc91274ccac"} Nov 25 08:58:18 crc kubenswrapper[5043]: I1125 08:58:18.814860 5043 generic.go:334] "Generic (PLEG): container finished" podID="d4a4385e-d131-4535-ad5b-cbb4c0458cc5" containerID="15088e84ab3bb0f4983cea7589bacfb927d5fe1305e4d3c28d89ff1f6acbfcdf" exitCode=0 Nov 25 08:58:18 crc kubenswrapper[5043]: I1125 08:58:18.816342 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcrj9" event={"ID":"d4a4385e-d131-4535-ad5b-cbb4c0458cc5","Type":"ContainerDied","Data":"15088e84ab3bb0f4983cea7589bacfb927d5fe1305e4d3c28d89ff1f6acbfcdf"} Nov 25 08:58:18 crc kubenswrapper[5043]: I1125 08:58:18.819079 5043 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 08:58:19 crc kubenswrapper[5043]: I1125 08:58:19.827101 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcrj9" event={"ID":"d4a4385e-d131-4535-ad5b-cbb4c0458cc5","Type":"ContainerStarted","Data":"a2a7203de90618919c1175fb50f62dc4c5a250b8a060ee4a9914b1071d476c30"} Nov 25 08:58:20 crc kubenswrapper[5043]: I1125 08:58:20.839689 5043 generic.go:334] "Generic (PLEG): container finished" podID="d4a4385e-d131-4535-ad5b-cbb4c0458cc5" containerID="a2a7203de90618919c1175fb50f62dc4c5a250b8a060ee4a9914b1071d476c30" exitCode=0 Nov 25 08:58:20 crc kubenswrapper[5043]: I1125 08:58:20.839770 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcrj9" event={"ID":"d4a4385e-d131-4535-ad5b-cbb4c0458cc5","Type":"ContainerDied","Data":"a2a7203de90618919c1175fb50f62dc4c5a250b8a060ee4a9914b1071d476c30"} Nov 25 08:58:21 crc kubenswrapper[5043]: I1125 08:58:21.851899 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcrj9" 
event={"ID":"d4a4385e-d131-4535-ad5b-cbb4c0458cc5","Type":"ContainerStarted","Data":"126b49c651ab93b9ef294b9cc2683ee227ff8cb89313a66bd2291667c445532f"} Nov 25 08:58:21 crc kubenswrapper[5043]: I1125 08:58:21.874260 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rcrj9" podStartSLOduration=3.404394403 podStartE2EDuration="5.874227477s" podCreationTimestamp="2025-11-25 08:58:16 +0000 UTC" firstStartedPulling="2025-11-25 08:58:18.81868303 +0000 UTC m=+6162.986878791" lastFinishedPulling="2025-11-25 08:58:21.288516114 +0000 UTC m=+6165.456711865" observedRunningTime="2025-11-25 08:58:21.868653164 +0000 UTC m=+6166.036848895" watchObservedRunningTime="2025-11-25 08:58:21.874227477 +0000 UTC m=+6166.042423238" Nov 25 08:58:27 crc kubenswrapper[5043]: I1125 08:58:27.095755 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rcrj9" Nov 25 08:58:27 crc kubenswrapper[5043]: I1125 08:58:27.096388 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rcrj9" Nov 25 08:58:27 crc kubenswrapper[5043]: I1125 08:58:27.180076 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rcrj9" Nov 25 08:58:28 crc kubenswrapper[5043]: I1125 08:58:28.022555 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rcrj9" Nov 25 08:58:28 crc kubenswrapper[5043]: I1125 08:58:28.083543 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rcrj9"] Nov 25 08:58:29 crc kubenswrapper[5043]: I1125 08:58:29.955046 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rcrj9" podUID="d4a4385e-d131-4535-ad5b-cbb4c0458cc5" containerName="registry-server" 
containerID="cri-o://126b49c651ab93b9ef294b9cc2683ee227ff8cb89313a66bd2291667c445532f" gracePeriod=2 Nov 25 08:58:30 crc kubenswrapper[5043]: I1125 08:58:30.629277 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rcrj9" Nov 25 08:58:30 crc kubenswrapper[5043]: I1125 08:58:30.810286 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4a4385e-d131-4535-ad5b-cbb4c0458cc5-catalog-content\") pod \"d4a4385e-d131-4535-ad5b-cbb4c0458cc5\" (UID: \"d4a4385e-d131-4535-ad5b-cbb4c0458cc5\") " Nov 25 08:58:30 crc kubenswrapper[5043]: I1125 08:58:30.813862 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a4385e-d131-4535-ad5b-cbb4c0458cc5-utilities\") pod \"d4a4385e-d131-4535-ad5b-cbb4c0458cc5\" (UID: \"d4a4385e-d131-4535-ad5b-cbb4c0458cc5\") " Nov 25 08:58:30 crc kubenswrapper[5043]: I1125 08:58:30.813980 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5q59\" (UniqueName: \"kubernetes.io/projected/d4a4385e-d131-4535-ad5b-cbb4c0458cc5-kube-api-access-w5q59\") pod \"d4a4385e-d131-4535-ad5b-cbb4c0458cc5\" (UID: \"d4a4385e-d131-4535-ad5b-cbb4c0458cc5\") " Nov 25 08:58:30 crc kubenswrapper[5043]: I1125 08:58:30.815012 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4a4385e-d131-4535-ad5b-cbb4c0458cc5-utilities" (OuterVolumeSpecName: "utilities") pod "d4a4385e-d131-4535-ad5b-cbb4c0458cc5" (UID: "d4a4385e-d131-4535-ad5b-cbb4c0458cc5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:58:30 crc kubenswrapper[5043]: I1125 08:58:30.821772 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4a4385e-d131-4535-ad5b-cbb4c0458cc5-kube-api-access-w5q59" (OuterVolumeSpecName: "kube-api-access-w5q59") pod "d4a4385e-d131-4535-ad5b-cbb4c0458cc5" (UID: "d4a4385e-d131-4535-ad5b-cbb4c0458cc5"). InnerVolumeSpecName "kube-api-access-w5q59". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 08:58:30 crc kubenswrapper[5043]: I1125 08:58:30.830802 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4a4385e-d131-4535-ad5b-cbb4c0458cc5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4a4385e-d131-4535-ad5b-cbb4c0458cc5" (UID: "d4a4385e-d131-4535-ad5b-cbb4c0458cc5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 08:58:30 crc kubenswrapper[5043]: I1125 08:58:30.916515 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4a4385e-d131-4535-ad5b-cbb4c0458cc5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 08:58:30 crc kubenswrapper[5043]: I1125 08:58:30.916558 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a4385e-d131-4535-ad5b-cbb4c0458cc5-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 08:58:30 crc kubenswrapper[5043]: I1125 08:58:30.916568 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5q59\" (UniqueName: \"kubernetes.io/projected/d4a4385e-d131-4535-ad5b-cbb4c0458cc5-kube-api-access-w5q59\") on node \"crc\" DevicePath \"\"" Nov 25 08:58:30 crc kubenswrapper[5043]: I1125 08:58:30.967467 5043 generic.go:334] "Generic (PLEG): container finished" podID="d4a4385e-d131-4535-ad5b-cbb4c0458cc5" 
containerID="126b49c651ab93b9ef294b9cc2683ee227ff8cb89313a66bd2291667c445532f" exitCode=0 Nov 25 08:58:30 crc kubenswrapper[5043]: I1125 08:58:30.967566 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rcrj9" Nov 25 08:58:30 crc kubenswrapper[5043]: I1125 08:58:30.973661 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcrj9" event={"ID":"d4a4385e-d131-4535-ad5b-cbb4c0458cc5","Type":"ContainerDied","Data":"126b49c651ab93b9ef294b9cc2683ee227ff8cb89313a66bd2291667c445532f"} Nov 25 08:58:30 crc kubenswrapper[5043]: I1125 08:58:30.973701 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rcrj9" event={"ID":"d4a4385e-d131-4535-ad5b-cbb4c0458cc5","Type":"ContainerDied","Data":"5d2f2b41386737693591ed9f67a2dcc0b20d9522c9bc3898a646bcc91274ccac"} Nov 25 08:58:30 crc kubenswrapper[5043]: I1125 08:58:30.973718 5043 scope.go:117] "RemoveContainer" containerID="126b49c651ab93b9ef294b9cc2683ee227ff8cb89313a66bd2291667c445532f" Nov 25 08:58:31 crc kubenswrapper[5043]: I1125 08:58:31.008247 5043 scope.go:117] "RemoveContainer" containerID="a2a7203de90618919c1175fb50f62dc4c5a250b8a060ee4a9914b1071d476c30" Nov 25 08:58:31 crc kubenswrapper[5043]: I1125 08:58:31.010359 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rcrj9"] Nov 25 08:58:31 crc kubenswrapper[5043]: I1125 08:58:31.021049 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rcrj9"] Nov 25 08:58:31 crc kubenswrapper[5043]: I1125 08:58:31.050062 5043 scope.go:117] "RemoveContainer" containerID="15088e84ab3bb0f4983cea7589bacfb927d5fe1305e4d3c28d89ff1f6acbfcdf" Nov 25 08:58:31 crc kubenswrapper[5043]: I1125 08:58:31.092554 5043 scope.go:117] "RemoveContainer" containerID="126b49c651ab93b9ef294b9cc2683ee227ff8cb89313a66bd2291667c445532f" Nov 25 
08:58:31 crc kubenswrapper[5043]: E1125 08:58:31.093013 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"126b49c651ab93b9ef294b9cc2683ee227ff8cb89313a66bd2291667c445532f\": container with ID starting with 126b49c651ab93b9ef294b9cc2683ee227ff8cb89313a66bd2291667c445532f not found: ID does not exist" containerID="126b49c651ab93b9ef294b9cc2683ee227ff8cb89313a66bd2291667c445532f" Nov 25 08:58:31 crc kubenswrapper[5043]: I1125 08:58:31.093043 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"126b49c651ab93b9ef294b9cc2683ee227ff8cb89313a66bd2291667c445532f"} err="failed to get container status \"126b49c651ab93b9ef294b9cc2683ee227ff8cb89313a66bd2291667c445532f\": rpc error: code = NotFound desc = could not find container \"126b49c651ab93b9ef294b9cc2683ee227ff8cb89313a66bd2291667c445532f\": container with ID starting with 126b49c651ab93b9ef294b9cc2683ee227ff8cb89313a66bd2291667c445532f not found: ID does not exist" Nov 25 08:58:31 crc kubenswrapper[5043]: I1125 08:58:31.093064 5043 scope.go:117] "RemoveContainer" containerID="a2a7203de90618919c1175fb50f62dc4c5a250b8a060ee4a9914b1071d476c30" Nov 25 08:58:31 crc kubenswrapper[5043]: E1125 08:58:31.093334 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2a7203de90618919c1175fb50f62dc4c5a250b8a060ee4a9914b1071d476c30\": container with ID starting with a2a7203de90618919c1175fb50f62dc4c5a250b8a060ee4a9914b1071d476c30 not found: ID does not exist" containerID="a2a7203de90618919c1175fb50f62dc4c5a250b8a060ee4a9914b1071d476c30" Nov 25 08:58:31 crc kubenswrapper[5043]: I1125 08:58:31.093356 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2a7203de90618919c1175fb50f62dc4c5a250b8a060ee4a9914b1071d476c30"} err="failed to get container status 
\"a2a7203de90618919c1175fb50f62dc4c5a250b8a060ee4a9914b1071d476c30\": rpc error: code = NotFound desc = could not find container \"a2a7203de90618919c1175fb50f62dc4c5a250b8a060ee4a9914b1071d476c30\": container with ID starting with a2a7203de90618919c1175fb50f62dc4c5a250b8a060ee4a9914b1071d476c30 not found: ID does not exist" Nov 25 08:58:31 crc kubenswrapper[5043]: I1125 08:58:31.093370 5043 scope.go:117] "RemoveContainer" containerID="15088e84ab3bb0f4983cea7589bacfb927d5fe1305e4d3c28d89ff1f6acbfcdf" Nov 25 08:58:31 crc kubenswrapper[5043]: E1125 08:58:31.093800 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15088e84ab3bb0f4983cea7589bacfb927d5fe1305e4d3c28d89ff1f6acbfcdf\": container with ID starting with 15088e84ab3bb0f4983cea7589bacfb927d5fe1305e4d3c28d89ff1f6acbfcdf not found: ID does not exist" containerID="15088e84ab3bb0f4983cea7589bacfb927d5fe1305e4d3c28d89ff1f6acbfcdf" Nov 25 08:58:31 crc kubenswrapper[5043]: I1125 08:58:31.093833 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15088e84ab3bb0f4983cea7589bacfb927d5fe1305e4d3c28d89ff1f6acbfcdf"} err="failed to get container status \"15088e84ab3bb0f4983cea7589bacfb927d5fe1305e4d3c28d89ff1f6acbfcdf\": rpc error: code = NotFound desc = could not find container \"15088e84ab3bb0f4983cea7589bacfb927d5fe1305e4d3c28d89ff1f6acbfcdf\": container with ID starting with 15088e84ab3bb0f4983cea7589bacfb927d5fe1305e4d3c28d89ff1f6acbfcdf not found: ID does not exist" Nov 25 08:58:32 crc kubenswrapper[5043]: I1125 08:58:32.981315 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4a4385e-d131-4535-ad5b-cbb4c0458cc5" path="/var/lib/kubelet/pods/d4a4385e-d131-4535-ad5b-cbb4c0458cc5/volumes" Nov 25 08:58:47 crc kubenswrapper[5043]: I1125 08:58:47.276117 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 08:58:47 crc kubenswrapper[5043]: I1125 08:58:47.276802 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 08:58:47 crc kubenswrapper[5043]: I1125 08:58:47.276876 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 08:58:47 crc kubenswrapper[5043]: I1125 08:58:47.277957 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b91dcbdf58dc44451ef7983d0342c37bd2fb59647c4740555fd701bf9df87e89"} pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 08:58:47 crc kubenswrapper[5043]: I1125 08:58:47.278050 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://b91dcbdf58dc44451ef7983d0342c37bd2fb59647c4740555fd701bf9df87e89" gracePeriod=600 Nov 25 08:58:48 crc kubenswrapper[5043]: I1125 08:58:48.165575 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="b91dcbdf58dc44451ef7983d0342c37bd2fb59647c4740555fd701bf9df87e89" exitCode=0 Nov 25 08:58:48 crc kubenswrapper[5043]: I1125 08:58:48.165686 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"b91dcbdf58dc44451ef7983d0342c37bd2fb59647c4740555fd701bf9df87e89"} Nov 25 08:58:48 crc kubenswrapper[5043]: I1125 08:58:48.165859 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be"} Nov 25 08:58:48 crc kubenswrapper[5043]: I1125 08:58:48.165882 5043 scope.go:117] "RemoveContainer" containerID="582af160f3cfcfe8a80184d178cf2a2a73851d3c9b24cb025c09601c3046d162" Nov 25 09:00:00 crc kubenswrapper[5043]: I1125 09:00:00.195779 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401020-92nb7"] Nov 25 09:00:00 crc kubenswrapper[5043]: E1125 09:00:00.197013 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a4385e-d131-4535-ad5b-cbb4c0458cc5" containerName="extract-content" Nov 25 09:00:00 crc kubenswrapper[5043]: I1125 09:00:00.197035 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a4385e-d131-4535-ad5b-cbb4c0458cc5" containerName="extract-content" Nov 25 09:00:00 crc kubenswrapper[5043]: E1125 09:00:00.197088 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a4385e-d131-4535-ad5b-cbb4c0458cc5" containerName="registry-server" Nov 25 09:00:00 crc kubenswrapper[5043]: I1125 09:00:00.197097 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a4385e-d131-4535-ad5b-cbb4c0458cc5" containerName="registry-server" Nov 25 09:00:00 crc kubenswrapper[5043]: E1125 09:00:00.197114 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a4385e-d131-4535-ad5b-cbb4c0458cc5" containerName="extract-utilities" Nov 25 09:00:00 crc kubenswrapper[5043]: I1125 09:00:00.197121 5043 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a4385e-d131-4535-ad5b-cbb4c0458cc5" containerName="extract-utilities" Nov 25 09:00:00 crc kubenswrapper[5043]: I1125 09:00:00.197453 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4a4385e-d131-4535-ad5b-cbb4c0458cc5" containerName="registry-server" Nov 25 09:00:00 crc kubenswrapper[5043]: I1125 09:00:00.198515 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-92nb7" Nov 25 09:00:00 crc kubenswrapper[5043]: I1125 09:00:00.202575 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 09:00:00 crc kubenswrapper[5043]: I1125 09:00:00.203629 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 09:00:00 crc kubenswrapper[5043]: I1125 09:00:00.211951 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401020-92nb7"] Nov 25 09:00:00 crc kubenswrapper[5043]: I1125 09:00:00.389997 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rh28\" (UniqueName: \"kubernetes.io/projected/a11ac6f6-3366-4333-98e5-dfcc01bb7ab9-kube-api-access-6rh28\") pod \"collect-profiles-29401020-92nb7\" (UID: \"a11ac6f6-3366-4333-98e5-dfcc01bb7ab9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-92nb7" Nov 25 09:00:00 crc kubenswrapper[5043]: I1125 09:00:00.390190 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a11ac6f6-3366-4333-98e5-dfcc01bb7ab9-secret-volume\") pod \"collect-profiles-29401020-92nb7\" (UID: \"a11ac6f6-3366-4333-98e5-dfcc01bb7ab9\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-92nb7" Nov 25 09:00:00 crc kubenswrapper[5043]: I1125 09:00:00.390237 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a11ac6f6-3366-4333-98e5-dfcc01bb7ab9-config-volume\") pod \"collect-profiles-29401020-92nb7\" (UID: \"a11ac6f6-3366-4333-98e5-dfcc01bb7ab9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-92nb7" Nov 25 09:00:00 crc kubenswrapper[5043]: I1125 09:00:00.499463 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a11ac6f6-3366-4333-98e5-dfcc01bb7ab9-secret-volume\") pod \"collect-profiles-29401020-92nb7\" (UID: \"a11ac6f6-3366-4333-98e5-dfcc01bb7ab9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-92nb7" Nov 25 09:00:00 crc kubenswrapper[5043]: I1125 09:00:00.499589 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a11ac6f6-3366-4333-98e5-dfcc01bb7ab9-config-volume\") pod \"collect-profiles-29401020-92nb7\" (UID: \"a11ac6f6-3366-4333-98e5-dfcc01bb7ab9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-92nb7" Nov 25 09:00:00 crc kubenswrapper[5043]: I1125 09:00:00.499825 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rh28\" (UniqueName: \"kubernetes.io/projected/a11ac6f6-3366-4333-98e5-dfcc01bb7ab9-kube-api-access-6rh28\") pod \"collect-profiles-29401020-92nb7\" (UID: \"a11ac6f6-3366-4333-98e5-dfcc01bb7ab9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-92nb7" Nov 25 09:00:00 crc kubenswrapper[5043]: I1125 09:00:00.500743 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/a11ac6f6-3366-4333-98e5-dfcc01bb7ab9-config-volume\") pod \"collect-profiles-29401020-92nb7\" (UID: \"a11ac6f6-3366-4333-98e5-dfcc01bb7ab9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-92nb7" Nov 25 09:00:00 crc kubenswrapper[5043]: I1125 09:00:00.515507 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a11ac6f6-3366-4333-98e5-dfcc01bb7ab9-secret-volume\") pod \"collect-profiles-29401020-92nb7\" (UID: \"a11ac6f6-3366-4333-98e5-dfcc01bb7ab9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-92nb7" Nov 25 09:00:00 crc kubenswrapper[5043]: I1125 09:00:00.518255 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rh28\" (UniqueName: \"kubernetes.io/projected/a11ac6f6-3366-4333-98e5-dfcc01bb7ab9-kube-api-access-6rh28\") pod \"collect-profiles-29401020-92nb7\" (UID: \"a11ac6f6-3366-4333-98e5-dfcc01bb7ab9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-92nb7" Nov 25 09:00:00 crc kubenswrapper[5043]: I1125 09:00:00.525306 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-92nb7" Nov 25 09:00:01 crc kubenswrapper[5043]: I1125 09:00:01.008131 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401020-92nb7"] Nov 25 09:00:01 crc kubenswrapper[5043]: I1125 09:00:01.999416 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-92nb7" event={"ID":"a11ac6f6-3366-4333-98e5-dfcc01bb7ab9","Type":"ContainerStarted","Data":"26524a591e6928960443e301eab43d6a0419102493d028a0a3bbe91cce5cbc3a"} Nov 25 09:00:01 crc kubenswrapper[5043]: I1125 09:00:01.999698 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-92nb7" event={"ID":"a11ac6f6-3366-4333-98e5-dfcc01bb7ab9","Type":"ContainerStarted","Data":"8996b0e7b76ff3f5de503fd76df43faacb5a9686ef06259c777feb5ba6671319"} Nov 25 09:00:02 crc kubenswrapper[5043]: I1125 09:00:02.027202 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-92nb7" podStartSLOduration=2.027173117 podStartE2EDuration="2.027173117s" podCreationTimestamp="2025-11-25 09:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 09:00:02.02147062 +0000 UTC m=+6266.189666361" watchObservedRunningTime="2025-11-25 09:00:02.027173117 +0000 UTC m=+6266.195368878" Nov 25 09:00:03 crc kubenswrapper[5043]: I1125 09:00:03.013427 5043 generic.go:334] "Generic (PLEG): container finished" podID="a11ac6f6-3366-4333-98e5-dfcc01bb7ab9" containerID="26524a591e6928960443e301eab43d6a0419102493d028a0a3bbe91cce5cbc3a" exitCode=0 Nov 25 09:00:03 crc kubenswrapper[5043]: I1125 09:00:03.013650 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-92nb7" event={"ID":"a11ac6f6-3366-4333-98e5-dfcc01bb7ab9","Type":"ContainerDied","Data":"26524a591e6928960443e301eab43d6a0419102493d028a0a3bbe91cce5cbc3a"} Nov 25 09:00:04 crc kubenswrapper[5043]: I1125 09:00:04.513983 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-92nb7" Nov 25 09:00:04 crc kubenswrapper[5043]: I1125 09:00:04.691807 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a11ac6f6-3366-4333-98e5-dfcc01bb7ab9-config-volume\") pod \"a11ac6f6-3366-4333-98e5-dfcc01bb7ab9\" (UID: \"a11ac6f6-3366-4333-98e5-dfcc01bb7ab9\") " Nov 25 09:00:04 crc kubenswrapper[5043]: I1125 09:00:04.691878 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a11ac6f6-3366-4333-98e5-dfcc01bb7ab9-secret-volume\") pod \"a11ac6f6-3366-4333-98e5-dfcc01bb7ab9\" (UID: \"a11ac6f6-3366-4333-98e5-dfcc01bb7ab9\") " Nov 25 09:00:04 crc kubenswrapper[5043]: I1125 09:00:04.692105 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rh28\" (UniqueName: \"kubernetes.io/projected/a11ac6f6-3366-4333-98e5-dfcc01bb7ab9-kube-api-access-6rh28\") pod \"a11ac6f6-3366-4333-98e5-dfcc01bb7ab9\" (UID: \"a11ac6f6-3366-4333-98e5-dfcc01bb7ab9\") " Nov 25 09:00:04 crc kubenswrapper[5043]: I1125 09:00:04.698060 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a11ac6f6-3366-4333-98e5-dfcc01bb7ab9-kube-api-access-6rh28" (OuterVolumeSpecName: "kube-api-access-6rh28") pod "a11ac6f6-3366-4333-98e5-dfcc01bb7ab9" (UID: "a11ac6f6-3366-4333-98e5-dfcc01bb7ab9"). InnerVolumeSpecName "kube-api-access-6rh28". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:00:04 crc kubenswrapper[5043]: I1125 09:00:04.745886 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a11ac6f6-3366-4333-98e5-dfcc01bb7ab9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a11ac6f6-3366-4333-98e5-dfcc01bb7ab9" (UID: "a11ac6f6-3366-4333-98e5-dfcc01bb7ab9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:00:04 crc kubenswrapper[5043]: I1125 09:00:04.748060 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a11ac6f6-3366-4333-98e5-dfcc01bb7ab9-config-volume" (OuterVolumeSpecName: "config-volume") pod "a11ac6f6-3366-4333-98e5-dfcc01bb7ab9" (UID: "a11ac6f6-3366-4333-98e5-dfcc01bb7ab9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:00:04 crc kubenswrapper[5043]: I1125 09:00:04.794757 5043 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a11ac6f6-3366-4333-98e5-dfcc01bb7ab9-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:04 crc kubenswrapper[5043]: I1125 09:00:04.794792 5043 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a11ac6f6-3366-4333-98e5-dfcc01bb7ab9-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:04 crc kubenswrapper[5043]: I1125 09:00:04.794805 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rh28\" (UniqueName: \"kubernetes.io/projected/a11ac6f6-3366-4333-98e5-dfcc01bb7ab9-kube-api-access-6rh28\") on node \"crc\" DevicePath \"\"" Nov 25 09:00:05 crc kubenswrapper[5043]: I1125 09:00:05.040138 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-92nb7" 
event={"ID":"a11ac6f6-3366-4333-98e5-dfcc01bb7ab9","Type":"ContainerDied","Data":"8996b0e7b76ff3f5de503fd76df43faacb5a9686ef06259c777feb5ba6671319"} Nov 25 09:00:05 crc kubenswrapper[5043]: I1125 09:00:05.040215 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8996b0e7b76ff3f5de503fd76df43faacb5a9686ef06259c777feb5ba6671319" Nov 25 09:00:05 crc kubenswrapper[5043]: I1125 09:00:05.040221 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401020-92nb7" Nov 25 09:00:05 crc kubenswrapper[5043]: I1125 09:00:05.106954 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400975-g6rws"] Nov 25 09:00:05 crc kubenswrapper[5043]: I1125 09:00:05.117895 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400975-g6rws"] Nov 25 09:00:06 crc kubenswrapper[5043]: I1125 09:00:06.982426 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eecb0dbf-d941-45ee-8783-ba3ce9b2e32e" path="/var/lib/kubelet/pods/eecb0dbf-d941-45ee-8783-ba3ce9b2e32e/volumes" Nov 25 09:00:47 crc kubenswrapper[5043]: I1125 09:00:47.277297 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:00:47 crc kubenswrapper[5043]: I1125 09:00:47.277913 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:00:54 crc 
kubenswrapper[5043]: I1125 09:00:54.669767 5043 scope.go:117] "RemoveContainer" containerID="64f0a6c083cd5115f9121daab497cb058bcc4deddd3b71a6459c15004d14517d" Nov 25 09:01:00 crc kubenswrapper[5043]: I1125 09:01:00.169498 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29401021-96f9v"] Nov 25 09:01:00 crc kubenswrapper[5043]: E1125 09:01:00.170792 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a11ac6f6-3366-4333-98e5-dfcc01bb7ab9" containerName="collect-profiles" Nov 25 09:01:00 crc kubenswrapper[5043]: I1125 09:01:00.170809 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="a11ac6f6-3366-4333-98e5-dfcc01bb7ab9" containerName="collect-profiles" Nov 25 09:01:00 crc kubenswrapper[5043]: I1125 09:01:00.171070 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="a11ac6f6-3366-4333-98e5-dfcc01bb7ab9" containerName="collect-profiles" Nov 25 09:01:00 crc kubenswrapper[5043]: I1125 09:01:00.171845 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401021-96f9v" Nov 25 09:01:00 crc kubenswrapper[5043]: I1125 09:01:00.235973 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea3e940a-0a27-41f2-8629-e54ee65e138d-combined-ca-bundle\") pod \"keystone-cron-29401021-96f9v\" (UID: \"ea3e940a-0a27-41f2-8629-e54ee65e138d\") " pod="openstack/keystone-cron-29401021-96f9v" Nov 25 09:01:00 crc kubenswrapper[5043]: I1125 09:01:00.236081 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea3e940a-0a27-41f2-8629-e54ee65e138d-fernet-keys\") pod \"keystone-cron-29401021-96f9v\" (UID: \"ea3e940a-0a27-41f2-8629-e54ee65e138d\") " pod="openstack/keystone-cron-29401021-96f9v" Nov 25 09:01:00 crc kubenswrapper[5043]: I1125 09:01:00.236120 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrg9f\" (UniqueName: \"kubernetes.io/projected/ea3e940a-0a27-41f2-8629-e54ee65e138d-kube-api-access-wrg9f\") pod \"keystone-cron-29401021-96f9v\" (UID: \"ea3e940a-0a27-41f2-8629-e54ee65e138d\") " pod="openstack/keystone-cron-29401021-96f9v" Nov 25 09:01:00 crc kubenswrapper[5043]: I1125 09:01:00.236493 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea3e940a-0a27-41f2-8629-e54ee65e138d-config-data\") pod \"keystone-cron-29401021-96f9v\" (UID: \"ea3e940a-0a27-41f2-8629-e54ee65e138d\") " pod="openstack/keystone-cron-29401021-96f9v" Nov 25 09:01:00 crc kubenswrapper[5043]: I1125 09:01:00.247677 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29401021-96f9v"] Nov 25 09:01:00 crc kubenswrapper[5043]: I1125 09:01:00.337927 5043 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-wrg9f\" (UniqueName: \"kubernetes.io/projected/ea3e940a-0a27-41f2-8629-e54ee65e138d-kube-api-access-wrg9f\") pod \"keystone-cron-29401021-96f9v\" (UID: \"ea3e940a-0a27-41f2-8629-e54ee65e138d\") " pod="openstack/keystone-cron-29401021-96f9v" Nov 25 09:01:00 crc kubenswrapper[5043]: I1125 09:01:00.338086 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea3e940a-0a27-41f2-8629-e54ee65e138d-config-data\") pod \"keystone-cron-29401021-96f9v\" (UID: \"ea3e940a-0a27-41f2-8629-e54ee65e138d\") " pod="openstack/keystone-cron-29401021-96f9v" Nov 25 09:01:00 crc kubenswrapper[5043]: I1125 09:01:00.338144 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea3e940a-0a27-41f2-8629-e54ee65e138d-combined-ca-bundle\") pod \"keystone-cron-29401021-96f9v\" (UID: \"ea3e940a-0a27-41f2-8629-e54ee65e138d\") " pod="openstack/keystone-cron-29401021-96f9v" Nov 25 09:01:00 crc kubenswrapper[5043]: I1125 09:01:00.338173 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea3e940a-0a27-41f2-8629-e54ee65e138d-fernet-keys\") pod \"keystone-cron-29401021-96f9v\" (UID: \"ea3e940a-0a27-41f2-8629-e54ee65e138d\") " pod="openstack/keystone-cron-29401021-96f9v" Nov 25 09:01:00 crc kubenswrapper[5043]: I1125 09:01:00.344916 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea3e940a-0a27-41f2-8629-e54ee65e138d-fernet-keys\") pod \"keystone-cron-29401021-96f9v\" (UID: \"ea3e940a-0a27-41f2-8629-e54ee65e138d\") " pod="openstack/keystone-cron-29401021-96f9v" Nov 25 09:01:00 crc kubenswrapper[5043]: I1125 09:01:00.344927 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ea3e940a-0a27-41f2-8629-e54ee65e138d-config-data\") pod \"keystone-cron-29401021-96f9v\" (UID: \"ea3e940a-0a27-41f2-8629-e54ee65e138d\") " pod="openstack/keystone-cron-29401021-96f9v" Nov 25 09:01:00 crc kubenswrapper[5043]: I1125 09:01:00.347414 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea3e940a-0a27-41f2-8629-e54ee65e138d-combined-ca-bundle\") pod \"keystone-cron-29401021-96f9v\" (UID: \"ea3e940a-0a27-41f2-8629-e54ee65e138d\") " pod="openstack/keystone-cron-29401021-96f9v" Nov 25 09:01:00 crc kubenswrapper[5043]: I1125 09:01:00.364545 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrg9f\" (UniqueName: \"kubernetes.io/projected/ea3e940a-0a27-41f2-8629-e54ee65e138d-kube-api-access-wrg9f\") pod \"keystone-cron-29401021-96f9v\" (UID: \"ea3e940a-0a27-41f2-8629-e54ee65e138d\") " pod="openstack/keystone-cron-29401021-96f9v" Nov 25 09:01:00 crc kubenswrapper[5043]: I1125 09:01:00.501523 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401021-96f9v" Nov 25 09:01:01 crc kubenswrapper[5043]: I1125 09:01:01.006910 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29401021-96f9v"] Nov 25 09:01:01 crc kubenswrapper[5043]: I1125 09:01:01.655975 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401021-96f9v" event={"ID":"ea3e940a-0a27-41f2-8629-e54ee65e138d","Type":"ContainerStarted","Data":"324903ebc4528464de404e51d02613833243e4ee456b5b944d7ce58342580730"} Nov 25 09:01:01 crc kubenswrapper[5043]: I1125 09:01:01.656435 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401021-96f9v" event={"ID":"ea3e940a-0a27-41f2-8629-e54ee65e138d","Type":"ContainerStarted","Data":"918b1818b7d62387a7599367d8760c432d0360f1eb072fbf20c50330449f44ea"} Nov 25 09:01:01 crc kubenswrapper[5043]: I1125 09:01:01.675923 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29401021-96f9v" podStartSLOduration=1.6759006410000001 podStartE2EDuration="1.675900641s" podCreationTimestamp="2025-11-25 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 09:01:01.675190269 +0000 UTC m=+6325.843386030" watchObservedRunningTime="2025-11-25 09:01:01.675900641 +0000 UTC m=+6325.844096402" Nov 25 09:01:11 crc kubenswrapper[5043]: I1125 09:01:11.796055 5043 generic.go:334] "Generic (PLEG): container finished" podID="ea3e940a-0a27-41f2-8629-e54ee65e138d" containerID="324903ebc4528464de404e51d02613833243e4ee456b5b944d7ce58342580730" exitCode=0 Nov 25 09:01:11 crc kubenswrapper[5043]: I1125 09:01:11.796152 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401021-96f9v" 
event={"ID":"ea3e940a-0a27-41f2-8629-e54ee65e138d","Type":"ContainerDied","Data":"324903ebc4528464de404e51d02613833243e4ee456b5b944d7ce58342580730"} Nov 25 09:01:13 crc kubenswrapper[5043]: I1125 09:01:13.339841 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29401021-96f9v" Nov 25 09:01:13 crc kubenswrapper[5043]: I1125 09:01:13.524033 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea3e940a-0a27-41f2-8629-e54ee65e138d-combined-ca-bundle\") pod \"ea3e940a-0a27-41f2-8629-e54ee65e138d\" (UID: \"ea3e940a-0a27-41f2-8629-e54ee65e138d\") " Nov 25 09:01:13 crc kubenswrapper[5043]: I1125 09:01:13.524172 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrg9f\" (UniqueName: \"kubernetes.io/projected/ea3e940a-0a27-41f2-8629-e54ee65e138d-kube-api-access-wrg9f\") pod \"ea3e940a-0a27-41f2-8629-e54ee65e138d\" (UID: \"ea3e940a-0a27-41f2-8629-e54ee65e138d\") " Nov 25 09:01:13 crc kubenswrapper[5043]: I1125 09:01:13.524255 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea3e940a-0a27-41f2-8629-e54ee65e138d-fernet-keys\") pod \"ea3e940a-0a27-41f2-8629-e54ee65e138d\" (UID: \"ea3e940a-0a27-41f2-8629-e54ee65e138d\") " Nov 25 09:01:13 crc kubenswrapper[5043]: I1125 09:01:13.524400 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea3e940a-0a27-41f2-8629-e54ee65e138d-config-data\") pod \"ea3e940a-0a27-41f2-8629-e54ee65e138d\" (UID: \"ea3e940a-0a27-41f2-8629-e54ee65e138d\") " Nov 25 09:01:13 crc kubenswrapper[5043]: I1125 09:01:13.530659 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea3e940a-0a27-41f2-8629-e54ee65e138d-fernet-keys" (OuterVolumeSpecName: 
"fernet-keys") pod "ea3e940a-0a27-41f2-8629-e54ee65e138d" (UID: "ea3e940a-0a27-41f2-8629-e54ee65e138d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:01:13 crc kubenswrapper[5043]: I1125 09:01:13.530690 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3e940a-0a27-41f2-8629-e54ee65e138d-kube-api-access-wrg9f" (OuterVolumeSpecName: "kube-api-access-wrg9f") pod "ea3e940a-0a27-41f2-8629-e54ee65e138d" (UID: "ea3e940a-0a27-41f2-8629-e54ee65e138d"). InnerVolumeSpecName "kube-api-access-wrg9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:01:13 crc kubenswrapper[5043]: I1125 09:01:13.554730 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea3e940a-0a27-41f2-8629-e54ee65e138d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea3e940a-0a27-41f2-8629-e54ee65e138d" (UID: "ea3e940a-0a27-41f2-8629-e54ee65e138d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:01:13 crc kubenswrapper[5043]: I1125 09:01:13.592051 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea3e940a-0a27-41f2-8629-e54ee65e138d-config-data" (OuterVolumeSpecName: "config-data") pod "ea3e940a-0a27-41f2-8629-e54ee65e138d" (UID: "ea3e940a-0a27-41f2-8629-e54ee65e138d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:01:13 crc kubenswrapper[5043]: I1125 09:01:13.626873 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrg9f\" (UniqueName: \"kubernetes.io/projected/ea3e940a-0a27-41f2-8629-e54ee65e138d-kube-api-access-wrg9f\") on node \"crc\" DevicePath \"\"" Nov 25 09:01:13 crc kubenswrapper[5043]: I1125 09:01:13.626905 5043 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea3e940a-0a27-41f2-8629-e54ee65e138d-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 09:01:13 crc kubenswrapper[5043]: I1125 09:01:13.626915 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea3e940a-0a27-41f2-8629-e54ee65e138d-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 09:01:13 crc kubenswrapper[5043]: I1125 09:01:13.626925 5043 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea3e940a-0a27-41f2-8629-e54ee65e138d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 09:01:13 crc kubenswrapper[5043]: I1125 09:01:13.819770 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401021-96f9v" event={"ID":"ea3e940a-0a27-41f2-8629-e54ee65e138d","Type":"ContainerDied","Data":"918b1818b7d62387a7599367d8760c432d0360f1eb072fbf20c50330449f44ea"} Nov 25 09:01:13 crc kubenswrapper[5043]: I1125 09:01:13.819818 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401021-96f9v" Nov 25 09:01:13 crc kubenswrapper[5043]: I1125 09:01:13.819824 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="918b1818b7d62387a7599367d8760c432d0360f1eb072fbf20c50330449f44ea" Nov 25 09:01:17 crc kubenswrapper[5043]: I1125 09:01:17.276832 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:01:17 crc kubenswrapper[5043]: I1125 09:01:17.277380 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:01:47 crc kubenswrapper[5043]: I1125 09:01:47.277785 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:01:47 crc kubenswrapper[5043]: I1125 09:01:47.278693 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:01:47 crc kubenswrapper[5043]: I1125 09:01:47.278910 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 
09:01:47 crc kubenswrapper[5043]: I1125 09:01:47.280325 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be"} pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 09:01:47 crc kubenswrapper[5043]: I1125 09:01:47.280476 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" gracePeriod=600 Nov 25 09:01:47 crc kubenswrapper[5043]: E1125 09:01:47.406176 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:01:48 crc kubenswrapper[5043]: I1125 09:01:48.176626 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" exitCode=0 Nov 25 09:01:48 crc kubenswrapper[5043]: I1125 09:01:48.176669 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be"} Nov 25 09:01:48 crc kubenswrapper[5043]: I1125 09:01:48.176714 5043 scope.go:117] 
"RemoveContainer" containerID="b91dcbdf58dc44451ef7983d0342c37bd2fb59647c4740555fd701bf9df87e89" Nov 25 09:01:48 crc kubenswrapper[5043]: I1125 09:01:48.177778 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:01:48 crc kubenswrapper[5043]: E1125 09:01:48.178315 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:02:01 crc kubenswrapper[5043]: I1125 09:02:01.963860 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:02:01 crc kubenswrapper[5043]: E1125 09:02:01.965004 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:02:13 crc kubenswrapper[5043]: I1125 09:02:13.963195 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:02:13 crc kubenswrapper[5043]: E1125 09:02:13.964277 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:02:27 crc kubenswrapper[5043]: I1125 09:02:27.963653 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:02:27 crc kubenswrapper[5043]: E1125 09:02:27.964478 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:02:41 crc kubenswrapper[5043]: I1125 09:02:41.963273 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:02:41 crc kubenswrapper[5043]: E1125 09:02:41.964162 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:02:56 crc kubenswrapper[5043]: I1125 09:02:56.976377 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:02:56 crc kubenswrapper[5043]: E1125 09:02:56.978155 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:03:05 crc kubenswrapper[5043]: I1125 09:03:05.723423 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bn6tl"] Nov 25 09:03:05 crc kubenswrapper[5043]: E1125 09:03:05.724421 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3e940a-0a27-41f2-8629-e54ee65e138d" containerName="keystone-cron" Nov 25 09:03:05 crc kubenswrapper[5043]: I1125 09:03:05.724434 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3e940a-0a27-41f2-8629-e54ee65e138d" containerName="keystone-cron" Nov 25 09:03:05 crc kubenswrapper[5043]: I1125 09:03:05.724629 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3e940a-0a27-41f2-8629-e54ee65e138d" containerName="keystone-cron" Nov 25 09:03:05 crc kubenswrapper[5043]: I1125 09:03:05.726157 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bn6tl" Nov 25 09:03:05 crc kubenswrapper[5043]: I1125 09:03:05.737939 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bn6tl"] Nov 25 09:03:05 crc kubenswrapper[5043]: I1125 09:03:05.808654 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0b8135e-a7a5-462c-9ba8-0d36b6778807-utilities\") pod \"certified-operators-bn6tl\" (UID: \"e0b8135e-a7a5-462c-9ba8-0d36b6778807\") " pod="openshift-marketplace/certified-operators-bn6tl" Nov 25 09:03:05 crc kubenswrapper[5043]: I1125 09:03:05.809232 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c7ls\" (UniqueName: \"kubernetes.io/projected/e0b8135e-a7a5-462c-9ba8-0d36b6778807-kube-api-access-7c7ls\") pod \"certified-operators-bn6tl\" (UID: \"e0b8135e-a7a5-462c-9ba8-0d36b6778807\") " pod="openshift-marketplace/certified-operators-bn6tl" Nov 25 09:03:05 crc kubenswrapper[5043]: I1125 09:03:05.809419 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0b8135e-a7a5-462c-9ba8-0d36b6778807-catalog-content\") pod \"certified-operators-bn6tl\" (UID: \"e0b8135e-a7a5-462c-9ba8-0d36b6778807\") " pod="openshift-marketplace/certified-operators-bn6tl" Nov 25 09:03:05 crc kubenswrapper[5043]: I1125 09:03:05.919820 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0b8135e-a7a5-462c-9ba8-0d36b6778807-utilities\") pod \"certified-operators-bn6tl\" (UID: \"e0b8135e-a7a5-462c-9ba8-0d36b6778807\") " pod="openshift-marketplace/certified-operators-bn6tl" Nov 25 09:03:05 crc kubenswrapper[5043]: I1125 09:03:05.920033 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7c7ls\" (UniqueName: \"kubernetes.io/projected/e0b8135e-a7a5-462c-9ba8-0d36b6778807-kube-api-access-7c7ls\") pod \"certified-operators-bn6tl\" (UID: \"e0b8135e-a7a5-462c-9ba8-0d36b6778807\") " pod="openshift-marketplace/certified-operators-bn6tl" Nov 25 09:03:05 crc kubenswrapper[5043]: I1125 09:03:05.920114 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0b8135e-a7a5-462c-9ba8-0d36b6778807-catalog-content\") pod \"certified-operators-bn6tl\" (UID: \"e0b8135e-a7a5-462c-9ba8-0d36b6778807\") " pod="openshift-marketplace/certified-operators-bn6tl" Nov 25 09:03:05 crc kubenswrapper[5043]: I1125 09:03:05.920520 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0b8135e-a7a5-462c-9ba8-0d36b6778807-utilities\") pod \"certified-operators-bn6tl\" (UID: \"e0b8135e-a7a5-462c-9ba8-0d36b6778807\") " pod="openshift-marketplace/certified-operators-bn6tl" Nov 25 09:03:05 crc kubenswrapper[5043]: I1125 09:03:05.920667 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0b8135e-a7a5-462c-9ba8-0d36b6778807-catalog-content\") pod \"certified-operators-bn6tl\" (UID: \"e0b8135e-a7a5-462c-9ba8-0d36b6778807\") " pod="openshift-marketplace/certified-operators-bn6tl" Nov 25 09:03:05 crc kubenswrapper[5043]: I1125 09:03:05.940058 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c7ls\" (UniqueName: \"kubernetes.io/projected/e0b8135e-a7a5-462c-9ba8-0d36b6778807-kube-api-access-7c7ls\") pod \"certified-operators-bn6tl\" (UID: \"e0b8135e-a7a5-462c-9ba8-0d36b6778807\") " pod="openshift-marketplace/certified-operators-bn6tl" Nov 25 09:03:06 crc kubenswrapper[5043]: I1125 09:03:06.047497 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bn6tl" Nov 25 09:03:06 crc kubenswrapper[5043]: I1125 09:03:06.591967 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bn6tl"] Nov 25 09:03:07 crc kubenswrapper[5043]: I1125 09:03:07.257248 5043 generic.go:334] "Generic (PLEG): container finished" podID="e0b8135e-a7a5-462c-9ba8-0d36b6778807" containerID="9b80cf58dac638c5d45b4ca815d36e3a638b0643a89bfc59293db0f94e399b5f" exitCode=0 Nov 25 09:03:07 crc kubenswrapper[5043]: I1125 09:03:07.257555 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn6tl" event={"ID":"e0b8135e-a7a5-462c-9ba8-0d36b6778807","Type":"ContainerDied","Data":"9b80cf58dac638c5d45b4ca815d36e3a638b0643a89bfc59293db0f94e399b5f"} Nov 25 09:03:07 crc kubenswrapper[5043]: I1125 09:03:07.257624 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn6tl" event={"ID":"e0b8135e-a7a5-462c-9ba8-0d36b6778807","Type":"ContainerStarted","Data":"34ffd990f6d145389aefab2d69e142dae1d10252565575956ee8d61f798add13"} Nov 25 09:03:08 crc kubenswrapper[5043]: I1125 09:03:08.963297 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:03:08 crc kubenswrapper[5043]: E1125 09:03:08.963885 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:03:12 crc kubenswrapper[5043]: I1125 09:03:12.320102 5043 generic.go:334] "Generic (PLEG): container finished" podID="e0b8135e-a7a5-462c-9ba8-0d36b6778807" 
containerID="0c109e6c4f63a79af230b8fd38ebd45c73365af043c3be54f12816865afa1e7d" exitCode=0 Nov 25 09:03:12 crc kubenswrapper[5043]: I1125 09:03:12.320197 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn6tl" event={"ID":"e0b8135e-a7a5-462c-9ba8-0d36b6778807","Type":"ContainerDied","Data":"0c109e6c4f63a79af230b8fd38ebd45c73365af043c3be54f12816865afa1e7d"} Nov 25 09:03:13 crc kubenswrapper[5043]: I1125 09:03:13.334527 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn6tl" event={"ID":"e0b8135e-a7a5-462c-9ba8-0d36b6778807","Type":"ContainerStarted","Data":"26b6357fb5f67a1b11cc99ad060a310fac99d2a1a000447de17e1ee0874b192e"} Nov 25 09:03:13 crc kubenswrapper[5043]: I1125 09:03:13.361431 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bn6tl" podStartSLOduration=2.86968108 podStartE2EDuration="8.361409041s" podCreationTimestamp="2025-11-25 09:03:05 +0000 UTC" firstStartedPulling="2025-11-25 09:03:07.259409715 +0000 UTC m=+6451.427605456" lastFinishedPulling="2025-11-25 09:03:12.751137696 +0000 UTC m=+6456.919333417" observedRunningTime="2025-11-25 09:03:13.35281654 +0000 UTC m=+6457.521012301" watchObservedRunningTime="2025-11-25 09:03:13.361409041 +0000 UTC m=+6457.529604782" Nov 25 09:03:16 crc kubenswrapper[5043]: I1125 09:03:16.047665 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bn6tl" Nov 25 09:03:16 crc kubenswrapper[5043]: I1125 09:03:16.048594 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bn6tl" Nov 25 09:03:16 crc kubenswrapper[5043]: I1125 09:03:16.111155 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bn6tl" Nov 25 09:03:23 crc kubenswrapper[5043]: I1125 
09:03:23.963908 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:03:23 crc kubenswrapper[5043]: E1125 09:03:23.964856 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:03:26 crc kubenswrapper[5043]: I1125 09:03:26.105769 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bn6tl" Nov 25 09:03:26 crc kubenswrapper[5043]: I1125 09:03:26.191538 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bn6tl"] Nov 25 09:03:26 crc kubenswrapper[5043]: I1125 09:03:26.250221 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wtzwd"] Nov 25 09:03:26 crc kubenswrapper[5043]: I1125 09:03:26.250565 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wtzwd" podUID="e8c0d1a1-f41e-485e-9e10-f897895ee5f4" containerName="registry-server" containerID="cri-o://90bd7d88733ed5b97439021a66792258008dfbef5e71f3243e9986efabcdb2f2" gracePeriod=2 Nov 25 09:03:26 crc kubenswrapper[5043]: I1125 09:03:26.699671 5043 generic.go:334] "Generic (PLEG): container finished" podID="e8c0d1a1-f41e-485e-9e10-f897895ee5f4" containerID="90bd7d88733ed5b97439021a66792258008dfbef5e71f3243e9986efabcdb2f2" exitCode=0 Nov 25 09:03:26 crc kubenswrapper[5043]: I1125 09:03:26.699748 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wtzwd" 
event={"ID":"e8c0d1a1-f41e-485e-9e10-f897895ee5f4","Type":"ContainerDied","Data":"90bd7d88733ed5b97439021a66792258008dfbef5e71f3243e9986efabcdb2f2"} Nov 25 09:03:26 crc kubenswrapper[5043]: I1125 09:03:26.846153 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wtzwd" Nov 25 09:03:26 crc kubenswrapper[5043]: I1125 09:03:26.984930 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c0d1a1-f41e-485e-9e10-f897895ee5f4-catalog-content\") pod \"e8c0d1a1-f41e-485e-9e10-f897895ee5f4\" (UID: \"e8c0d1a1-f41e-485e-9e10-f897895ee5f4\") " Nov 25 09:03:26 crc kubenswrapper[5043]: I1125 09:03:26.985269 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c0d1a1-f41e-485e-9e10-f897895ee5f4-utilities\") pod \"e8c0d1a1-f41e-485e-9e10-f897895ee5f4\" (UID: \"e8c0d1a1-f41e-485e-9e10-f897895ee5f4\") " Nov 25 09:03:26 crc kubenswrapper[5043]: I1125 09:03:26.985391 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcsk9\" (UniqueName: \"kubernetes.io/projected/e8c0d1a1-f41e-485e-9e10-f897895ee5f4-kube-api-access-rcsk9\") pod \"e8c0d1a1-f41e-485e-9e10-f897895ee5f4\" (UID: \"e8c0d1a1-f41e-485e-9e10-f897895ee5f4\") " Nov 25 09:03:26 crc kubenswrapper[5043]: I1125 09:03:26.985715 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8c0d1a1-f41e-485e-9e10-f897895ee5f4-utilities" (OuterVolumeSpecName: "utilities") pod "e8c0d1a1-f41e-485e-9e10-f897895ee5f4" (UID: "e8c0d1a1-f41e-485e-9e10-f897895ee5f4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:03:26 crc kubenswrapper[5043]: I1125 09:03:26.986377 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c0d1a1-f41e-485e-9e10-f897895ee5f4-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:03:26 crc kubenswrapper[5043]: I1125 09:03:26.996447 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8c0d1a1-f41e-485e-9e10-f897895ee5f4-kube-api-access-rcsk9" (OuterVolumeSpecName: "kube-api-access-rcsk9") pod "e8c0d1a1-f41e-485e-9e10-f897895ee5f4" (UID: "e8c0d1a1-f41e-485e-9e10-f897895ee5f4"). InnerVolumeSpecName "kube-api-access-rcsk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:03:27 crc kubenswrapper[5043]: I1125 09:03:27.067761 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8c0d1a1-f41e-485e-9e10-f897895ee5f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8c0d1a1-f41e-485e-9e10-f897895ee5f4" (UID: "e8c0d1a1-f41e-485e-9e10-f897895ee5f4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:03:27 crc kubenswrapper[5043]: I1125 09:03:27.088862 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcsk9\" (UniqueName: \"kubernetes.io/projected/e8c0d1a1-f41e-485e-9e10-f897895ee5f4-kube-api-access-rcsk9\") on node \"crc\" DevicePath \"\"" Nov 25 09:03:27 crc kubenswrapper[5043]: I1125 09:03:27.088898 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c0d1a1-f41e-485e-9e10-f897895ee5f4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:03:27 crc kubenswrapper[5043]: I1125 09:03:27.717282 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wtzwd" event={"ID":"e8c0d1a1-f41e-485e-9e10-f897895ee5f4","Type":"ContainerDied","Data":"c20aabe0d1d0d80c5f004686061d2f88ca91998859a2a587ba9d61bff81b30a2"} Nov 25 09:03:27 crc kubenswrapper[5043]: I1125 09:03:27.717340 5043 scope.go:117] "RemoveContainer" containerID="90bd7d88733ed5b97439021a66792258008dfbef5e71f3243e9986efabcdb2f2" Nov 25 09:03:27 crc kubenswrapper[5043]: I1125 09:03:27.717399 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wtzwd" Nov 25 09:03:27 crc kubenswrapper[5043]: I1125 09:03:27.759650 5043 scope.go:117] "RemoveContainer" containerID="d4efc7bef7a52e2d9fbb9d11da1939bd84b3a28bd0c56a3ebf8317801c94e159" Nov 25 09:03:27 crc kubenswrapper[5043]: I1125 09:03:27.774234 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wtzwd"] Nov 25 09:03:27 crc kubenswrapper[5043]: I1125 09:03:27.784792 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wtzwd"] Nov 25 09:03:27 crc kubenswrapper[5043]: I1125 09:03:27.792782 5043 scope.go:117] "RemoveContainer" containerID="209a46f188029b7886c33284f77dd52ea18eb1485b486071469004cd77fff744" Nov 25 09:03:28 crc kubenswrapper[5043]: I1125 09:03:28.976046 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8c0d1a1-f41e-485e-9e10-f897895ee5f4" path="/var/lib/kubelet/pods/e8c0d1a1-f41e-485e-9e10-f897895ee5f4/volumes" Nov 25 09:03:35 crc kubenswrapper[5043]: I1125 09:03:35.963305 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:03:35 crc kubenswrapper[5043]: E1125 09:03:35.964390 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:03:49 crc kubenswrapper[5043]: I1125 09:03:49.962865 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:03:49 crc kubenswrapper[5043]: E1125 09:03:49.964397 5043 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:04:01 crc kubenswrapper[5043]: I1125 09:04:01.963652 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:04:01 crc kubenswrapper[5043]: E1125 09:04:01.966960 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:04:13 crc kubenswrapper[5043]: I1125 09:04:13.962519 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:04:13 crc kubenswrapper[5043]: E1125 09:04:13.963463 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:04:27 crc kubenswrapper[5043]: I1125 09:04:27.962509 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:04:27 crc kubenswrapper[5043]: E1125 09:04:27.963378 5043 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:04:39 crc kubenswrapper[5043]: I1125 09:04:39.962664 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:04:39 crc kubenswrapper[5043]: E1125 09:04:39.963483 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:04:52 crc kubenswrapper[5043]: I1125 09:04:52.963067 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:04:52 crc kubenswrapper[5043]: E1125 09:04:52.963815 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:05:07 crc kubenswrapper[5043]: I1125 09:05:07.405216 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nmwkf"] Nov 25 09:05:07 crc kubenswrapper[5043]: E1125 09:05:07.406250 5043 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c0d1a1-f41e-485e-9e10-f897895ee5f4" containerName="extract-content" Nov 25 09:05:07 crc kubenswrapper[5043]: I1125 09:05:07.406266 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c0d1a1-f41e-485e-9e10-f897895ee5f4" containerName="extract-content" Nov 25 09:05:07 crc kubenswrapper[5043]: E1125 09:05:07.406319 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c0d1a1-f41e-485e-9e10-f897895ee5f4" containerName="registry-server" Nov 25 09:05:07 crc kubenswrapper[5043]: I1125 09:05:07.406329 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c0d1a1-f41e-485e-9e10-f897895ee5f4" containerName="registry-server" Nov 25 09:05:07 crc kubenswrapper[5043]: E1125 09:05:07.406342 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c0d1a1-f41e-485e-9e10-f897895ee5f4" containerName="extract-utilities" Nov 25 09:05:07 crc kubenswrapper[5043]: I1125 09:05:07.406352 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c0d1a1-f41e-485e-9e10-f897895ee5f4" containerName="extract-utilities" Nov 25 09:05:07 crc kubenswrapper[5043]: I1125 09:05:07.406590 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8c0d1a1-f41e-485e-9e10-f897895ee5f4" containerName="registry-server" Nov 25 09:05:07 crc kubenswrapper[5043]: I1125 09:05:07.408339 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nmwkf" Nov 25 09:05:07 crc kubenswrapper[5043]: I1125 09:05:07.415739 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nmwkf"] Nov 25 09:05:07 crc kubenswrapper[5043]: I1125 09:05:07.499074 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6thq\" (UniqueName: \"kubernetes.io/projected/c9319523-33a1-47e0-9db1-406088fe12f4-kube-api-access-j6thq\") pod \"redhat-operators-nmwkf\" (UID: \"c9319523-33a1-47e0-9db1-406088fe12f4\") " pod="openshift-marketplace/redhat-operators-nmwkf" Nov 25 09:05:07 crc kubenswrapper[5043]: I1125 09:05:07.499479 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9319523-33a1-47e0-9db1-406088fe12f4-utilities\") pod \"redhat-operators-nmwkf\" (UID: \"c9319523-33a1-47e0-9db1-406088fe12f4\") " pod="openshift-marketplace/redhat-operators-nmwkf" Nov 25 09:05:07 crc kubenswrapper[5043]: I1125 09:05:07.499633 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9319523-33a1-47e0-9db1-406088fe12f4-catalog-content\") pod \"redhat-operators-nmwkf\" (UID: \"c9319523-33a1-47e0-9db1-406088fe12f4\") " pod="openshift-marketplace/redhat-operators-nmwkf" Nov 25 09:05:07 crc kubenswrapper[5043]: I1125 09:05:07.601871 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6thq\" (UniqueName: \"kubernetes.io/projected/c9319523-33a1-47e0-9db1-406088fe12f4-kube-api-access-j6thq\") pod \"redhat-operators-nmwkf\" (UID: \"c9319523-33a1-47e0-9db1-406088fe12f4\") " pod="openshift-marketplace/redhat-operators-nmwkf" Nov 25 09:05:07 crc kubenswrapper[5043]: I1125 09:05:07.602012 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9319523-33a1-47e0-9db1-406088fe12f4-utilities\") pod \"redhat-operators-nmwkf\" (UID: \"c9319523-33a1-47e0-9db1-406088fe12f4\") " pod="openshift-marketplace/redhat-operators-nmwkf" Nov 25 09:05:07 crc kubenswrapper[5043]: I1125 09:05:07.602041 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9319523-33a1-47e0-9db1-406088fe12f4-catalog-content\") pod \"redhat-operators-nmwkf\" (UID: \"c9319523-33a1-47e0-9db1-406088fe12f4\") " pod="openshift-marketplace/redhat-operators-nmwkf" Nov 25 09:05:07 crc kubenswrapper[5043]: I1125 09:05:07.602514 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9319523-33a1-47e0-9db1-406088fe12f4-utilities\") pod \"redhat-operators-nmwkf\" (UID: \"c9319523-33a1-47e0-9db1-406088fe12f4\") " pod="openshift-marketplace/redhat-operators-nmwkf" Nov 25 09:05:07 crc kubenswrapper[5043]: I1125 09:05:07.602563 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9319523-33a1-47e0-9db1-406088fe12f4-catalog-content\") pod \"redhat-operators-nmwkf\" (UID: \"c9319523-33a1-47e0-9db1-406088fe12f4\") " pod="openshift-marketplace/redhat-operators-nmwkf" Nov 25 09:05:07 crc kubenswrapper[5043]: I1125 09:05:07.629882 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6thq\" (UniqueName: \"kubernetes.io/projected/c9319523-33a1-47e0-9db1-406088fe12f4-kube-api-access-j6thq\") pod \"redhat-operators-nmwkf\" (UID: \"c9319523-33a1-47e0-9db1-406088fe12f4\") " pod="openshift-marketplace/redhat-operators-nmwkf" Nov 25 09:05:07 crc kubenswrapper[5043]: I1125 09:05:07.732750 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nmwkf" Nov 25 09:05:07 crc kubenswrapper[5043]: I1125 09:05:07.962357 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:05:07 crc kubenswrapper[5043]: E1125 09:05:07.962793 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:05:08 crc kubenswrapper[5043]: I1125 09:05:08.208370 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nmwkf"] Nov 25 09:05:08 crc kubenswrapper[5043]: I1125 09:05:08.729057 5043 generic.go:334] "Generic (PLEG): container finished" podID="c9319523-33a1-47e0-9db1-406088fe12f4" containerID="ff88edf028c8b99df847474833269e5836f8343ad1aef587ab7b62d4edb6ce40" exitCode=0 Nov 25 09:05:08 crc kubenswrapper[5043]: I1125 09:05:08.729116 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmwkf" event={"ID":"c9319523-33a1-47e0-9db1-406088fe12f4","Type":"ContainerDied","Data":"ff88edf028c8b99df847474833269e5836f8343ad1aef587ab7b62d4edb6ce40"} Nov 25 09:05:08 crc kubenswrapper[5043]: I1125 09:05:08.729357 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmwkf" event={"ID":"c9319523-33a1-47e0-9db1-406088fe12f4","Type":"ContainerStarted","Data":"13f724445def52201a706f881a1eb7fb9a0f1f593e2fcb05a6a57b83a1cce2c6"} Nov 25 09:05:08 crc kubenswrapper[5043]: I1125 09:05:08.731132 5043 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 09:05:10 crc 
kubenswrapper[5043]: I1125 09:05:10.759973 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmwkf" event={"ID":"c9319523-33a1-47e0-9db1-406088fe12f4","Type":"ContainerStarted","Data":"462a2b0615af67fc9be89836d5e216ab6ca98035dbc0cabff79151cf1e99d3fe"} Nov 25 09:05:15 crc kubenswrapper[5043]: I1125 09:05:15.810421 5043 generic.go:334] "Generic (PLEG): container finished" podID="c9319523-33a1-47e0-9db1-406088fe12f4" containerID="462a2b0615af67fc9be89836d5e216ab6ca98035dbc0cabff79151cf1e99d3fe" exitCode=0 Nov 25 09:05:15 crc kubenswrapper[5043]: I1125 09:05:15.810516 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmwkf" event={"ID":"c9319523-33a1-47e0-9db1-406088fe12f4","Type":"ContainerDied","Data":"462a2b0615af67fc9be89836d5e216ab6ca98035dbc0cabff79151cf1e99d3fe"} Nov 25 09:05:16 crc kubenswrapper[5043]: I1125 09:05:16.827249 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmwkf" event={"ID":"c9319523-33a1-47e0-9db1-406088fe12f4","Type":"ContainerStarted","Data":"2a520265f29074e0d5b141c42e61bdf12ceef20d7c7713faf5427d0ebffed1c5"} Nov 25 09:05:16 crc kubenswrapper[5043]: I1125 09:05:16.852935 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nmwkf" podStartSLOduration=2.102643303 podStartE2EDuration="9.852910048s" podCreationTimestamp="2025-11-25 09:05:07 +0000 UTC" firstStartedPulling="2025-11-25 09:05:08.730840171 +0000 UTC m=+6572.899035912" lastFinishedPulling="2025-11-25 09:05:16.481106936 +0000 UTC m=+6580.649302657" observedRunningTime="2025-11-25 09:05:16.844180493 +0000 UTC m=+6581.012376214" watchObservedRunningTime="2025-11-25 09:05:16.852910048 +0000 UTC m=+6581.021105799" Nov 25 09:05:17 crc kubenswrapper[5043]: I1125 09:05:17.733565 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-nmwkf" Nov 25 09:05:17 crc kubenswrapper[5043]: I1125 09:05:17.733663 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nmwkf" Nov 25 09:05:18 crc kubenswrapper[5043]: I1125 09:05:18.781373 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nmwkf" podUID="c9319523-33a1-47e0-9db1-406088fe12f4" containerName="registry-server" probeResult="failure" output=< Nov 25 09:05:18 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s Nov 25 09:05:18 crc kubenswrapper[5043]: > Nov 25 09:05:18 crc kubenswrapper[5043]: I1125 09:05:18.963578 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:05:18 crc kubenswrapper[5043]: E1125 09:05:18.963948 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:05:27 crc kubenswrapper[5043]: I1125 09:05:27.806644 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nmwkf" Nov 25 09:05:27 crc kubenswrapper[5043]: I1125 09:05:27.866990 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nmwkf" Nov 25 09:05:28 crc kubenswrapper[5043]: I1125 09:05:28.065577 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nmwkf"] Nov 25 09:05:28 crc kubenswrapper[5043]: I1125 09:05:28.951975 5043 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/redhat-operators-nmwkf" podUID="c9319523-33a1-47e0-9db1-406088fe12f4" containerName="registry-server" containerID="cri-o://2a520265f29074e0d5b141c42e61bdf12ceef20d7c7713faf5427d0ebffed1c5" gracePeriod=2 Nov 25 09:05:29 crc kubenswrapper[5043]: I1125 09:05:29.572490 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nmwkf" Nov 25 09:05:29 crc kubenswrapper[5043]: I1125 09:05:29.590099 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9319523-33a1-47e0-9db1-406088fe12f4-catalog-content\") pod \"c9319523-33a1-47e0-9db1-406088fe12f4\" (UID: \"c9319523-33a1-47e0-9db1-406088fe12f4\") " Nov 25 09:05:29 crc kubenswrapper[5043]: I1125 09:05:29.590378 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6thq\" (UniqueName: \"kubernetes.io/projected/c9319523-33a1-47e0-9db1-406088fe12f4-kube-api-access-j6thq\") pod \"c9319523-33a1-47e0-9db1-406088fe12f4\" (UID: \"c9319523-33a1-47e0-9db1-406088fe12f4\") " Nov 25 09:05:29 crc kubenswrapper[5043]: I1125 09:05:29.590670 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9319523-33a1-47e0-9db1-406088fe12f4-utilities\") pod \"c9319523-33a1-47e0-9db1-406088fe12f4\" (UID: \"c9319523-33a1-47e0-9db1-406088fe12f4\") " Nov 25 09:05:29 crc kubenswrapper[5043]: I1125 09:05:29.591507 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9319523-33a1-47e0-9db1-406088fe12f4-utilities" (OuterVolumeSpecName: "utilities") pod "c9319523-33a1-47e0-9db1-406088fe12f4" (UID: "c9319523-33a1-47e0-9db1-406088fe12f4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:05:29 crc kubenswrapper[5043]: I1125 09:05:29.600852 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9319523-33a1-47e0-9db1-406088fe12f4-kube-api-access-j6thq" (OuterVolumeSpecName: "kube-api-access-j6thq") pod "c9319523-33a1-47e0-9db1-406088fe12f4" (UID: "c9319523-33a1-47e0-9db1-406088fe12f4"). InnerVolumeSpecName "kube-api-access-j6thq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:05:29 crc kubenswrapper[5043]: I1125 09:05:29.692971 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9319523-33a1-47e0-9db1-406088fe12f4-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:05:29 crc kubenswrapper[5043]: I1125 09:05:29.693009 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6thq\" (UniqueName: \"kubernetes.io/projected/c9319523-33a1-47e0-9db1-406088fe12f4-kube-api-access-j6thq\") on node \"crc\" DevicePath \"\"" Nov 25 09:05:29 crc kubenswrapper[5043]: I1125 09:05:29.694426 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9319523-33a1-47e0-9db1-406088fe12f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9319523-33a1-47e0-9db1-406088fe12f4" (UID: "c9319523-33a1-47e0-9db1-406088fe12f4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:05:29 crc kubenswrapper[5043]: I1125 09:05:29.795062 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9319523-33a1-47e0-9db1-406088fe12f4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:05:29 crc kubenswrapper[5043]: I1125 09:05:29.963482 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:05:29 crc kubenswrapper[5043]: I1125 09:05:29.963504 5043 generic.go:334] "Generic (PLEG): container finished" podID="c9319523-33a1-47e0-9db1-406088fe12f4" containerID="2a520265f29074e0d5b141c42e61bdf12ceef20d7c7713faf5427d0ebffed1c5" exitCode=0 Nov 25 09:05:29 crc kubenswrapper[5043]: I1125 09:05:29.963547 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nmwkf" Nov 25 09:05:29 crc kubenswrapper[5043]: I1125 09:05:29.963562 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmwkf" event={"ID":"c9319523-33a1-47e0-9db1-406088fe12f4","Type":"ContainerDied","Data":"2a520265f29074e0d5b141c42e61bdf12ceef20d7c7713faf5427d0ebffed1c5"} Nov 25 09:05:29 crc kubenswrapper[5043]: I1125 09:05:29.963662 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmwkf" event={"ID":"c9319523-33a1-47e0-9db1-406088fe12f4","Type":"ContainerDied","Data":"13f724445def52201a706f881a1eb7fb9a0f1f593e2fcb05a6a57b83a1cce2c6"} Nov 25 09:05:29 crc kubenswrapper[5043]: I1125 09:05:29.963698 5043 scope.go:117] "RemoveContainer" containerID="2a520265f29074e0d5b141c42e61bdf12ceef20d7c7713faf5427d0ebffed1c5" Nov 25 09:05:29 crc kubenswrapper[5043]: E1125 09:05:29.963893 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:05:30 crc kubenswrapper[5043]: I1125 09:05:30.013211 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nmwkf"] Nov 25 09:05:30 crc kubenswrapper[5043]: I1125 09:05:30.019454 5043 scope.go:117] "RemoveContainer" containerID="462a2b0615af67fc9be89836d5e216ab6ca98035dbc0cabff79151cf1e99d3fe" Nov 25 09:05:30 crc kubenswrapper[5043]: I1125 09:05:30.023281 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nmwkf"] Nov 25 09:05:30 crc kubenswrapper[5043]: I1125 09:05:30.059421 5043 scope.go:117] "RemoveContainer" containerID="ff88edf028c8b99df847474833269e5836f8343ad1aef587ab7b62d4edb6ce40" Nov 25 09:05:30 crc kubenswrapper[5043]: I1125 09:05:30.092718 5043 scope.go:117] "RemoveContainer" containerID="2a520265f29074e0d5b141c42e61bdf12ceef20d7c7713faf5427d0ebffed1c5" Nov 25 09:05:30 crc kubenswrapper[5043]: E1125 09:05:30.095426 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a520265f29074e0d5b141c42e61bdf12ceef20d7c7713faf5427d0ebffed1c5\": container with ID starting with 2a520265f29074e0d5b141c42e61bdf12ceef20d7c7713faf5427d0ebffed1c5 not found: ID does not exist" containerID="2a520265f29074e0d5b141c42e61bdf12ceef20d7c7713faf5427d0ebffed1c5" Nov 25 09:05:30 crc kubenswrapper[5043]: I1125 09:05:30.095474 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a520265f29074e0d5b141c42e61bdf12ceef20d7c7713faf5427d0ebffed1c5"} err="failed to get container status \"2a520265f29074e0d5b141c42e61bdf12ceef20d7c7713faf5427d0ebffed1c5\": rpc error: code = NotFound desc = could not find 
container \"2a520265f29074e0d5b141c42e61bdf12ceef20d7c7713faf5427d0ebffed1c5\": container with ID starting with 2a520265f29074e0d5b141c42e61bdf12ceef20d7c7713faf5427d0ebffed1c5 not found: ID does not exist" Nov 25 09:05:30 crc kubenswrapper[5043]: I1125 09:05:30.095502 5043 scope.go:117] "RemoveContainer" containerID="462a2b0615af67fc9be89836d5e216ab6ca98035dbc0cabff79151cf1e99d3fe" Nov 25 09:05:30 crc kubenswrapper[5043]: E1125 09:05:30.095911 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"462a2b0615af67fc9be89836d5e216ab6ca98035dbc0cabff79151cf1e99d3fe\": container with ID starting with 462a2b0615af67fc9be89836d5e216ab6ca98035dbc0cabff79151cf1e99d3fe not found: ID does not exist" containerID="462a2b0615af67fc9be89836d5e216ab6ca98035dbc0cabff79151cf1e99d3fe" Nov 25 09:05:30 crc kubenswrapper[5043]: I1125 09:05:30.095935 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"462a2b0615af67fc9be89836d5e216ab6ca98035dbc0cabff79151cf1e99d3fe"} err="failed to get container status \"462a2b0615af67fc9be89836d5e216ab6ca98035dbc0cabff79151cf1e99d3fe\": rpc error: code = NotFound desc = could not find container \"462a2b0615af67fc9be89836d5e216ab6ca98035dbc0cabff79151cf1e99d3fe\": container with ID starting with 462a2b0615af67fc9be89836d5e216ab6ca98035dbc0cabff79151cf1e99d3fe not found: ID does not exist" Nov 25 09:05:30 crc kubenswrapper[5043]: I1125 09:05:30.095948 5043 scope.go:117] "RemoveContainer" containerID="ff88edf028c8b99df847474833269e5836f8343ad1aef587ab7b62d4edb6ce40" Nov 25 09:05:30 crc kubenswrapper[5043]: E1125 09:05:30.096434 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff88edf028c8b99df847474833269e5836f8343ad1aef587ab7b62d4edb6ce40\": container with ID starting with ff88edf028c8b99df847474833269e5836f8343ad1aef587ab7b62d4edb6ce40 not found: ID does 
not exist" containerID="ff88edf028c8b99df847474833269e5836f8343ad1aef587ab7b62d4edb6ce40" Nov 25 09:05:30 crc kubenswrapper[5043]: I1125 09:05:30.096483 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff88edf028c8b99df847474833269e5836f8343ad1aef587ab7b62d4edb6ce40"} err="failed to get container status \"ff88edf028c8b99df847474833269e5836f8343ad1aef587ab7b62d4edb6ce40\": rpc error: code = NotFound desc = could not find container \"ff88edf028c8b99df847474833269e5836f8343ad1aef587ab7b62d4edb6ce40\": container with ID starting with ff88edf028c8b99df847474833269e5836f8343ad1aef587ab7b62d4edb6ce40 not found: ID does not exist" Nov 25 09:05:30 crc kubenswrapper[5043]: I1125 09:05:30.976813 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9319523-33a1-47e0-9db1-406088fe12f4" path="/var/lib/kubelet/pods/c9319523-33a1-47e0-9db1-406088fe12f4/volumes" Nov 25 09:05:43 crc kubenswrapper[5043]: I1125 09:05:43.963414 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:05:43 crc kubenswrapper[5043]: E1125 09:05:43.964440 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:05:58 crc kubenswrapper[5043]: I1125 09:05:58.964357 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:05:58 crc kubenswrapper[5043]: E1125 09:05:58.965228 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:06:09 crc kubenswrapper[5043]: I1125 09:06:09.963067 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:06:09 crc kubenswrapper[5043]: E1125 09:06:09.964275 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:06:21 crc kubenswrapper[5043]: I1125 09:06:21.962868 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:06:21 crc kubenswrapper[5043]: E1125 09:06:21.964878 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:06:34 crc kubenswrapper[5043]: I1125 09:06:34.963282 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:06:34 crc kubenswrapper[5043]: E1125 09:06:34.964103 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:06:46 crc kubenswrapper[5043]: I1125 09:06:46.971268 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:06:46 crc kubenswrapper[5043]: E1125 09:06:46.972135 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:06:57 crc kubenswrapper[5043]: I1125 09:06:57.962541 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:06:58 crc kubenswrapper[5043]: I1125 09:06:58.844372 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"1479e40cdacc7fddfe42763510ee2eaeac96105f906a9154857042c4a011fdea"} Nov 25 09:09:00 crc kubenswrapper[5043]: I1125 09:09:00.001736 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f2wsv"] Nov 25 09:09:00 crc kubenswrapper[5043]: E1125 09:09:00.002752 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9319523-33a1-47e0-9db1-406088fe12f4" containerName="extract-utilities" Nov 25 09:09:00 crc kubenswrapper[5043]: I1125 09:09:00.002770 5043 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c9319523-33a1-47e0-9db1-406088fe12f4" containerName="extract-utilities" Nov 25 09:09:00 crc kubenswrapper[5043]: E1125 09:09:00.002812 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9319523-33a1-47e0-9db1-406088fe12f4" containerName="registry-server" Nov 25 09:09:00 crc kubenswrapper[5043]: I1125 09:09:00.002821 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9319523-33a1-47e0-9db1-406088fe12f4" containerName="registry-server" Nov 25 09:09:00 crc kubenswrapper[5043]: E1125 09:09:00.002838 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9319523-33a1-47e0-9db1-406088fe12f4" containerName="extract-content" Nov 25 09:09:00 crc kubenswrapper[5043]: I1125 09:09:00.002847 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9319523-33a1-47e0-9db1-406088fe12f4" containerName="extract-content" Nov 25 09:09:00 crc kubenswrapper[5043]: I1125 09:09:00.003095 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9319523-33a1-47e0-9db1-406088fe12f4" containerName="registry-server" Nov 25 09:09:00 crc kubenswrapper[5043]: I1125 09:09:00.004794 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f2wsv" Nov 25 09:09:00 crc kubenswrapper[5043]: I1125 09:09:00.021882 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f2wsv"] Nov 25 09:09:00 crc kubenswrapper[5043]: I1125 09:09:00.109977 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp7hv\" (UniqueName: \"kubernetes.io/projected/5681979d-e72e-4551-9f58-0cd780611873-kube-api-access-wp7hv\") pod \"community-operators-f2wsv\" (UID: \"5681979d-e72e-4551-9f58-0cd780611873\") " pod="openshift-marketplace/community-operators-f2wsv" Nov 25 09:09:00 crc kubenswrapper[5043]: I1125 09:09:00.110150 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5681979d-e72e-4551-9f58-0cd780611873-utilities\") pod \"community-operators-f2wsv\" (UID: \"5681979d-e72e-4551-9f58-0cd780611873\") " pod="openshift-marketplace/community-operators-f2wsv" Nov 25 09:09:00 crc kubenswrapper[5043]: I1125 09:09:00.110733 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5681979d-e72e-4551-9f58-0cd780611873-catalog-content\") pod \"community-operators-f2wsv\" (UID: \"5681979d-e72e-4551-9f58-0cd780611873\") " pod="openshift-marketplace/community-operators-f2wsv" Nov 25 09:09:00 crc kubenswrapper[5043]: I1125 09:09:00.213262 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5681979d-e72e-4551-9f58-0cd780611873-utilities\") pod \"community-operators-f2wsv\" (UID: \"5681979d-e72e-4551-9f58-0cd780611873\") " pod="openshift-marketplace/community-operators-f2wsv" Nov 25 09:09:00 crc kubenswrapper[5043]: I1125 09:09:00.213342 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5681979d-e72e-4551-9f58-0cd780611873-catalog-content\") pod \"community-operators-f2wsv\" (UID: \"5681979d-e72e-4551-9f58-0cd780611873\") " pod="openshift-marketplace/community-operators-f2wsv" Nov 25 09:09:00 crc kubenswrapper[5043]: I1125 09:09:00.213556 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp7hv\" (UniqueName: \"kubernetes.io/projected/5681979d-e72e-4551-9f58-0cd780611873-kube-api-access-wp7hv\") pod \"community-operators-f2wsv\" (UID: \"5681979d-e72e-4551-9f58-0cd780611873\") " pod="openshift-marketplace/community-operators-f2wsv" Nov 25 09:09:00 crc kubenswrapper[5043]: I1125 09:09:00.213890 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5681979d-e72e-4551-9f58-0cd780611873-utilities\") pod \"community-operators-f2wsv\" (UID: \"5681979d-e72e-4551-9f58-0cd780611873\") " pod="openshift-marketplace/community-operators-f2wsv" Nov 25 09:09:00 crc kubenswrapper[5043]: I1125 09:09:00.213986 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5681979d-e72e-4551-9f58-0cd780611873-catalog-content\") pod \"community-operators-f2wsv\" (UID: \"5681979d-e72e-4551-9f58-0cd780611873\") " pod="openshift-marketplace/community-operators-f2wsv" Nov 25 09:09:00 crc kubenswrapper[5043]: I1125 09:09:00.245907 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp7hv\" (UniqueName: \"kubernetes.io/projected/5681979d-e72e-4551-9f58-0cd780611873-kube-api-access-wp7hv\") pod \"community-operators-f2wsv\" (UID: \"5681979d-e72e-4551-9f58-0cd780611873\") " pod="openshift-marketplace/community-operators-f2wsv" Nov 25 09:09:00 crc kubenswrapper[5043]: I1125 09:09:00.332096 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f2wsv" Nov 25 09:09:00 crc kubenswrapper[5043]: I1125 09:09:00.864306 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f2wsv"] Nov 25 09:09:00 crc kubenswrapper[5043]: I1125 09:09:00.979195 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f2wsv" event={"ID":"5681979d-e72e-4551-9f58-0cd780611873","Type":"ContainerStarted","Data":"64bb19a20cfd1d71d44e0009c3375c2438d11470a3087361dbd6ecaa0fac1cf3"} Nov 25 09:09:01 crc kubenswrapper[5043]: I1125 09:09:01.995505 5043 generic.go:334] "Generic (PLEG): container finished" podID="5681979d-e72e-4551-9f58-0cd780611873" containerID="b9350638d78e284eaa6af0dcc039bbae954c0c7fa707e6b616460fff1ae73a4f" exitCode=0 Nov 25 09:09:01 crc kubenswrapper[5043]: I1125 09:09:01.995550 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f2wsv" event={"ID":"5681979d-e72e-4551-9f58-0cd780611873","Type":"ContainerDied","Data":"b9350638d78e284eaa6af0dcc039bbae954c0c7fa707e6b616460fff1ae73a4f"} Nov 25 09:09:02 crc kubenswrapper[5043]: I1125 09:09:02.798913 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r8bkv"] Nov 25 09:09:02 crc kubenswrapper[5043]: I1125 09:09:02.801480 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r8bkv" Nov 25 09:09:02 crc kubenswrapper[5043]: I1125 09:09:02.813674 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r8bkv"] Nov 25 09:09:02 crc kubenswrapper[5043]: I1125 09:09:02.866127 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8-catalog-content\") pod \"redhat-marketplace-r8bkv\" (UID: \"5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8\") " pod="openshift-marketplace/redhat-marketplace-r8bkv" Nov 25 09:09:02 crc kubenswrapper[5043]: I1125 09:09:02.866224 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8-utilities\") pod \"redhat-marketplace-r8bkv\" (UID: \"5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8\") " pod="openshift-marketplace/redhat-marketplace-r8bkv" Nov 25 09:09:02 crc kubenswrapper[5043]: I1125 09:09:02.866318 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpsdj\" (UniqueName: \"kubernetes.io/projected/5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8-kube-api-access-hpsdj\") pod \"redhat-marketplace-r8bkv\" (UID: \"5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8\") " pod="openshift-marketplace/redhat-marketplace-r8bkv" Nov 25 09:09:02 crc kubenswrapper[5043]: I1125 09:09:02.974299 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8-catalog-content\") pod \"redhat-marketplace-r8bkv\" (UID: \"5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8\") " pod="openshift-marketplace/redhat-marketplace-r8bkv" Nov 25 09:09:02 crc kubenswrapper[5043]: I1125 09:09:02.974483 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8-utilities\") pod \"redhat-marketplace-r8bkv\" (UID: \"5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8\") " pod="openshift-marketplace/redhat-marketplace-r8bkv" Nov 25 09:09:02 crc kubenswrapper[5043]: I1125 09:09:02.974717 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpsdj\" (UniqueName: \"kubernetes.io/projected/5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8-kube-api-access-hpsdj\") pod \"redhat-marketplace-r8bkv\" (UID: \"5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8\") " pod="openshift-marketplace/redhat-marketplace-r8bkv" Nov 25 09:09:02 crc kubenswrapper[5043]: I1125 09:09:02.974951 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8-catalog-content\") pod \"redhat-marketplace-r8bkv\" (UID: \"5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8\") " pod="openshift-marketplace/redhat-marketplace-r8bkv" Nov 25 09:09:02 crc kubenswrapper[5043]: I1125 09:09:02.975062 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8-utilities\") pod \"redhat-marketplace-r8bkv\" (UID: \"5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8\") " pod="openshift-marketplace/redhat-marketplace-r8bkv" Nov 25 09:09:03 crc kubenswrapper[5043]: I1125 09:09:03.003228 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpsdj\" (UniqueName: \"kubernetes.io/projected/5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8-kube-api-access-hpsdj\") pod \"redhat-marketplace-r8bkv\" (UID: \"5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8\") " pod="openshift-marketplace/redhat-marketplace-r8bkv" Nov 25 09:09:03 crc kubenswrapper[5043]: I1125 09:09:03.121475 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r8bkv" Nov 25 09:09:03 crc kubenswrapper[5043]: I1125 09:09:03.618209 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r8bkv"] Nov 25 09:09:04 crc kubenswrapper[5043]: I1125 09:09:04.018282 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r8bkv" event={"ID":"5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8","Type":"ContainerStarted","Data":"1c79bfae1e214689123522394a99813eb4d5af6c45b10523245758b328eda093"} Nov 25 09:09:04 crc kubenswrapper[5043]: I1125 09:09:04.018338 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r8bkv" event={"ID":"5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8","Type":"ContainerStarted","Data":"cd971e62077d1abf4ddb85d639625289f722784c422a0dc5782ec5cb89f45525"} Nov 25 09:09:04 crc kubenswrapper[5043]: I1125 09:09:04.021172 5043 generic.go:334] "Generic (PLEG): container finished" podID="5681979d-e72e-4551-9f58-0cd780611873" containerID="810170a8bda858d6a21d75cd894c230f1ea70dd484ea4fd8b5e910c9721607a6" exitCode=0 Nov 25 09:09:04 crc kubenswrapper[5043]: I1125 09:09:04.021211 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f2wsv" event={"ID":"5681979d-e72e-4551-9f58-0cd780611873","Type":"ContainerDied","Data":"810170a8bda858d6a21d75cd894c230f1ea70dd484ea4fd8b5e910c9721607a6"} Nov 25 09:09:05 crc kubenswrapper[5043]: I1125 09:09:05.037754 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f2wsv" event={"ID":"5681979d-e72e-4551-9f58-0cd780611873","Type":"ContainerStarted","Data":"5628b053dee7db6803c17430d9573b787bf379857d30093888ab3267a93a856e"} Nov 25 09:09:05 crc kubenswrapper[5043]: I1125 09:09:05.044043 5043 generic.go:334] "Generic (PLEG): container finished" podID="5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8" 
containerID="1c79bfae1e214689123522394a99813eb4d5af6c45b10523245758b328eda093" exitCode=0 Nov 25 09:09:05 crc kubenswrapper[5043]: I1125 09:09:05.044112 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r8bkv" event={"ID":"5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8","Type":"ContainerDied","Data":"1c79bfae1e214689123522394a99813eb4d5af6c45b10523245758b328eda093"} Nov 25 09:09:06 crc kubenswrapper[5043]: I1125 09:09:06.081330 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f2wsv" podStartSLOduration=4.297180765 podStartE2EDuration="7.08130766s" podCreationTimestamp="2025-11-25 09:08:59 +0000 UTC" firstStartedPulling="2025-11-25 09:09:01.999772245 +0000 UTC m=+6806.167967966" lastFinishedPulling="2025-11-25 09:09:04.78389915 +0000 UTC m=+6808.952094861" observedRunningTime="2025-11-25 09:09:06.078236957 +0000 UTC m=+6810.246432738" watchObservedRunningTime="2025-11-25 09:09:06.08130766 +0000 UTC m=+6810.249503381" Nov 25 09:09:07 crc kubenswrapper[5043]: I1125 09:09:07.067512 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r8bkv" event={"ID":"5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8","Type":"ContainerStarted","Data":"6bb2b2119ed4ede36c00dc3bf78f9d5220a7623107e39de5d942172b9972843f"} Nov 25 09:09:08 crc kubenswrapper[5043]: I1125 09:09:08.079914 5043 generic.go:334] "Generic (PLEG): container finished" podID="5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8" containerID="6bb2b2119ed4ede36c00dc3bf78f9d5220a7623107e39de5d942172b9972843f" exitCode=0 Nov 25 09:09:08 crc kubenswrapper[5043]: I1125 09:09:08.079970 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r8bkv" event={"ID":"5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8","Type":"ContainerDied","Data":"6bb2b2119ed4ede36c00dc3bf78f9d5220a7623107e39de5d942172b9972843f"} Nov 25 09:09:10 crc kubenswrapper[5043]: I1125 
09:09:10.104941 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r8bkv" event={"ID":"5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8","Type":"ContainerStarted","Data":"07ca5aeee91a3ce169851b234b02c91c22f89786936e9b3ab611289673e3d6bb"} Nov 25 09:09:10 crc kubenswrapper[5043]: I1125 09:09:10.333130 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f2wsv" Nov 25 09:09:10 crc kubenswrapper[5043]: I1125 09:09:10.333196 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f2wsv" Nov 25 09:09:10 crc kubenswrapper[5043]: I1125 09:09:10.408539 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f2wsv" Nov 25 09:09:11 crc kubenswrapper[5043]: I1125 09:09:11.140421 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r8bkv" podStartSLOduration=4.578043487 podStartE2EDuration="9.14039897s" podCreationTimestamp="2025-11-25 09:09:02 +0000 UTC" firstStartedPulling="2025-11-25 09:09:05.046042331 +0000 UTC m=+6809.214238092" lastFinishedPulling="2025-11-25 09:09:09.608397854 +0000 UTC m=+6813.776593575" observedRunningTime="2025-11-25 09:09:11.137528563 +0000 UTC m=+6815.305724294" watchObservedRunningTime="2025-11-25 09:09:11.14039897 +0000 UTC m=+6815.308594691" Nov 25 09:09:11 crc kubenswrapper[5043]: I1125 09:09:11.180868 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f2wsv" Nov 25 09:09:12 crc kubenswrapper[5043]: I1125 09:09:12.390704 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f2wsv"] Nov 25 09:09:13 crc kubenswrapper[5043]: I1125 09:09:13.122744 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-r8bkv" Nov 25 09:09:13 crc kubenswrapper[5043]: I1125 09:09:13.122837 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r8bkv" Nov 25 09:09:13 crc kubenswrapper[5043]: I1125 09:09:13.135207 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f2wsv" podUID="5681979d-e72e-4551-9f58-0cd780611873" containerName="registry-server" containerID="cri-o://5628b053dee7db6803c17430d9573b787bf379857d30093888ab3267a93a856e" gracePeriod=2 Nov 25 09:09:13 crc kubenswrapper[5043]: I1125 09:09:13.189659 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r8bkv" Nov 25 09:09:13 crc kubenswrapper[5043]: I1125 09:09:13.682406 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f2wsv" Nov 25 09:09:13 crc kubenswrapper[5043]: I1125 09:09:13.811350 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp7hv\" (UniqueName: \"kubernetes.io/projected/5681979d-e72e-4551-9f58-0cd780611873-kube-api-access-wp7hv\") pod \"5681979d-e72e-4551-9f58-0cd780611873\" (UID: \"5681979d-e72e-4551-9f58-0cd780611873\") " Nov 25 09:09:13 crc kubenswrapper[5043]: I1125 09:09:13.811744 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5681979d-e72e-4551-9f58-0cd780611873-catalog-content\") pod \"5681979d-e72e-4551-9f58-0cd780611873\" (UID: \"5681979d-e72e-4551-9f58-0cd780611873\") " Nov 25 09:09:13 crc kubenswrapper[5043]: I1125 09:09:13.812099 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5681979d-e72e-4551-9f58-0cd780611873-utilities\") pod 
\"5681979d-e72e-4551-9f58-0cd780611873\" (UID: \"5681979d-e72e-4551-9f58-0cd780611873\") " Nov 25 09:09:13 crc kubenswrapper[5043]: I1125 09:09:13.813092 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5681979d-e72e-4551-9f58-0cd780611873-utilities" (OuterVolumeSpecName: "utilities") pod "5681979d-e72e-4551-9f58-0cd780611873" (UID: "5681979d-e72e-4551-9f58-0cd780611873"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:09:13 crc kubenswrapper[5043]: I1125 09:09:13.817191 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5681979d-e72e-4551-9f58-0cd780611873-kube-api-access-wp7hv" (OuterVolumeSpecName: "kube-api-access-wp7hv") pod "5681979d-e72e-4551-9f58-0cd780611873" (UID: "5681979d-e72e-4551-9f58-0cd780611873"). InnerVolumeSpecName "kube-api-access-wp7hv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:09:13 crc kubenswrapper[5043]: I1125 09:09:13.869831 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5681979d-e72e-4551-9f58-0cd780611873-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5681979d-e72e-4551-9f58-0cd780611873" (UID: "5681979d-e72e-4551-9f58-0cd780611873"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:09:13 crc kubenswrapper[5043]: I1125 09:09:13.914040 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5681979d-e72e-4551-9f58-0cd780611873-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:09:13 crc kubenswrapper[5043]: I1125 09:09:13.914086 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp7hv\" (UniqueName: \"kubernetes.io/projected/5681979d-e72e-4551-9f58-0cd780611873-kube-api-access-wp7hv\") on node \"crc\" DevicePath \"\"" Nov 25 09:09:13 crc kubenswrapper[5043]: I1125 09:09:13.914098 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5681979d-e72e-4551-9f58-0cd780611873-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:09:14 crc kubenswrapper[5043]: I1125 09:09:14.148510 5043 generic.go:334] "Generic (PLEG): container finished" podID="5681979d-e72e-4551-9f58-0cd780611873" containerID="5628b053dee7db6803c17430d9573b787bf379857d30093888ab3267a93a856e" exitCode=0 Nov 25 09:09:14 crc kubenswrapper[5043]: I1125 09:09:14.148576 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f2wsv" Nov 25 09:09:14 crc kubenswrapper[5043]: I1125 09:09:14.148633 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f2wsv" event={"ID":"5681979d-e72e-4551-9f58-0cd780611873","Type":"ContainerDied","Data":"5628b053dee7db6803c17430d9573b787bf379857d30093888ab3267a93a856e"} Nov 25 09:09:14 crc kubenswrapper[5043]: I1125 09:09:14.148680 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f2wsv" event={"ID":"5681979d-e72e-4551-9f58-0cd780611873","Type":"ContainerDied","Data":"64bb19a20cfd1d71d44e0009c3375c2438d11470a3087361dbd6ecaa0fac1cf3"} Nov 25 09:09:14 crc kubenswrapper[5043]: I1125 09:09:14.148705 5043 scope.go:117] "RemoveContainer" containerID="5628b053dee7db6803c17430d9573b787bf379857d30093888ab3267a93a856e" Nov 25 09:09:14 crc kubenswrapper[5043]: I1125 09:09:14.174429 5043 scope.go:117] "RemoveContainer" containerID="810170a8bda858d6a21d75cd894c230f1ea70dd484ea4fd8b5e910c9721607a6" Nov 25 09:09:14 crc kubenswrapper[5043]: I1125 09:09:14.191195 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f2wsv"] Nov 25 09:09:14 crc kubenswrapper[5043]: I1125 09:09:14.206470 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f2wsv"] Nov 25 09:09:14 crc kubenswrapper[5043]: I1125 09:09:14.212105 5043 scope.go:117] "RemoveContainer" containerID="b9350638d78e284eaa6af0dcc039bbae954c0c7fa707e6b616460fff1ae73a4f" Nov 25 09:09:14 crc kubenswrapper[5043]: I1125 09:09:14.221726 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r8bkv" Nov 25 09:09:14 crc kubenswrapper[5043]: I1125 09:09:14.250571 5043 scope.go:117] "RemoveContainer" containerID="5628b053dee7db6803c17430d9573b787bf379857d30093888ab3267a93a856e" Nov 25 09:09:14 crc 
kubenswrapper[5043]: E1125 09:09:14.253031 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5628b053dee7db6803c17430d9573b787bf379857d30093888ab3267a93a856e\": container with ID starting with 5628b053dee7db6803c17430d9573b787bf379857d30093888ab3267a93a856e not found: ID does not exist" containerID="5628b053dee7db6803c17430d9573b787bf379857d30093888ab3267a93a856e" Nov 25 09:09:14 crc kubenswrapper[5043]: I1125 09:09:14.253272 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5628b053dee7db6803c17430d9573b787bf379857d30093888ab3267a93a856e"} err="failed to get container status \"5628b053dee7db6803c17430d9573b787bf379857d30093888ab3267a93a856e\": rpc error: code = NotFound desc = could not find container \"5628b053dee7db6803c17430d9573b787bf379857d30093888ab3267a93a856e\": container with ID starting with 5628b053dee7db6803c17430d9573b787bf379857d30093888ab3267a93a856e not found: ID does not exist" Nov 25 09:09:14 crc kubenswrapper[5043]: I1125 09:09:14.253305 5043 scope.go:117] "RemoveContainer" containerID="810170a8bda858d6a21d75cd894c230f1ea70dd484ea4fd8b5e910c9721607a6" Nov 25 09:09:14 crc kubenswrapper[5043]: E1125 09:09:14.254015 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"810170a8bda858d6a21d75cd894c230f1ea70dd484ea4fd8b5e910c9721607a6\": container with ID starting with 810170a8bda858d6a21d75cd894c230f1ea70dd484ea4fd8b5e910c9721607a6 not found: ID does not exist" containerID="810170a8bda858d6a21d75cd894c230f1ea70dd484ea4fd8b5e910c9721607a6" Nov 25 09:09:14 crc kubenswrapper[5043]: I1125 09:09:14.254047 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"810170a8bda858d6a21d75cd894c230f1ea70dd484ea4fd8b5e910c9721607a6"} err="failed to get container status 
\"810170a8bda858d6a21d75cd894c230f1ea70dd484ea4fd8b5e910c9721607a6\": rpc error: code = NotFound desc = could not find container \"810170a8bda858d6a21d75cd894c230f1ea70dd484ea4fd8b5e910c9721607a6\": container with ID starting with 810170a8bda858d6a21d75cd894c230f1ea70dd484ea4fd8b5e910c9721607a6 not found: ID does not exist" Nov 25 09:09:14 crc kubenswrapper[5043]: I1125 09:09:14.254065 5043 scope.go:117] "RemoveContainer" containerID="b9350638d78e284eaa6af0dcc039bbae954c0c7fa707e6b616460fff1ae73a4f" Nov 25 09:09:14 crc kubenswrapper[5043]: E1125 09:09:14.254332 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9350638d78e284eaa6af0dcc039bbae954c0c7fa707e6b616460fff1ae73a4f\": container with ID starting with b9350638d78e284eaa6af0dcc039bbae954c0c7fa707e6b616460fff1ae73a4f not found: ID does not exist" containerID="b9350638d78e284eaa6af0dcc039bbae954c0c7fa707e6b616460fff1ae73a4f" Nov 25 09:09:14 crc kubenswrapper[5043]: I1125 09:09:14.254363 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9350638d78e284eaa6af0dcc039bbae954c0c7fa707e6b616460fff1ae73a4f"} err="failed to get container status \"b9350638d78e284eaa6af0dcc039bbae954c0c7fa707e6b616460fff1ae73a4f\": rpc error: code = NotFound desc = could not find container \"b9350638d78e284eaa6af0dcc039bbae954c0c7fa707e6b616460fff1ae73a4f\": container with ID starting with b9350638d78e284eaa6af0dcc039bbae954c0c7fa707e6b616460fff1ae73a4f not found: ID does not exist" Nov 25 09:09:14 crc kubenswrapper[5043]: I1125 09:09:14.976171 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5681979d-e72e-4551-9f58-0cd780611873" path="/var/lib/kubelet/pods/5681979d-e72e-4551-9f58-0cd780611873/volumes" Nov 25 09:09:15 crc kubenswrapper[5043]: I1125 09:09:15.588926 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r8bkv"] Nov 25 
09:09:16 crc kubenswrapper[5043]: I1125 09:09:16.175178 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r8bkv" podUID="5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8" containerName="registry-server" containerID="cri-o://07ca5aeee91a3ce169851b234b02c91c22f89786936e9b3ab611289673e3d6bb" gracePeriod=2 Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.029589 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r8bkv" Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.081259 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpsdj\" (UniqueName: \"kubernetes.io/projected/5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8-kube-api-access-hpsdj\") pod \"5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8\" (UID: \"5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8\") " Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.081402 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8-catalog-content\") pod \"5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8\" (UID: \"5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8\") " Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.081465 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8-utilities\") pod \"5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8\" (UID: \"5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8\") " Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.082428 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8-utilities" (OuterVolumeSpecName: "utilities") pod "5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8" (UID: "5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.090851 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8-kube-api-access-hpsdj" (OuterVolumeSpecName: "kube-api-access-hpsdj") pod "5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8" (UID: "5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8"). InnerVolumeSpecName "kube-api-access-hpsdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.184158 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.184202 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpsdj\" (UniqueName: \"kubernetes.io/projected/5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8-kube-api-access-hpsdj\") on node \"crc\" DevicePath \"\"" Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.185093 5043 generic.go:334] "Generic (PLEG): container finished" podID="5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8" containerID="07ca5aeee91a3ce169851b234b02c91c22f89786936e9b3ab611289673e3d6bb" exitCode=0 Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.185130 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r8bkv" event={"ID":"5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8","Type":"ContainerDied","Data":"07ca5aeee91a3ce169851b234b02c91c22f89786936e9b3ab611289673e3d6bb"} Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.185156 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r8bkv" event={"ID":"5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8","Type":"ContainerDied","Data":"cd971e62077d1abf4ddb85d639625289f722784c422a0dc5782ec5cb89f45525"} Nov 25 09:09:17 crc kubenswrapper[5043]: 
I1125 09:09:17.185173 5043 scope.go:117] "RemoveContainer" containerID="07ca5aeee91a3ce169851b234b02c91c22f89786936e9b3ab611289673e3d6bb" Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.185197 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r8bkv" Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.205894 5043 scope.go:117] "RemoveContainer" containerID="6bb2b2119ed4ede36c00dc3bf78f9d5220a7623107e39de5d942172b9972843f" Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.228312 5043 scope.go:117] "RemoveContainer" containerID="1c79bfae1e214689123522394a99813eb4d5af6c45b10523245758b328eda093" Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.272600 5043 scope.go:117] "RemoveContainer" containerID="07ca5aeee91a3ce169851b234b02c91c22f89786936e9b3ab611289673e3d6bb" Nov 25 09:09:17 crc kubenswrapper[5043]: E1125 09:09:17.272974 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07ca5aeee91a3ce169851b234b02c91c22f89786936e9b3ab611289673e3d6bb\": container with ID starting with 07ca5aeee91a3ce169851b234b02c91c22f89786936e9b3ab611289673e3d6bb not found: ID does not exist" containerID="07ca5aeee91a3ce169851b234b02c91c22f89786936e9b3ab611289673e3d6bb" Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.273006 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07ca5aeee91a3ce169851b234b02c91c22f89786936e9b3ab611289673e3d6bb"} err="failed to get container status \"07ca5aeee91a3ce169851b234b02c91c22f89786936e9b3ab611289673e3d6bb\": rpc error: code = NotFound desc = could not find container \"07ca5aeee91a3ce169851b234b02c91c22f89786936e9b3ab611289673e3d6bb\": container with ID starting with 07ca5aeee91a3ce169851b234b02c91c22f89786936e9b3ab611289673e3d6bb not found: ID does not exist" Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.273030 5043 
scope.go:117] "RemoveContainer" containerID="6bb2b2119ed4ede36c00dc3bf78f9d5220a7623107e39de5d942172b9972843f" Nov 25 09:09:17 crc kubenswrapper[5043]: E1125 09:09:17.273380 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bb2b2119ed4ede36c00dc3bf78f9d5220a7623107e39de5d942172b9972843f\": container with ID starting with 6bb2b2119ed4ede36c00dc3bf78f9d5220a7623107e39de5d942172b9972843f not found: ID does not exist" containerID="6bb2b2119ed4ede36c00dc3bf78f9d5220a7623107e39de5d942172b9972843f" Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.273423 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bb2b2119ed4ede36c00dc3bf78f9d5220a7623107e39de5d942172b9972843f"} err="failed to get container status \"6bb2b2119ed4ede36c00dc3bf78f9d5220a7623107e39de5d942172b9972843f\": rpc error: code = NotFound desc = could not find container \"6bb2b2119ed4ede36c00dc3bf78f9d5220a7623107e39de5d942172b9972843f\": container with ID starting with 6bb2b2119ed4ede36c00dc3bf78f9d5220a7623107e39de5d942172b9972843f not found: ID does not exist" Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.273455 5043 scope.go:117] "RemoveContainer" containerID="1c79bfae1e214689123522394a99813eb4d5af6c45b10523245758b328eda093" Nov 25 09:09:17 crc kubenswrapper[5043]: E1125 09:09:17.273812 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c79bfae1e214689123522394a99813eb4d5af6c45b10523245758b328eda093\": container with ID starting with 1c79bfae1e214689123522394a99813eb4d5af6c45b10523245758b328eda093 not found: ID does not exist" containerID="1c79bfae1e214689123522394a99813eb4d5af6c45b10523245758b328eda093" Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.273859 5043 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1c79bfae1e214689123522394a99813eb4d5af6c45b10523245758b328eda093"} err="failed to get container status \"1c79bfae1e214689123522394a99813eb4d5af6c45b10523245758b328eda093\": rpc error: code = NotFound desc = could not find container \"1c79bfae1e214689123522394a99813eb4d5af6c45b10523245758b328eda093\": container with ID starting with 1c79bfae1e214689123522394a99813eb4d5af6c45b10523245758b328eda093 not found: ID does not exist" Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.276389 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.276443 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.476380 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8" (UID: "5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.491110 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.533977 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r8bkv"] Nov 25 09:09:17 crc kubenswrapper[5043]: I1125 09:09:17.544704 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r8bkv"] Nov 25 09:09:18 crc kubenswrapper[5043]: I1125 09:09:18.974102 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8" path="/var/lib/kubelet/pods/5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8/volumes" Nov 25 09:09:47 crc kubenswrapper[5043]: I1125 09:09:47.276553 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:09:47 crc kubenswrapper[5043]: I1125 09:09:47.277433 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:10:17 crc kubenswrapper[5043]: I1125 09:10:17.276108 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Nov 25 09:10:17 crc kubenswrapper[5043]: I1125 09:10:17.276532 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:10:17 crc kubenswrapper[5043]: I1125 09:10:17.276585 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 09:10:17 crc kubenswrapper[5043]: I1125 09:10:17.277281 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1479e40cdacc7fddfe42763510ee2eaeac96105f906a9154857042c4a011fdea"} pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 09:10:17 crc kubenswrapper[5043]: I1125 09:10:17.277337 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://1479e40cdacc7fddfe42763510ee2eaeac96105f906a9154857042c4a011fdea" gracePeriod=600 Nov 25 09:10:17 crc kubenswrapper[5043]: I1125 09:10:17.828464 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="1479e40cdacc7fddfe42763510ee2eaeac96105f906a9154857042c4a011fdea" exitCode=0 Nov 25 09:10:17 crc kubenswrapper[5043]: I1125 09:10:17.828548 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" 
event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"1479e40cdacc7fddfe42763510ee2eaeac96105f906a9154857042c4a011fdea"} Nov 25 09:10:17 crc kubenswrapper[5043]: I1125 09:10:17.828901 5043 scope.go:117] "RemoveContainer" containerID="7428329feb9ef075d4dcdd09558ad0fa43dfe64cd93d4268043591869d78b2be" Nov 25 09:10:18 crc kubenswrapper[5043]: I1125 09:10:18.843073 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48"} Nov 25 09:12:47 crc kubenswrapper[5043]: I1125 09:12:47.276549 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:12:47 crc kubenswrapper[5043]: I1125 09:12:47.277675 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:13:09 crc kubenswrapper[5043]: I1125 09:13:09.072348 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vvqcd"] Nov 25 09:13:09 crc kubenswrapper[5043]: E1125 09:13:09.073242 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8" containerName="extract-content" Nov 25 09:13:09 crc kubenswrapper[5043]: I1125 09:13:09.073255 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8" containerName="extract-content" Nov 25 09:13:09 crc 
kubenswrapper[5043]: E1125 09:13:09.073280 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8" containerName="extract-utilities" Nov 25 09:13:09 crc kubenswrapper[5043]: I1125 09:13:09.073286 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8" containerName="extract-utilities" Nov 25 09:13:09 crc kubenswrapper[5043]: E1125 09:13:09.073301 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5681979d-e72e-4551-9f58-0cd780611873" containerName="extract-utilities" Nov 25 09:13:09 crc kubenswrapper[5043]: I1125 09:13:09.073308 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="5681979d-e72e-4551-9f58-0cd780611873" containerName="extract-utilities" Nov 25 09:13:09 crc kubenswrapper[5043]: E1125 09:13:09.073321 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5681979d-e72e-4551-9f58-0cd780611873" containerName="registry-server" Nov 25 09:13:09 crc kubenswrapper[5043]: I1125 09:13:09.073326 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="5681979d-e72e-4551-9f58-0cd780611873" containerName="registry-server" Nov 25 09:13:09 crc kubenswrapper[5043]: E1125 09:13:09.073336 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5681979d-e72e-4551-9f58-0cd780611873" containerName="extract-content" Nov 25 09:13:09 crc kubenswrapper[5043]: I1125 09:13:09.073342 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="5681979d-e72e-4551-9f58-0cd780611873" containerName="extract-content" Nov 25 09:13:09 crc kubenswrapper[5043]: E1125 09:13:09.073353 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8" containerName="registry-server" Nov 25 09:13:09 crc kubenswrapper[5043]: I1125 09:13:09.073359 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8" containerName="registry-server" Nov 25 09:13:09 crc 
kubenswrapper[5043]: I1125 09:13:09.073524 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="5681979d-e72e-4551-9f58-0cd780611873" containerName="registry-server" Nov 25 09:13:09 crc kubenswrapper[5043]: I1125 09:13:09.073543 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ca5f1b7-ec8a-4fec-b99d-86230fc4e4a8" containerName="registry-server" Nov 25 09:13:09 crc kubenswrapper[5043]: I1125 09:13:09.074922 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vvqcd" Nov 25 09:13:09 crc kubenswrapper[5043]: I1125 09:13:09.099266 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vvqcd"] Nov 25 09:13:09 crc kubenswrapper[5043]: I1125 09:13:09.159739 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5572a51a-fe59-4bca-98bf-fc482e887979-utilities\") pod \"certified-operators-vvqcd\" (UID: \"5572a51a-fe59-4bca-98bf-fc482e887979\") " pod="openshift-marketplace/certified-operators-vvqcd" Nov 25 09:13:09 crc kubenswrapper[5043]: I1125 09:13:09.159859 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5572a51a-fe59-4bca-98bf-fc482e887979-catalog-content\") pod \"certified-operators-vvqcd\" (UID: \"5572a51a-fe59-4bca-98bf-fc482e887979\") " pod="openshift-marketplace/certified-operators-vvqcd" Nov 25 09:13:09 crc kubenswrapper[5043]: I1125 09:13:09.159919 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bpx8\" (UniqueName: \"kubernetes.io/projected/5572a51a-fe59-4bca-98bf-fc482e887979-kube-api-access-2bpx8\") pod \"certified-operators-vvqcd\" (UID: \"5572a51a-fe59-4bca-98bf-fc482e887979\") " pod="openshift-marketplace/certified-operators-vvqcd" 
Nov 25 09:13:09 crc kubenswrapper[5043]: I1125 09:13:09.262678 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5572a51a-fe59-4bca-98bf-fc482e887979-utilities\") pod \"certified-operators-vvqcd\" (UID: \"5572a51a-fe59-4bca-98bf-fc482e887979\") " pod="openshift-marketplace/certified-operators-vvqcd" Nov 25 09:13:09 crc kubenswrapper[5043]: I1125 09:13:09.262798 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5572a51a-fe59-4bca-98bf-fc482e887979-catalog-content\") pod \"certified-operators-vvqcd\" (UID: \"5572a51a-fe59-4bca-98bf-fc482e887979\") " pod="openshift-marketplace/certified-operators-vvqcd" Nov 25 09:13:09 crc kubenswrapper[5043]: I1125 09:13:09.262858 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bpx8\" (UniqueName: \"kubernetes.io/projected/5572a51a-fe59-4bca-98bf-fc482e887979-kube-api-access-2bpx8\") pod \"certified-operators-vvqcd\" (UID: \"5572a51a-fe59-4bca-98bf-fc482e887979\") " pod="openshift-marketplace/certified-operators-vvqcd" Nov 25 09:13:09 crc kubenswrapper[5043]: I1125 09:13:09.263590 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5572a51a-fe59-4bca-98bf-fc482e887979-catalog-content\") pod \"certified-operators-vvqcd\" (UID: \"5572a51a-fe59-4bca-98bf-fc482e887979\") " pod="openshift-marketplace/certified-operators-vvqcd" Nov 25 09:13:09 crc kubenswrapper[5043]: I1125 09:13:09.263793 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5572a51a-fe59-4bca-98bf-fc482e887979-utilities\") pod \"certified-operators-vvqcd\" (UID: \"5572a51a-fe59-4bca-98bf-fc482e887979\") " pod="openshift-marketplace/certified-operators-vvqcd" Nov 25 09:13:09 crc kubenswrapper[5043]: 
I1125 09:13:09.283988 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bpx8\" (UniqueName: \"kubernetes.io/projected/5572a51a-fe59-4bca-98bf-fc482e887979-kube-api-access-2bpx8\") pod \"certified-operators-vvqcd\" (UID: \"5572a51a-fe59-4bca-98bf-fc482e887979\") " pod="openshift-marketplace/certified-operators-vvqcd" Nov 25 09:13:09 crc kubenswrapper[5043]: I1125 09:13:09.402356 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vvqcd" Nov 25 09:13:09 crc kubenswrapper[5043]: I1125 09:13:09.922443 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vvqcd"] Nov 25 09:13:10 crc kubenswrapper[5043]: I1125 09:13:10.537921 5043 generic.go:334] "Generic (PLEG): container finished" podID="5572a51a-fe59-4bca-98bf-fc482e887979" containerID="3ac7b7e2eb392740e1a2d396975f5e67fd9b47876446cefd80ba489a5c374d7f" exitCode=0 Nov 25 09:13:10 crc kubenswrapper[5043]: I1125 09:13:10.537966 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvqcd" event={"ID":"5572a51a-fe59-4bca-98bf-fc482e887979","Type":"ContainerDied","Data":"3ac7b7e2eb392740e1a2d396975f5e67fd9b47876446cefd80ba489a5c374d7f"} Nov 25 09:13:10 crc kubenswrapper[5043]: I1125 09:13:10.537991 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvqcd" event={"ID":"5572a51a-fe59-4bca-98bf-fc482e887979","Type":"ContainerStarted","Data":"7b0a305fb11c383ed90764cd6217b9ffc0b979574d042eb271cd57dea1091777"} Nov 25 09:13:10 crc kubenswrapper[5043]: I1125 09:13:10.540029 5043 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 09:13:12 crc kubenswrapper[5043]: I1125 09:13:12.555043 5043 generic.go:334] "Generic (PLEG): container finished" podID="5572a51a-fe59-4bca-98bf-fc482e887979" 
containerID="db672c99dfe0cceadd622115a0cae5410a7fb9e79cb703e373aa0e8677520011" exitCode=0 Nov 25 09:13:12 crc kubenswrapper[5043]: I1125 09:13:12.555116 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvqcd" event={"ID":"5572a51a-fe59-4bca-98bf-fc482e887979","Type":"ContainerDied","Data":"db672c99dfe0cceadd622115a0cae5410a7fb9e79cb703e373aa0e8677520011"} Nov 25 09:13:14 crc kubenswrapper[5043]: I1125 09:13:14.573671 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvqcd" event={"ID":"5572a51a-fe59-4bca-98bf-fc482e887979","Type":"ContainerStarted","Data":"07c240ae9416dc643f30eedda762471f7ac1c510b592433ee12ab16b9724cacd"} Nov 25 09:13:14 crc kubenswrapper[5043]: I1125 09:13:14.601287 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vvqcd" podStartSLOduration=2.854205101 podStartE2EDuration="5.601261035s" podCreationTimestamp="2025-11-25 09:13:09 +0000 UTC" firstStartedPulling="2025-11-25 09:13:10.53982011 +0000 UTC m=+7054.708015831" lastFinishedPulling="2025-11-25 09:13:13.286876034 +0000 UTC m=+7057.455071765" observedRunningTime="2025-11-25 09:13:14.587567005 +0000 UTC m=+7058.755762726" watchObservedRunningTime="2025-11-25 09:13:14.601261035 +0000 UTC m=+7058.769456756" Nov 25 09:13:17 crc kubenswrapper[5043]: I1125 09:13:17.275742 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:13:17 crc kubenswrapper[5043]: I1125 09:13:17.276011 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:13:19 crc kubenswrapper[5043]: I1125 09:13:19.601642 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vvqcd" Nov 25 09:13:19 crc kubenswrapper[5043]: I1125 09:13:19.602356 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vvqcd" Nov 25 09:13:19 crc kubenswrapper[5043]: I1125 09:13:19.684554 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vvqcd" Nov 25 09:13:19 crc kubenswrapper[5043]: I1125 09:13:19.737928 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vvqcd" Nov 25 09:13:19 crc kubenswrapper[5043]: I1125 09:13:19.929705 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vvqcd"] Nov 25 09:13:21 crc kubenswrapper[5043]: I1125 09:13:21.665305 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vvqcd" podUID="5572a51a-fe59-4bca-98bf-fc482e887979" containerName="registry-server" containerID="cri-o://07c240ae9416dc643f30eedda762471f7ac1c510b592433ee12ab16b9724cacd" gracePeriod=2 Nov 25 09:13:22 crc kubenswrapper[5043]: I1125 09:13:22.231501 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vvqcd" Nov 25 09:13:22 crc kubenswrapper[5043]: I1125 09:13:22.357926 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5572a51a-fe59-4bca-98bf-fc482e887979-catalog-content\") pod \"5572a51a-fe59-4bca-98bf-fc482e887979\" (UID: \"5572a51a-fe59-4bca-98bf-fc482e887979\") " Nov 25 09:13:22 crc kubenswrapper[5043]: I1125 09:13:22.358072 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5572a51a-fe59-4bca-98bf-fc482e887979-utilities\") pod \"5572a51a-fe59-4bca-98bf-fc482e887979\" (UID: \"5572a51a-fe59-4bca-98bf-fc482e887979\") " Nov 25 09:13:22 crc kubenswrapper[5043]: I1125 09:13:22.358125 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bpx8\" (UniqueName: \"kubernetes.io/projected/5572a51a-fe59-4bca-98bf-fc482e887979-kube-api-access-2bpx8\") pod \"5572a51a-fe59-4bca-98bf-fc482e887979\" (UID: \"5572a51a-fe59-4bca-98bf-fc482e887979\") " Nov 25 09:13:22 crc kubenswrapper[5043]: I1125 09:13:22.361227 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5572a51a-fe59-4bca-98bf-fc482e887979-utilities" (OuterVolumeSpecName: "utilities") pod "5572a51a-fe59-4bca-98bf-fc482e887979" (UID: "5572a51a-fe59-4bca-98bf-fc482e887979"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:13:22 crc kubenswrapper[5043]: I1125 09:13:22.364305 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5572a51a-fe59-4bca-98bf-fc482e887979-kube-api-access-2bpx8" (OuterVolumeSpecName: "kube-api-access-2bpx8") pod "5572a51a-fe59-4bca-98bf-fc482e887979" (UID: "5572a51a-fe59-4bca-98bf-fc482e887979"). InnerVolumeSpecName "kube-api-access-2bpx8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:13:22 crc kubenswrapper[5043]: I1125 09:13:22.460199 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bpx8\" (UniqueName: \"kubernetes.io/projected/5572a51a-fe59-4bca-98bf-fc482e887979-kube-api-access-2bpx8\") on node \"crc\" DevicePath \"\"" Nov 25 09:13:22 crc kubenswrapper[5043]: I1125 09:13:22.460241 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5572a51a-fe59-4bca-98bf-fc482e887979-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:13:22 crc kubenswrapper[5043]: I1125 09:13:22.676707 5043 generic.go:334] "Generic (PLEG): container finished" podID="5572a51a-fe59-4bca-98bf-fc482e887979" containerID="07c240ae9416dc643f30eedda762471f7ac1c510b592433ee12ab16b9724cacd" exitCode=0 Nov 25 09:13:22 crc kubenswrapper[5043]: I1125 09:13:22.676801 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vvqcd" Nov 25 09:13:22 crc kubenswrapper[5043]: I1125 09:13:22.678747 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvqcd" event={"ID":"5572a51a-fe59-4bca-98bf-fc482e887979","Type":"ContainerDied","Data":"07c240ae9416dc643f30eedda762471f7ac1c510b592433ee12ab16b9724cacd"} Nov 25 09:13:22 crc kubenswrapper[5043]: I1125 09:13:22.678807 5043 scope.go:117] "RemoveContainer" containerID="07c240ae9416dc643f30eedda762471f7ac1c510b592433ee12ab16b9724cacd" Nov 25 09:13:22 crc kubenswrapper[5043]: I1125 09:13:22.678953 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvqcd" event={"ID":"5572a51a-fe59-4bca-98bf-fc482e887979","Type":"ContainerDied","Data":"7b0a305fb11c383ed90764cd6217b9ffc0b979574d042eb271cd57dea1091777"} Nov 25 09:13:22 crc kubenswrapper[5043]: I1125 09:13:22.703185 5043 scope.go:117] "RemoveContainer" 
containerID="db672c99dfe0cceadd622115a0cae5410a7fb9e79cb703e373aa0e8677520011" Nov 25 09:13:22 crc kubenswrapper[5043]: I1125 09:13:22.730255 5043 scope.go:117] "RemoveContainer" containerID="3ac7b7e2eb392740e1a2d396975f5e67fd9b47876446cefd80ba489a5c374d7f" Nov 25 09:13:22 crc kubenswrapper[5043]: I1125 09:13:22.803365 5043 scope.go:117] "RemoveContainer" containerID="07c240ae9416dc643f30eedda762471f7ac1c510b592433ee12ab16b9724cacd" Nov 25 09:13:22 crc kubenswrapper[5043]: E1125 09:13:22.804117 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07c240ae9416dc643f30eedda762471f7ac1c510b592433ee12ab16b9724cacd\": container with ID starting with 07c240ae9416dc643f30eedda762471f7ac1c510b592433ee12ab16b9724cacd not found: ID does not exist" containerID="07c240ae9416dc643f30eedda762471f7ac1c510b592433ee12ab16b9724cacd" Nov 25 09:13:22 crc kubenswrapper[5043]: I1125 09:13:22.804186 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07c240ae9416dc643f30eedda762471f7ac1c510b592433ee12ab16b9724cacd"} err="failed to get container status \"07c240ae9416dc643f30eedda762471f7ac1c510b592433ee12ab16b9724cacd\": rpc error: code = NotFound desc = could not find container \"07c240ae9416dc643f30eedda762471f7ac1c510b592433ee12ab16b9724cacd\": container with ID starting with 07c240ae9416dc643f30eedda762471f7ac1c510b592433ee12ab16b9724cacd not found: ID does not exist" Nov 25 09:13:22 crc kubenswrapper[5043]: I1125 09:13:22.804421 5043 scope.go:117] "RemoveContainer" containerID="db672c99dfe0cceadd622115a0cae5410a7fb9e79cb703e373aa0e8677520011" Nov 25 09:13:22 crc kubenswrapper[5043]: E1125 09:13:22.804919 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db672c99dfe0cceadd622115a0cae5410a7fb9e79cb703e373aa0e8677520011\": container with ID starting with 
db672c99dfe0cceadd622115a0cae5410a7fb9e79cb703e373aa0e8677520011 not found: ID does not exist" containerID="db672c99dfe0cceadd622115a0cae5410a7fb9e79cb703e373aa0e8677520011" Nov 25 09:13:22 crc kubenswrapper[5043]: I1125 09:13:22.805029 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db672c99dfe0cceadd622115a0cae5410a7fb9e79cb703e373aa0e8677520011"} err="failed to get container status \"db672c99dfe0cceadd622115a0cae5410a7fb9e79cb703e373aa0e8677520011\": rpc error: code = NotFound desc = could not find container \"db672c99dfe0cceadd622115a0cae5410a7fb9e79cb703e373aa0e8677520011\": container with ID starting with db672c99dfe0cceadd622115a0cae5410a7fb9e79cb703e373aa0e8677520011 not found: ID does not exist" Nov 25 09:13:22 crc kubenswrapper[5043]: I1125 09:13:22.805083 5043 scope.go:117] "RemoveContainer" containerID="3ac7b7e2eb392740e1a2d396975f5e67fd9b47876446cefd80ba489a5c374d7f" Nov 25 09:13:22 crc kubenswrapper[5043]: E1125 09:13:22.805665 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ac7b7e2eb392740e1a2d396975f5e67fd9b47876446cefd80ba489a5c374d7f\": container with ID starting with 3ac7b7e2eb392740e1a2d396975f5e67fd9b47876446cefd80ba489a5c374d7f not found: ID does not exist" containerID="3ac7b7e2eb392740e1a2d396975f5e67fd9b47876446cefd80ba489a5c374d7f" Nov 25 09:13:22 crc kubenswrapper[5043]: I1125 09:13:22.805712 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ac7b7e2eb392740e1a2d396975f5e67fd9b47876446cefd80ba489a5c374d7f"} err="failed to get container status \"3ac7b7e2eb392740e1a2d396975f5e67fd9b47876446cefd80ba489a5c374d7f\": rpc error: code = NotFound desc = could not find container \"3ac7b7e2eb392740e1a2d396975f5e67fd9b47876446cefd80ba489a5c374d7f\": container with ID starting with 3ac7b7e2eb392740e1a2d396975f5e67fd9b47876446cefd80ba489a5c374d7f not found: ID does not 
exist" Nov 25 09:13:22 crc kubenswrapper[5043]: I1125 09:13:22.814307 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5572a51a-fe59-4bca-98bf-fc482e887979-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5572a51a-fe59-4bca-98bf-fc482e887979" (UID: "5572a51a-fe59-4bca-98bf-fc482e887979"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:13:22 crc kubenswrapper[5043]: I1125 09:13:22.868858 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5572a51a-fe59-4bca-98bf-fc482e887979-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:13:23 crc kubenswrapper[5043]: I1125 09:13:23.012114 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vvqcd"] Nov 25 09:13:23 crc kubenswrapper[5043]: I1125 09:13:23.021721 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vvqcd"] Nov 25 09:13:24 crc kubenswrapper[5043]: I1125 09:13:24.974501 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5572a51a-fe59-4bca-98bf-fc482e887979" path="/var/lib/kubelet/pods/5572a51a-fe59-4bca-98bf-fc482e887979/volumes" Nov 25 09:13:47 crc kubenswrapper[5043]: I1125 09:13:47.276583 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:13:47 crc kubenswrapper[5043]: I1125 09:13:47.277226 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:13:47 crc kubenswrapper[5043]: I1125 09:13:47.277302 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 09:13:47 crc kubenswrapper[5043]: I1125 09:13:47.278266 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48"} pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 09:13:47 crc kubenswrapper[5043]: I1125 09:13:47.278360 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" gracePeriod=600 Nov 25 09:13:47 crc kubenswrapper[5043]: I1125 09:13:47.918985 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" exitCode=0 Nov 25 09:13:47 crc kubenswrapper[5043]: I1125 09:13:47.919089 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48"} Nov 25 09:13:47 crc kubenswrapper[5043]: I1125 09:13:47.919486 5043 scope.go:117] "RemoveContainer" containerID="1479e40cdacc7fddfe42763510ee2eaeac96105f906a9154857042c4a011fdea" Nov 25 09:13:47 crc kubenswrapper[5043]: E1125 09:13:47.919735 5043 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:13:48 crc kubenswrapper[5043]: I1125 09:13:48.931891 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:13:48 crc kubenswrapper[5043]: E1125 09:13:48.932635 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:14:02 crc kubenswrapper[5043]: I1125 09:14:02.962888 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:14:02 crc kubenswrapper[5043]: E1125 09:14:02.963989 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:14:13 crc kubenswrapper[5043]: I1125 09:14:13.962735 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:14:13 crc kubenswrapper[5043]: E1125 09:14:13.963371 5043 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:14:24 crc kubenswrapper[5043]: I1125 09:14:24.963582 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:14:24 crc kubenswrapper[5043]: E1125 09:14:24.965179 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:14:39 crc kubenswrapper[5043]: I1125 09:14:39.963443 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:14:39 crc kubenswrapper[5043]: E1125 09:14:39.964780 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:14:53 crc kubenswrapper[5043]: I1125 09:14:53.962672 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:14:53 crc kubenswrapper[5043]: E1125 
09:14:53.963384 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:15:00 crc kubenswrapper[5043]: I1125 09:15:00.152025 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401035-t97h9"] Nov 25 09:15:00 crc kubenswrapper[5043]: E1125 09:15:00.153137 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5572a51a-fe59-4bca-98bf-fc482e887979" containerName="extract-utilities" Nov 25 09:15:00 crc kubenswrapper[5043]: I1125 09:15:00.153158 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="5572a51a-fe59-4bca-98bf-fc482e887979" containerName="extract-utilities" Nov 25 09:15:00 crc kubenswrapper[5043]: E1125 09:15:00.153190 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5572a51a-fe59-4bca-98bf-fc482e887979" containerName="extract-content" Nov 25 09:15:00 crc kubenswrapper[5043]: I1125 09:15:00.153200 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="5572a51a-fe59-4bca-98bf-fc482e887979" containerName="extract-content" Nov 25 09:15:00 crc kubenswrapper[5043]: E1125 09:15:00.153225 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5572a51a-fe59-4bca-98bf-fc482e887979" containerName="registry-server" Nov 25 09:15:00 crc kubenswrapper[5043]: I1125 09:15:00.153234 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="5572a51a-fe59-4bca-98bf-fc482e887979" containerName="registry-server" Nov 25 09:15:00 crc kubenswrapper[5043]: I1125 09:15:00.153483 5043 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5572a51a-fe59-4bca-98bf-fc482e887979" containerName="registry-server" Nov 25 09:15:00 crc kubenswrapper[5043]: I1125 09:15:00.154379 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-t97h9" Nov 25 09:15:00 crc kubenswrapper[5043]: I1125 09:15:00.158552 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 09:15:00 crc kubenswrapper[5043]: I1125 09:15:00.158917 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 09:15:00 crc kubenswrapper[5043]: I1125 09:15:00.170336 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401035-t97h9"] Nov 25 09:15:00 crc kubenswrapper[5043]: I1125 09:15:00.352971 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a7de80a-3c9f-4122-b4fc-8ef88d921fa8-secret-volume\") pod \"collect-profiles-29401035-t97h9\" (UID: \"2a7de80a-3c9f-4122-b4fc-8ef88d921fa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-t97h9" Nov 25 09:15:00 crc kubenswrapper[5043]: I1125 09:15:00.353165 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44z59\" (UniqueName: \"kubernetes.io/projected/2a7de80a-3c9f-4122-b4fc-8ef88d921fa8-kube-api-access-44z59\") pod \"collect-profiles-29401035-t97h9\" (UID: \"2a7de80a-3c9f-4122-b4fc-8ef88d921fa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-t97h9" Nov 25 09:15:00 crc kubenswrapper[5043]: I1125 09:15:00.353199 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/2a7de80a-3c9f-4122-b4fc-8ef88d921fa8-config-volume\") pod \"collect-profiles-29401035-t97h9\" (UID: \"2a7de80a-3c9f-4122-b4fc-8ef88d921fa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-t97h9" Nov 25 09:15:00 crc kubenswrapper[5043]: I1125 09:15:00.455007 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44z59\" (UniqueName: \"kubernetes.io/projected/2a7de80a-3c9f-4122-b4fc-8ef88d921fa8-kube-api-access-44z59\") pod \"collect-profiles-29401035-t97h9\" (UID: \"2a7de80a-3c9f-4122-b4fc-8ef88d921fa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-t97h9" Nov 25 09:15:00 crc kubenswrapper[5043]: I1125 09:15:00.455076 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a7de80a-3c9f-4122-b4fc-8ef88d921fa8-config-volume\") pod \"collect-profiles-29401035-t97h9\" (UID: \"2a7de80a-3c9f-4122-b4fc-8ef88d921fa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-t97h9" Nov 25 09:15:00 crc kubenswrapper[5043]: I1125 09:15:00.455153 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a7de80a-3c9f-4122-b4fc-8ef88d921fa8-secret-volume\") pod \"collect-profiles-29401035-t97h9\" (UID: \"2a7de80a-3c9f-4122-b4fc-8ef88d921fa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-t97h9" Nov 25 09:15:00 crc kubenswrapper[5043]: I1125 09:15:00.456179 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a7de80a-3c9f-4122-b4fc-8ef88d921fa8-config-volume\") pod \"collect-profiles-29401035-t97h9\" (UID: \"2a7de80a-3c9f-4122-b4fc-8ef88d921fa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-t97h9" Nov 25 09:15:00 crc kubenswrapper[5043]: I1125 09:15:00.470041 5043 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a7de80a-3c9f-4122-b4fc-8ef88d921fa8-secret-volume\") pod \"collect-profiles-29401035-t97h9\" (UID: \"2a7de80a-3c9f-4122-b4fc-8ef88d921fa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-t97h9" Nov 25 09:15:00 crc kubenswrapper[5043]: I1125 09:15:00.472382 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44z59\" (UniqueName: \"kubernetes.io/projected/2a7de80a-3c9f-4122-b4fc-8ef88d921fa8-kube-api-access-44z59\") pod \"collect-profiles-29401035-t97h9\" (UID: \"2a7de80a-3c9f-4122-b4fc-8ef88d921fa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-t97h9" Nov 25 09:15:00 crc kubenswrapper[5043]: I1125 09:15:00.480371 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-t97h9" Nov 25 09:15:00 crc kubenswrapper[5043]: I1125 09:15:00.926418 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401035-t97h9"] Nov 25 09:15:01 crc kubenswrapper[5043]: I1125 09:15:01.642435 5043 generic.go:334] "Generic (PLEG): container finished" podID="2a7de80a-3c9f-4122-b4fc-8ef88d921fa8" containerID="d4b2157c4019b718249751677203c5f128d6373ede2e8bc4025d671b88c85146" exitCode=0 Nov 25 09:15:01 crc kubenswrapper[5043]: I1125 09:15:01.642550 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-t97h9" event={"ID":"2a7de80a-3c9f-4122-b4fc-8ef88d921fa8","Type":"ContainerDied","Data":"d4b2157c4019b718249751677203c5f128d6373ede2e8bc4025d671b88c85146"} Nov 25 09:15:01 crc kubenswrapper[5043]: I1125 09:15:01.642761 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-t97h9" 
event={"ID":"2a7de80a-3c9f-4122-b4fc-8ef88d921fa8","Type":"ContainerStarted","Data":"7688a1b792582666b4dfd4f6ac569b527e13b68f68c928f6e21ed0abd6fd656f"} Nov 25 09:15:03 crc kubenswrapper[5043]: I1125 09:15:03.071381 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-t97h9" Nov 25 09:15:03 crc kubenswrapper[5043]: I1125 09:15:03.206927 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a7de80a-3c9f-4122-b4fc-8ef88d921fa8-config-volume\") pod \"2a7de80a-3c9f-4122-b4fc-8ef88d921fa8\" (UID: \"2a7de80a-3c9f-4122-b4fc-8ef88d921fa8\") " Nov 25 09:15:03 crc kubenswrapper[5043]: I1125 09:15:03.207080 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a7de80a-3c9f-4122-b4fc-8ef88d921fa8-secret-volume\") pod \"2a7de80a-3c9f-4122-b4fc-8ef88d921fa8\" (UID: \"2a7de80a-3c9f-4122-b4fc-8ef88d921fa8\") " Nov 25 09:15:03 crc kubenswrapper[5043]: I1125 09:15:03.207144 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44z59\" (UniqueName: \"kubernetes.io/projected/2a7de80a-3c9f-4122-b4fc-8ef88d921fa8-kube-api-access-44z59\") pod \"2a7de80a-3c9f-4122-b4fc-8ef88d921fa8\" (UID: \"2a7de80a-3c9f-4122-b4fc-8ef88d921fa8\") " Nov 25 09:15:03 crc kubenswrapper[5043]: I1125 09:15:03.207468 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a7de80a-3c9f-4122-b4fc-8ef88d921fa8-config-volume" (OuterVolumeSpecName: "config-volume") pod "2a7de80a-3c9f-4122-b4fc-8ef88d921fa8" (UID: "2a7de80a-3c9f-4122-b4fc-8ef88d921fa8"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:15:03 crc kubenswrapper[5043]: I1125 09:15:03.207978 5043 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a7de80a-3c9f-4122-b4fc-8ef88d921fa8-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 09:15:03 crc kubenswrapper[5043]: I1125 09:15:03.219767 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a7de80a-3c9f-4122-b4fc-8ef88d921fa8-kube-api-access-44z59" (OuterVolumeSpecName: "kube-api-access-44z59") pod "2a7de80a-3c9f-4122-b4fc-8ef88d921fa8" (UID: "2a7de80a-3c9f-4122-b4fc-8ef88d921fa8"). InnerVolumeSpecName "kube-api-access-44z59". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:15:03 crc kubenswrapper[5043]: I1125 09:15:03.220191 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a7de80a-3c9f-4122-b4fc-8ef88d921fa8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2a7de80a-3c9f-4122-b4fc-8ef88d921fa8" (UID: "2a7de80a-3c9f-4122-b4fc-8ef88d921fa8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:15:03 crc kubenswrapper[5043]: I1125 09:15:03.309673 5043 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a7de80a-3c9f-4122-b4fc-8ef88d921fa8-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 09:15:03 crc kubenswrapper[5043]: I1125 09:15:03.309704 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44z59\" (UniqueName: \"kubernetes.io/projected/2a7de80a-3c9f-4122-b4fc-8ef88d921fa8-kube-api-access-44z59\") on node \"crc\" DevicePath \"\"" Nov 25 09:15:03 crc kubenswrapper[5043]: I1125 09:15:03.666295 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-t97h9" event={"ID":"2a7de80a-3c9f-4122-b4fc-8ef88d921fa8","Type":"ContainerDied","Data":"7688a1b792582666b4dfd4f6ac569b527e13b68f68c928f6e21ed0abd6fd656f"} Nov 25 09:15:03 crc kubenswrapper[5043]: I1125 09:15:03.666652 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7688a1b792582666b4dfd4f6ac569b527e13b68f68c928f6e21ed0abd6fd656f" Nov 25 09:15:03 crc kubenswrapper[5043]: I1125 09:15:03.666361 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401035-t97h9" Nov 25 09:15:04 crc kubenswrapper[5043]: I1125 09:15:04.160503 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400990-r8rgv"] Nov 25 09:15:04 crc kubenswrapper[5043]: I1125 09:15:04.171009 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400990-r8rgv"] Nov 25 09:15:04 crc kubenswrapper[5043]: I1125 09:15:04.963866 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:15:04 crc kubenswrapper[5043]: E1125 09:15:04.964384 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:15:04 crc kubenswrapper[5043]: I1125 09:15:04.978226 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f959cf7-6cf4-490b-aa23-68a958edd787" path="/var/lib/kubelet/pods/8f959cf7-6cf4-490b-aa23-68a958edd787/volumes" Nov 25 09:15:16 crc kubenswrapper[5043]: I1125 09:15:16.974938 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:15:16 crc kubenswrapper[5043]: E1125 09:15:16.975698 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:15:27 crc kubenswrapper[5043]: I1125 09:15:27.962597 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:15:27 crc kubenswrapper[5043]: E1125 09:15:27.963508 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:15:41 crc kubenswrapper[5043]: I1125 09:15:41.962969 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:15:41 crc kubenswrapper[5043]: E1125 09:15:41.963812 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:15:52 crc kubenswrapper[5043]: I1125 09:15:52.962804 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:15:52 crc kubenswrapper[5043]: E1125 09:15:52.963461 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:15:55 crc kubenswrapper[5043]: I1125 09:15:55.164249 5043 scope.go:117] "RemoveContainer" containerID="e310f168050c717dcec285e9c0ce1d9c43a570faf19f739c4813efb7dc1e255c" Nov 25 09:16:04 crc kubenswrapper[5043]: I1125 09:16:04.965702 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:16:04 crc kubenswrapper[5043]: E1125 09:16:04.967020 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:16:18 crc kubenswrapper[5043]: I1125 09:16:18.963176 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:16:18 crc kubenswrapper[5043]: E1125 09:16:18.963881 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:16:27 crc kubenswrapper[5043]: I1125 09:16:27.869818 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sw6s2"] Nov 25 09:16:27 crc kubenswrapper[5043]: E1125 09:16:27.870716 5043 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2a7de80a-3c9f-4122-b4fc-8ef88d921fa8" containerName="collect-profiles" Nov 25 09:16:27 crc kubenswrapper[5043]: I1125 09:16:27.870729 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a7de80a-3c9f-4122-b4fc-8ef88d921fa8" containerName="collect-profiles" Nov 25 09:16:27 crc kubenswrapper[5043]: I1125 09:16:27.870959 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a7de80a-3c9f-4122-b4fc-8ef88d921fa8" containerName="collect-profiles" Nov 25 09:16:27 crc kubenswrapper[5043]: I1125 09:16:27.872312 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sw6s2" Nov 25 09:16:27 crc kubenswrapper[5043]: I1125 09:16:27.885526 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sw6s2"] Nov 25 09:16:27 crc kubenswrapper[5043]: I1125 09:16:27.913447 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b63c77-e084-4229-90a9-eb56ef109ae1-utilities\") pod \"redhat-operators-sw6s2\" (UID: \"e6b63c77-e084-4229-90a9-eb56ef109ae1\") " pod="openshift-marketplace/redhat-operators-sw6s2" Nov 25 09:16:27 crc kubenswrapper[5043]: I1125 09:16:27.913640 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b63c77-e084-4229-90a9-eb56ef109ae1-catalog-content\") pod \"redhat-operators-sw6s2\" (UID: \"e6b63c77-e084-4229-90a9-eb56ef109ae1\") " pod="openshift-marketplace/redhat-operators-sw6s2" Nov 25 09:16:27 crc kubenswrapper[5043]: I1125 09:16:27.913796 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l97ds\" (UniqueName: \"kubernetes.io/projected/e6b63c77-e084-4229-90a9-eb56ef109ae1-kube-api-access-l97ds\") pod 
\"redhat-operators-sw6s2\" (UID: \"e6b63c77-e084-4229-90a9-eb56ef109ae1\") " pod="openshift-marketplace/redhat-operators-sw6s2" Nov 25 09:16:28 crc kubenswrapper[5043]: I1125 09:16:28.015458 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b63c77-e084-4229-90a9-eb56ef109ae1-utilities\") pod \"redhat-operators-sw6s2\" (UID: \"e6b63c77-e084-4229-90a9-eb56ef109ae1\") " pod="openshift-marketplace/redhat-operators-sw6s2" Nov 25 09:16:28 crc kubenswrapper[5043]: I1125 09:16:28.015632 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b63c77-e084-4229-90a9-eb56ef109ae1-catalog-content\") pod \"redhat-operators-sw6s2\" (UID: \"e6b63c77-e084-4229-90a9-eb56ef109ae1\") " pod="openshift-marketplace/redhat-operators-sw6s2" Nov 25 09:16:28 crc kubenswrapper[5043]: I1125 09:16:28.015752 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l97ds\" (UniqueName: \"kubernetes.io/projected/e6b63c77-e084-4229-90a9-eb56ef109ae1-kube-api-access-l97ds\") pod \"redhat-operators-sw6s2\" (UID: \"e6b63c77-e084-4229-90a9-eb56ef109ae1\") " pod="openshift-marketplace/redhat-operators-sw6s2" Nov 25 09:16:28 crc kubenswrapper[5043]: I1125 09:16:28.016386 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b63c77-e084-4229-90a9-eb56ef109ae1-utilities\") pod \"redhat-operators-sw6s2\" (UID: \"e6b63c77-e084-4229-90a9-eb56ef109ae1\") " pod="openshift-marketplace/redhat-operators-sw6s2" Nov 25 09:16:28 crc kubenswrapper[5043]: I1125 09:16:28.016422 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b63c77-e084-4229-90a9-eb56ef109ae1-catalog-content\") pod \"redhat-operators-sw6s2\" (UID: 
\"e6b63c77-e084-4229-90a9-eb56ef109ae1\") " pod="openshift-marketplace/redhat-operators-sw6s2" Nov 25 09:16:28 crc kubenswrapper[5043]: I1125 09:16:28.049419 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l97ds\" (UniqueName: \"kubernetes.io/projected/e6b63c77-e084-4229-90a9-eb56ef109ae1-kube-api-access-l97ds\") pod \"redhat-operators-sw6s2\" (UID: \"e6b63c77-e084-4229-90a9-eb56ef109ae1\") " pod="openshift-marketplace/redhat-operators-sw6s2" Nov 25 09:16:28 crc kubenswrapper[5043]: I1125 09:16:28.240088 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sw6s2" Nov 25 09:16:28 crc kubenswrapper[5043]: I1125 09:16:28.739835 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sw6s2"] Nov 25 09:16:29 crc kubenswrapper[5043]: I1125 09:16:29.474858 5043 generic.go:334] "Generic (PLEG): container finished" podID="e6b63c77-e084-4229-90a9-eb56ef109ae1" containerID="1e338141dee82c7937db49aec5779ca2636326aedc2a151890e148ff7a09ff8b" exitCode=0 Nov 25 09:16:29 crc kubenswrapper[5043]: I1125 09:16:29.474933 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sw6s2" event={"ID":"e6b63c77-e084-4229-90a9-eb56ef109ae1","Type":"ContainerDied","Data":"1e338141dee82c7937db49aec5779ca2636326aedc2a151890e148ff7a09ff8b"} Nov 25 09:16:29 crc kubenswrapper[5043]: I1125 09:16:29.475192 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sw6s2" event={"ID":"e6b63c77-e084-4229-90a9-eb56ef109ae1","Type":"ContainerStarted","Data":"3f3b65e7501b88dad1a3107ffdd12f9363398072c124c44239452ee1e472b1e6"} Nov 25 09:16:29 crc kubenswrapper[5043]: I1125 09:16:29.963385 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:16:29 crc kubenswrapper[5043]: E1125 09:16:29.963804 5043 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:16:30 crc kubenswrapper[5043]: I1125 09:16:30.484557 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sw6s2" event={"ID":"e6b63c77-e084-4229-90a9-eb56ef109ae1","Type":"ContainerStarted","Data":"f358205d717abdc5b0023b25c4082a2e92e92eb4d72f07d81767f3d804d3f555"} Nov 25 09:16:36 crc kubenswrapper[5043]: I1125 09:16:36.549339 5043 generic.go:334] "Generic (PLEG): container finished" podID="e6b63c77-e084-4229-90a9-eb56ef109ae1" containerID="f358205d717abdc5b0023b25c4082a2e92e92eb4d72f07d81767f3d804d3f555" exitCode=0 Nov 25 09:16:36 crc kubenswrapper[5043]: I1125 09:16:36.549391 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sw6s2" event={"ID":"e6b63c77-e084-4229-90a9-eb56ef109ae1","Type":"ContainerDied","Data":"f358205d717abdc5b0023b25c4082a2e92e92eb4d72f07d81767f3d804d3f555"} Nov 25 09:16:37 crc kubenswrapper[5043]: I1125 09:16:37.559860 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sw6s2" event={"ID":"e6b63c77-e084-4229-90a9-eb56ef109ae1","Type":"ContainerStarted","Data":"60368d47954116559fd194357f764a16e82e473ebdbed45eaa5e950c2b311f34"} Nov 25 09:16:37 crc kubenswrapper[5043]: I1125 09:16:37.585592 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sw6s2" podStartSLOduration=3.081808187 podStartE2EDuration="10.585572968s" podCreationTimestamp="2025-11-25 09:16:27 +0000 UTC" firstStartedPulling="2025-11-25 09:16:29.477504039 
+0000 UTC m=+7253.645699760" lastFinishedPulling="2025-11-25 09:16:36.98126879 +0000 UTC m=+7261.149464541" observedRunningTime="2025-11-25 09:16:37.581623311 +0000 UTC m=+7261.749819062" watchObservedRunningTime="2025-11-25 09:16:37.585572968 +0000 UTC m=+7261.753768699" Nov 25 09:16:38 crc kubenswrapper[5043]: I1125 09:16:38.240957 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sw6s2" Nov 25 09:16:38 crc kubenswrapper[5043]: I1125 09:16:38.241152 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sw6s2" Nov 25 09:16:39 crc kubenswrapper[5043]: I1125 09:16:39.294968 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sw6s2" podUID="e6b63c77-e084-4229-90a9-eb56ef109ae1" containerName="registry-server" probeResult="failure" output=< Nov 25 09:16:39 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s Nov 25 09:16:39 crc kubenswrapper[5043]: > Nov 25 09:16:40 crc kubenswrapper[5043]: I1125 09:16:40.963262 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:16:40 crc kubenswrapper[5043]: E1125 09:16:40.963860 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:16:48 crc kubenswrapper[5043]: I1125 09:16:48.301240 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sw6s2" Nov 25 09:16:48 crc kubenswrapper[5043]: I1125 09:16:48.356758 5043 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sw6s2" Nov 25 09:16:48 crc kubenswrapper[5043]: I1125 09:16:48.538272 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sw6s2"] Nov 25 09:16:49 crc kubenswrapper[5043]: I1125 09:16:49.668889 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sw6s2" podUID="e6b63c77-e084-4229-90a9-eb56ef109ae1" containerName="registry-server" containerID="cri-o://60368d47954116559fd194357f764a16e82e473ebdbed45eaa5e950c2b311f34" gracePeriod=2 Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.207482 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sw6s2" Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.263586 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l97ds\" (UniqueName: \"kubernetes.io/projected/e6b63c77-e084-4229-90a9-eb56ef109ae1-kube-api-access-l97ds\") pod \"e6b63c77-e084-4229-90a9-eb56ef109ae1\" (UID: \"e6b63c77-e084-4229-90a9-eb56ef109ae1\") " Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.263835 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b63c77-e084-4229-90a9-eb56ef109ae1-catalog-content\") pod \"e6b63c77-e084-4229-90a9-eb56ef109ae1\" (UID: \"e6b63c77-e084-4229-90a9-eb56ef109ae1\") " Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.263911 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b63c77-e084-4229-90a9-eb56ef109ae1-utilities\") pod \"e6b63c77-e084-4229-90a9-eb56ef109ae1\" (UID: \"e6b63c77-e084-4229-90a9-eb56ef109ae1\") " Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.264848 5043 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6b63c77-e084-4229-90a9-eb56ef109ae1-utilities" (OuterVolumeSpecName: "utilities") pod "e6b63c77-e084-4229-90a9-eb56ef109ae1" (UID: "e6b63c77-e084-4229-90a9-eb56ef109ae1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.281712 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6b63c77-e084-4229-90a9-eb56ef109ae1-kube-api-access-l97ds" (OuterVolumeSpecName: "kube-api-access-l97ds") pod "e6b63c77-e084-4229-90a9-eb56ef109ae1" (UID: "e6b63c77-e084-4229-90a9-eb56ef109ae1"). InnerVolumeSpecName "kube-api-access-l97ds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.366308 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6b63c77-e084-4229-90a9-eb56ef109ae1-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.366951 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l97ds\" (UniqueName: \"kubernetes.io/projected/e6b63c77-e084-4229-90a9-eb56ef109ae1-kube-api-access-l97ds\") on node \"crc\" DevicePath \"\"" Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.366897 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6b63c77-e084-4229-90a9-eb56ef109ae1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6b63c77-e084-4229-90a9-eb56ef109ae1" (UID: "e6b63c77-e084-4229-90a9-eb56ef109ae1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.469165 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6b63c77-e084-4229-90a9-eb56ef109ae1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.678778 5043 generic.go:334] "Generic (PLEG): container finished" podID="e6b63c77-e084-4229-90a9-eb56ef109ae1" containerID="60368d47954116559fd194357f764a16e82e473ebdbed45eaa5e950c2b311f34" exitCode=0 Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.678819 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sw6s2" event={"ID":"e6b63c77-e084-4229-90a9-eb56ef109ae1","Type":"ContainerDied","Data":"60368d47954116559fd194357f764a16e82e473ebdbed45eaa5e950c2b311f34"} Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.678827 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sw6s2" Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.678845 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sw6s2" event={"ID":"e6b63c77-e084-4229-90a9-eb56ef109ae1","Type":"ContainerDied","Data":"3f3b65e7501b88dad1a3107ffdd12f9363398072c124c44239452ee1e472b1e6"} Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.678862 5043 scope.go:117] "RemoveContainer" containerID="60368d47954116559fd194357f764a16e82e473ebdbed45eaa5e950c2b311f34" Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.709843 5043 scope.go:117] "RemoveContainer" containerID="f358205d717abdc5b0023b25c4082a2e92e92eb4d72f07d81767f3d804d3f555" Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.717758 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sw6s2"] Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.732817 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sw6s2"] Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.742882 5043 scope.go:117] "RemoveContainer" containerID="1e338141dee82c7937db49aec5779ca2636326aedc2a151890e148ff7a09ff8b" Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.780067 5043 scope.go:117] "RemoveContainer" containerID="60368d47954116559fd194357f764a16e82e473ebdbed45eaa5e950c2b311f34" Nov 25 09:16:50 crc kubenswrapper[5043]: E1125 09:16:50.782980 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60368d47954116559fd194357f764a16e82e473ebdbed45eaa5e950c2b311f34\": container with ID starting with 60368d47954116559fd194357f764a16e82e473ebdbed45eaa5e950c2b311f34 not found: ID does not exist" containerID="60368d47954116559fd194357f764a16e82e473ebdbed45eaa5e950c2b311f34" Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.783051 5043 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60368d47954116559fd194357f764a16e82e473ebdbed45eaa5e950c2b311f34"} err="failed to get container status \"60368d47954116559fd194357f764a16e82e473ebdbed45eaa5e950c2b311f34\": rpc error: code = NotFound desc = could not find container \"60368d47954116559fd194357f764a16e82e473ebdbed45eaa5e950c2b311f34\": container with ID starting with 60368d47954116559fd194357f764a16e82e473ebdbed45eaa5e950c2b311f34 not found: ID does not exist" Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.783122 5043 scope.go:117] "RemoveContainer" containerID="f358205d717abdc5b0023b25c4082a2e92e92eb4d72f07d81767f3d804d3f555" Nov 25 09:16:50 crc kubenswrapper[5043]: E1125 09:16:50.784460 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f358205d717abdc5b0023b25c4082a2e92e92eb4d72f07d81767f3d804d3f555\": container with ID starting with f358205d717abdc5b0023b25c4082a2e92e92eb4d72f07d81767f3d804d3f555 not found: ID does not exist" containerID="f358205d717abdc5b0023b25c4082a2e92e92eb4d72f07d81767f3d804d3f555" Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.784494 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f358205d717abdc5b0023b25c4082a2e92e92eb4d72f07d81767f3d804d3f555"} err="failed to get container status \"f358205d717abdc5b0023b25c4082a2e92e92eb4d72f07d81767f3d804d3f555\": rpc error: code = NotFound desc = could not find container \"f358205d717abdc5b0023b25c4082a2e92e92eb4d72f07d81767f3d804d3f555\": container with ID starting with f358205d717abdc5b0023b25c4082a2e92e92eb4d72f07d81767f3d804d3f555 not found: ID does not exist" Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.784533 5043 scope.go:117] "RemoveContainer" containerID="1e338141dee82c7937db49aec5779ca2636326aedc2a151890e148ff7a09ff8b" Nov 25 09:16:50 crc kubenswrapper[5043]: E1125 
09:16:50.785680 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e338141dee82c7937db49aec5779ca2636326aedc2a151890e148ff7a09ff8b\": container with ID starting with 1e338141dee82c7937db49aec5779ca2636326aedc2a151890e148ff7a09ff8b not found: ID does not exist" containerID="1e338141dee82c7937db49aec5779ca2636326aedc2a151890e148ff7a09ff8b" Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.785708 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e338141dee82c7937db49aec5779ca2636326aedc2a151890e148ff7a09ff8b"} err="failed to get container status \"1e338141dee82c7937db49aec5779ca2636326aedc2a151890e148ff7a09ff8b\": rpc error: code = NotFound desc = could not find container \"1e338141dee82c7937db49aec5779ca2636326aedc2a151890e148ff7a09ff8b\": container with ID starting with 1e338141dee82c7937db49aec5779ca2636326aedc2a151890e148ff7a09ff8b not found: ID does not exist" Nov 25 09:16:50 crc kubenswrapper[5043]: I1125 09:16:50.973203 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6b63c77-e084-4229-90a9-eb56ef109ae1" path="/var/lib/kubelet/pods/e6b63c77-e084-4229-90a9-eb56ef109ae1/volumes" Nov 25 09:16:54 crc kubenswrapper[5043]: I1125 09:16:54.962924 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:16:54 crc kubenswrapper[5043]: E1125 09:16:54.963817 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:17:06 crc kubenswrapper[5043]: I1125 09:17:06.968116 
5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:17:06 crc kubenswrapper[5043]: E1125 09:17:06.968788 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:17:19 crc kubenswrapper[5043]: I1125 09:17:19.964926 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:17:19 crc kubenswrapper[5043]: E1125 09:17:19.965963 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:17:30 crc kubenswrapper[5043]: I1125 09:17:30.962793 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:17:30 crc kubenswrapper[5043]: E1125 09:17:30.963809 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:17:41 crc kubenswrapper[5043]: I1125 
09:17:41.963049 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:17:41 crc kubenswrapper[5043]: E1125 09:17:41.964232 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:17:56 crc kubenswrapper[5043]: I1125 09:17:56.968808 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:17:56 crc kubenswrapper[5043]: E1125 09:17:56.969656 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:18:07 crc kubenswrapper[5043]: I1125 09:18:07.963135 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:18:07 crc kubenswrapper[5043]: E1125 09:18:07.963846 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:18:18 crc 
kubenswrapper[5043]: I1125 09:18:18.963072 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:18:18 crc kubenswrapper[5043]: E1125 09:18:18.963919 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:18:30 crc kubenswrapper[5043]: I1125 09:18:30.962489 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:18:30 crc kubenswrapper[5043]: E1125 09:18:30.963175 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:18:41 crc kubenswrapper[5043]: I1125 09:18:41.962679 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:18:41 crc kubenswrapper[5043]: E1125 09:18:41.963363 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 
25 09:18:55 crc kubenswrapper[5043]: I1125 09:18:55.963320 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:18:56 crc kubenswrapper[5043]: I1125 09:18:56.834693 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"b0afa848975da0f97b5a2ea64e94d54fb56333cb06f89ce4e23e4ab51a6ca191"} Nov 25 09:19:29 crc kubenswrapper[5043]: I1125 09:19:29.548315 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cpx2w"] Nov 25 09:19:29 crc kubenswrapper[5043]: E1125 09:19:29.549330 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b63c77-e084-4229-90a9-eb56ef109ae1" containerName="extract-utilities" Nov 25 09:19:29 crc kubenswrapper[5043]: I1125 09:19:29.549348 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b63c77-e084-4229-90a9-eb56ef109ae1" containerName="extract-utilities" Nov 25 09:19:29 crc kubenswrapper[5043]: E1125 09:19:29.549386 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b63c77-e084-4229-90a9-eb56ef109ae1" containerName="extract-content" Nov 25 09:19:29 crc kubenswrapper[5043]: I1125 09:19:29.549394 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b63c77-e084-4229-90a9-eb56ef109ae1" containerName="extract-content" Nov 25 09:19:29 crc kubenswrapper[5043]: E1125 09:19:29.549416 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b63c77-e084-4229-90a9-eb56ef109ae1" containerName="registry-server" Nov 25 09:19:29 crc kubenswrapper[5043]: I1125 09:19:29.549425 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b63c77-e084-4229-90a9-eb56ef109ae1" containerName="registry-server" Nov 25 09:19:29 crc kubenswrapper[5043]: I1125 09:19:29.549680 5043 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="e6b63c77-e084-4229-90a9-eb56ef109ae1" containerName="registry-server" Nov 25 09:19:29 crc kubenswrapper[5043]: I1125 09:19:29.551755 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cpx2w" Nov 25 09:19:29 crc kubenswrapper[5043]: I1125 09:19:29.566270 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cpx2w"] Nov 25 09:19:29 crc kubenswrapper[5043]: I1125 09:19:29.656686 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78j4g\" (UniqueName: \"kubernetes.io/projected/ad5039f5-593b-4e1b-b92c-88afeaa8a8c1-kube-api-access-78j4g\") pod \"community-operators-cpx2w\" (UID: \"ad5039f5-593b-4e1b-b92c-88afeaa8a8c1\") " pod="openshift-marketplace/community-operators-cpx2w" Nov 25 09:19:29 crc kubenswrapper[5043]: I1125 09:19:29.656861 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad5039f5-593b-4e1b-b92c-88afeaa8a8c1-catalog-content\") pod \"community-operators-cpx2w\" (UID: \"ad5039f5-593b-4e1b-b92c-88afeaa8a8c1\") " pod="openshift-marketplace/community-operators-cpx2w" Nov 25 09:19:29 crc kubenswrapper[5043]: I1125 09:19:29.656963 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad5039f5-593b-4e1b-b92c-88afeaa8a8c1-utilities\") pod \"community-operators-cpx2w\" (UID: \"ad5039f5-593b-4e1b-b92c-88afeaa8a8c1\") " pod="openshift-marketplace/community-operators-cpx2w" Nov 25 09:19:29 crc kubenswrapper[5043]: I1125 09:19:29.759085 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78j4g\" (UniqueName: \"kubernetes.io/projected/ad5039f5-593b-4e1b-b92c-88afeaa8a8c1-kube-api-access-78j4g\") pod 
\"community-operators-cpx2w\" (UID: \"ad5039f5-593b-4e1b-b92c-88afeaa8a8c1\") " pod="openshift-marketplace/community-operators-cpx2w" Nov 25 09:19:29 crc kubenswrapper[5043]: I1125 09:19:29.759328 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad5039f5-593b-4e1b-b92c-88afeaa8a8c1-catalog-content\") pod \"community-operators-cpx2w\" (UID: \"ad5039f5-593b-4e1b-b92c-88afeaa8a8c1\") " pod="openshift-marketplace/community-operators-cpx2w" Nov 25 09:19:29 crc kubenswrapper[5043]: I1125 09:19:29.759401 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad5039f5-593b-4e1b-b92c-88afeaa8a8c1-utilities\") pod \"community-operators-cpx2w\" (UID: \"ad5039f5-593b-4e1b-b92c-88afeaa8a8c1\") " pod="openshift-marketplace/community-operators-cpx2w" Nov 25 09:19:29 crc kubenswrapper[5043]: I1125 09:19:29.759950 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad5039f5-593b-4e1b-b92c-88afeaa8a8c1-catalog-content\") pod \"community-operators-cpx2w\" (UID: \"ad5039f5-593b-4e1b-b92c-88afeaa8a8c1\") " pod="openshift-marketplace/community-operators-cpx2w" Nov 25 09:19:29 crc kubenswrapper[5043]: I1125 09:19:29.760041 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad5039f5-593b-4e1b-b92c-88afeaa8a8c1-utilities\") pod \"community-operators-cpx2w\" (UID: \"ad5039f5-593b-4e1b-b92c-88afeaa8a8c1\") " pod="openshift-marketplace/community-operators-cpx2w" Nov 25 09:19:29 crc kubenswrapper[5043]: I1125 09:19:29.784625 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78j4g\" (UniqueName: \"kubernetes.io/projected/ad5039f5-593b-4e1b-b92c-88afeaa8a8c1-kube-api-access-78j4g\") pod \"community-operators-cpx2w\" (UID: 
\"ad5039f5-593b-4e1b-b92c-88afeaa8a8c1\") " pod="openshift-marketplace/community-operators-cpx2w" Nov 25 09:19:29 crc kubenswrapper[5043]: I1125 09:19:29.881867 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cpx2w" Nov 25 09:19:30 crc kubenswrapper[5043]: I1125 09:19:30.438568 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cpx2w"] Nov 25 09:19:31 crc kubenswrapper[5043]: I1125 09:19:31.184123 5043 generic.go:334] "Generic (PLEG): container finished" podID="ad5039f5-593b-4e1b-b92c-88afeaa8a8c1" containerID="38b7fa26cc5c3b3d430582525ab288e2d1ecf160f45bf219134e433001e6be52" exitCode=0 Nov 25 09:19:31 crc kubenswrapper[5043]: I1125 09:19:31.184294 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpx2w" event={"ID":"ad5039f5-593b-4e1b-b92c-88afeaa8a8c1","Type":"ContainerDied","Data":"38b7fa26cc5c3b3d430582525ab288e2d1ecf160f45bf219134e433001e6be52"} Nov 25 09:19:31 crc kubenswrapper[5043]: I1125 09:19:31.185397 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpx2w" event={"ID":"ad5039f5-593b-4e1b-b92c-88afeaa8a8c1","Type":"ContainerStarted","Data":"4e32afe16650c029560025953652d7a19415a56628a796c11caaaa0980b9bd49"} Nov 25 09:19:31 crc kubenswrapper[5043]: I1125 09:19:31.186779 5043 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 09:19:32 crc kubenswrapper[5043]: I1125 09:19:32.194737 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpx2w" event={"ID":"ad5039f5-593b-4e1b-b92c-88afeaa8a8c1","Type":"ContainerStarted","Data":"ba59eaf11d421c3012decffa292307983294d49e945aa86919ba464e9af6901c"} Nov 25 09:19:33 crc kubenswrapper[5043]: I1125 09:19:33.204876 5043 generic.go:334] "Generic (PLEG): container finished" 
podID="ad5039f5-593b-4e1b-b92c-88afeaa8a8c1" containerID="ba59eaf11d421c3012decffa292307983294d49e945aa86919ba464e9af6901c" exitCode=0 Nov 25 09:19:33 crc kubenswrapper[5043]: I1125 09:19:33.205017 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpx2w" event={"ID":"ad5039f5-593b-4e1b-b92c-88afeaa8a8c1","Type":"ContainerDied","Data":"ba59eaf11d421c3012decffa292307983294d49e945aa86919ba464e9af6901c"} Nov 25 09:19:34 crc kubenswrapper[5043]: I1125 09:19:34.222106 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpx2w" event={"ID":"ad5039f5-593b-4e1b-b92c-88afeaa8a8c1","Type":"ContainerStarted","Data":"5ed983d5d95023f908d78fabbbd1ce39427879a6f955c2404136fa3f594ca435"} Nov 25 09:19:34 crc kubenswrapper[5043]: I1125 09:19:34.252790 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cpx2w" podStartSLOduration=2.78808523 podStartE2EDuration="5.252753579s" podCreationTimestamp="2025-11-25 09:19:29 +0000 UTC" firstStartedPulling="2025-11-25 09:19:31.186478506 +0000 UTC m=+7435.354674237" lastFinishedPulling="2025-11-25 09:19:33.651146855 +0000 UTC m=+7437.819342586" observedRunningTime="2025-11-25 09:19:34.243495189 +0000 UTC m=+7438.411690900" watchObservedRunningTime="2025-11-25 09:19:34.252753579 +0000 UTC m=+7438.420949300" Nov 25 09:19:37 crc kubenswrapper[5043]: I1125 09:19:37.203114 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ctsg6"] Nov 25 09:19:37 crc kubenswrapper[5043]: I1125 09:19:37.211787 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ctsg6" Nov 25 09:19:37 crc kubenswrapper[5043]: I1125 09:19:37.232419 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctsg6"] Nov 25 09:19:37 crc kubenswrapper[5043]: I1125 09:19:37.314483 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e28a808-9ff9-4071-bc1a-29bfff434465-utilities\") pod \"redhat-marketplace-ctsg6\" (UID: \"4e28a808-9ff9-4071-bc1a-29bfff434465\") " pod="openshift-marketplace/redhat-marketplace-ctsg6" Nov 25 09:19:37 crc kubenswrapper[5043]: I1125 09:19:37.314908 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e28a808-9ff9-4071-bc1a-29bfff434465-catalog-content\") pod \"redhat-marketplace-ctsg6\" (UID: \"4e28a808-9ff9-4071-bc1a-29bfff434465\") " pod="openshift-marketplace/redhat-marketplace-ctsg6" Nov 25 09:19:37 crc kubenswrapper[5043]: I1125 09:19:37.314953 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzgfl\" (UniqueName: \"kubernetes.io/projected/4e28a808-9ff9-4071-bc1a-29bfff434465-kube-api-access-mzgfl\") pod \"redhat-marketplace-ctsg6\" (UID: \"4e28a808-9ff9-4071-bc1a-29bfff434465\") " pod="openshift-marketplace/redhat-marketplace-ctsg6" Nov 25 09:19:37 crc kubenswrapper[5043]: I1125 09:19:37.417460 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e28a808-9ff9-4071-bc1a-29bfff434465-utilities\") pod \"redhat-marketplace-ctsg6\" (UID: \"4e28a808-9ff9-4071-bc1a-29bfff434465\") " pod="openshift-marketplace/redhat-marketplace-ctsg6" Nov 25 09:19:37 crc kubenswrapper[5043]: I1125 09:19:37.417551 5043 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e28a808-9ff9-4071-bc1a-29bfff434465-catalog-content\") pod \"redhat-marketplace-ctsg6\" (UID: \"4e28a808-9ff9-4071-bc1a-29bfff434465\") " pod="openshift-marketplace/redhat-marketplace-ctsg6" Nov 25 09:19:37 crc kubenswrapper[5043]: I1125 09:19:37.417591 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzgfl\" (UniqueName: \"kubernetes.io/projected/4e28a808-9ff9-4071-bc1a-29bfff434465-kube-api-access-mzgfl\") pod \"redhat-marketplace-ctsg6\" (UID: \"4e28a808-9ff9-4071-bc1a-29bfff434465\") " pod="openshift-marketplace/redhat-marketplace-ctsg6" Nov 25 09:19:37 crc kubenswrapper[5043]: I1125 09:19:37.418231 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e28a808-9ff9-4071-bc1a-29bfff434465-catalog-content\") pod \"redhat-marketplace-ctsg6\" (UID: \"4e28a808-9ff9-4071-bc1a-29bfff434465\") " pod="openshift-marketplace/redhat-marketplace-ctsg6" Nov 25 09:19:37 crc kubenswrapper[5043]: I1125 09:19:37.418294 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e28a808-9ff9-4071-bc1a-29bfff434465-utilities\") pod \"redhat-marketplace-ctsg6\" (UID: \"4e28a808-9ff9-4071-bc1a-29bfff434465\") " pod="openshift-marketplace/redhat-marketplace-ctsg6" Nov 25 09:19:37 crc kubenswrapper[5043]: I1125 09:19:37.447045 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzgfl\" (UniqueName: \"kubernetes.io/projected/4e28a808-9ff9-4071-bc1a-29bfff434465-kube-api-access-mzgfl\") pod \"redhat-marketplace-ctsg6\" (UID: \"4e28a808-9ff9-4071-bc1a-29bfff434465\") " pod="openshift-marketplace/redhat-marketplace-ctsg6" Nov 25 09:19:37 crc kubenswrapper[5043]: I1125 09:19:37.543156 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ctsg6" Nov 25 09:19:38 crc kubenswrapper[5043]: I1125 09:19:38.068833 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctsg6"] Nov 25 09:19:38 crc kubenswrapper[5043]: I1125 09:19:38.257915 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctsg6" event={"ID":"4e28a808-9ff9-4071-bc1a-29bfff434465","Type":"ContainerStarted","Data":"e7ff4c00c63c8cc71f15069c1de18cd95f8ac404a58796d11a0b63f59dc4f191"} Nov 25 09:19:39 crc kubenswrapper[5043]: I1125 09:19:39.270171 5043 generic.go:334] "Generic (PLEG): container finished" podID="4e28a808-9ff9-4071-bc1a-29bfff434465" containerID="c515b63d83ae71b6c33b4581ecce69d8d76c9da39429fc3029f64311b20af216" exitCode=0 Nov 25 09:19:39 crc kubenswrapper[5043]: I1125 09:19:39.270317 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctsg6" event={"ID":"4e28a808-9ff9-4071-bc1a-29bfff434465","Type":"ContainerDied","Data":"c515b63d83ae71b6c33b4581ecce69d8d76c9da39429fc3029f64311b20af216"} Nov 25 09:19:39 crc kubenswrapper[5043]: I1125 09:19:39.883396 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cpx2w" Nov 25 09:19:39 crc kubenswrapper[5043]: I1125 09:19:39.883886 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cpx2w" Nov 25 09:19:39 crc kubenswrapper[5043]: I1125 09:19:39.961272 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cpx2w" Nov 25 09:19:40 crc kubenswrapper[5043]: I1125 09:19:40.283339 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctsg6" 
event={"ID":"4e28a808-9ff9-4071-bc1a-29bfff434465","Type":"ContainerStarted","Data":"1425082bb450d18b6e03aecf2b9a050d770aa2e33b264698ad67917b4629d47f"} Nov 25 09:19:40 crc kubenswrapper[5043]: I1125 09:19:40.338394 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cpx2w" Nov 25 09:19:41 crc kubenswrapper[5043]: I1125 09:19:41.296067 5043 generic.go:334] "Generic (PLEG): container finished" podID="4e28a808-9ff9-4071-bc1a-29bfff434465" containerID="1425082bb450d18b6e03aecf2b9a050d770aa2e33b264698ad67917b4629d47f" exitCode=0 Nov 25 09:19:41 crc kubenswrapper[5043]: I1125 09:19:41.296227 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctsg6" event={"ID":"4e28a808-9ff9-4071-bc1a-29bfff434465","Type":"ContainerDied","Data":"1425082bb450d18b6e03aecf2b9a050d770aa2e33b264698ad67917b4629d47f"} Nov 25 09:19:42 crc kubenswrapper[5043]: I1125 09:19:42.307630 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctsg6" event={"ID":"4e28a808-9ff9-4071-bc1a-29bfff434465","Type":"ContainerStarted","Data":"2541fe7063557920a9e27035c0f5d7cda669ba71161f080f2af5aac2c94e035d"} Nov 25 09:19:42 crc kubenswrapper[5043]: I1125 09:19:42.325193 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cpx2w"] Nov 25 09:19:42 crc kubenswrapper[5043]: I1125 09:19:42.325468 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cpx2w" podUID="ad5039f5-593b-4e1b-b92c-88afeaa8a8c1" containerName="registry-server" containerID="cri-o://5ed983d5d95023f908d78fabbbd1ce39427879a6f955c2404136fa3f594ca435" gracePeriod=2 Nov 25 09:19:42 crc kubenswrapper[5043]: I1125 09:19:42.330929 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ctsg6" 
podStartSLOduration=2.840419462 podStartE2EDuration="5.330903599s" podCreationTimestamp="2025-11-25 09:19:37 +0000 UTC" firstStartedPulling="2025-11-25 09:19:39.272581411 +0000 UTC m=+7443.440777152" lastFinishedPulling="2025-11-25 09:19:41.763065568 +0000 UTC m=+7445.931261289" observedRunningTime="2025-11-25 09:19:42.328729631 +0000 UTC m=+7446.496925362" watchObservedRunningTime="2025-11-25 09:19:42.330903599 +0000 UTC m=+7446.499099320" Nov 25 09:19:42 crc kubenswrapper[5043]: I1125 09:19:42.991110 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cpx2w" Nov 25 09:19:43 crc kubenswrapper[5043]: I1125 09:19:43.146253 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad5039f5-593b-4e1b-b92c-88afeaa8a8c1-utilities\") pod \"ad5039f5-593b-4e1b-b92c-88afeaa8a8c1\" (UID: \"ad5039f5-593b-4e1b-b92c-88afeaa8a8c1\") " Nov 25 09:19:43 crc kubenswrapper[5043]: I1125 09:19:43.146440 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78j4g\" (UniqueName: \"kubernetes.io/projected/ad5039f5-593b-4e1b-b92c-88afeaa8a8c1-kube-api-access-78j4g\") pod \"ad5039f5-593b-4e1b-b92c-88afeaa8a8c1\" (UID: \"ad5039f5-593b-4e1b-b92c-88afeaa8a8c1\") " Nov 25 09:19:43 crc kubenswrapper[5043]: I1125 09:19:43.146478 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad5039f5-593b-4e1b-b92c-88afeaa8a8c1-catalog-content\") pod \"ad5039f5-593b-4e1b-b92c-88afeaa8a8c1\" (UID: \"ad5039f5-593b-4e1b-b92c-88afeaa8a8c1\") " Nov 25 09:19:43 crc kubenswrapper[5043]: I1125 09:19:43.147111 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad5039f5-593b-4e1b-b92c-88afeaa8a8c1-utilities" (OuterVolumeSpecName: "utilities") pod 
"ad5039f5-593b-4e1b-b92c-88afeaa8a8c1" (UID: "ad5039f5-593b-4e1b-b92c-88afeaa8a8c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:19:43 crc kubenswrapper[5043]: I1125 09:19:43.148800 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad5039f5-593b-4e1b-b92c-88afeaa8a8c1-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:19:43 crc kubenswrapper[5043]: I1125 09:19:43.153792 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad5039f5-593b-4e1b-b92c-88afeaa8a8c1-kube-api-access-78j4g" (OuterVolumeSpecName: "kube-api-access-78j4g") pod "ad5039f5-593b-4e1b-b92c-88afeaa8a8c1" (UID: "ad5039f5-593b-4e1b-b92c-88afeaa8a8c1"). InnerVolumeSpecName "kube-api-access-78j4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:19:43 crc kubenswrapper[5043]: I1125 09:19:43.250862 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78j4g\" (UniqueName: \"kubernetes.io/projected/ad5039f5-593b-4e1b-b92c-88afeaa8a8c1-kube-api-access-78j4g\") on node \"crc\" DevicePath \"\"" Nov 25 09:19:43 crc kubenswrapper[5043]: I1125 09:19:43.320134 5043 generic.go:334] "Generic (PLEG): container finished" podID="ad5039f5-593b-4e1b-b92c-88afeaa8a8c1" containerID="5ed983d5d95023f908d78fabbbd1ce39427879a6f955c2404136fa3f594ca435" exitCode=0 Nov 25 09:19:43 crc kubenswrapper[5043]: I1125 09:19:43.320184 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cpx2w" Nov 25 09:19:43 crc kubenswrapper[5043]: I1125 09:19:43.320206 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpx2w" event={"ID":"ad5039f5-593b-4e1b-b92c-88afeaa8a8c1","Type":"ContainerDied","Data":"5ed983d5d95023f908d78fabbbd1ce39427879a6f955c2404136fa3f594ca435"} Nov 25 09:19:43 crc kubenswrapper[5043]: I1125 09:19:43.320235 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpx2w" event={"ID":"ad5039f5-593b-4e1b-b92c-88afeaa8a8c1","Type":"ContainerDied","Data":"4e32afe16650c029560025953652d7a19415a56628a796c11caaaa0980b9bd49"} Nov 25 09:19:43 crc kubenswrapper[5043]: I1125 09:19:43.320252 5043 scope.go:117] "RemoveContainer" containerID="5ed983d5d95023f908d78fabbbd1ce39427879a6f955c2404136fa3f594ca435" Nov 25 09:19:43 crc kubenswrapper[5043]: I1125 09:19:43.339407 5043 scope.go:117] "RemoveContainer" containerID="ba59eaf11d421c3012decffa292307983294d49e945aa86919ba464e9af6901c" Nov 25 09:19:43 crc kubenswrapper[5043]: I1125 09:19:43.361767 5043 scope.go:117] "RemoveContainer" containerID="38b7fa26cc5c3b3d430582525ab288e2d1ecf160f45bf219134e433001e6be52" Nov 25 09:19:43 crc kubenswrapper[5043]: I1125 09:19:43.403315 5043 scope.go:117] "RemoveContainer" containerID="5ed983d5d95023f908d78fabbbd1ce39427879a6f955c2404136fa3f594ca435" Nov 25 09:19:43 crc kubenswrapper[5043]: E1125 09:19:43.403935 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ed983d5d95023f908d78fabbbd1ce39427879a6f955c2404136fa3f594ca435\": container with ID starting with 5ed983d5d95023f908d78fabbbd1ce39427879a6f955c2404136fa3f594ca435 not found: ID does not exist" containerID="5ed983d5d95023f908d78fabbbd1ce39427879a6f955c2404136fa3f594ca435" Nov 25 09:19:43 crc kubenswrapper[5043]: I1125 09:19:43.403969 5043 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ed983d5d95023f908d78fabbbd1ce39427879a6f955c2404136fa3f594ca435"} err="failed to get container status \"5ed983d5d95023f908d78fabbbd1ce39427879a6f955c2404136fa3f594ca435\": rpc error: code = NotFound desc = could not find container \"5ed983d5d95023f908d78fabbbd1ce39427879a6f955c2404136fa3f594ca435\": container with ID starting with 5ed983d5d95023f908d78fabbbd1ce39427879a6f955c2404136fa3f594ca435 not found: ID does not exist" Nov 25 09:19:43 crc kubenswrapper[5043]: I1125 09:19:43.403991 5043 scope.go:117] "RemoveContainer" containerID="ba59eaf11d421c3012decffa292307983294d49e945aa86919ba464e9af6901c" Nov 25 09:19:43 crc kubenswrapper[5043]: E1125 09:19:43.404195 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba59eaf11d421c3012decffa292307983294d49e945aa86919ba464e9af6901c\": container with ID starting with ba59eaf11d421c3012decffa292307983294d49e945aa86919ba464e9af6901c not found: ID does not exist" containerID="ba59eaf11d421c3012decffa292307983294d49e945aa86919ba464e9af6901c" Nov 25 09:19:43 crc kubenswrapper[5043]: I1125 09:19:43.404221 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba59eaf11d421c3012decffa292307983294d49e945aa86919ba464e9af6901c"} err="failed to get container status \"ba59eaf11d421c3012decffa292307983294d49e945aa86919ba464e9af6901c\": rpc error: code = NotFound desc = could not find container \"ba59eaf11d421c3012decffa292307983294d49e945aa86919ba464e9af6901c\": container with ID starting with ba59eaf11d421c3012decffa292307983294d49e945aa86919ba464e9af6901c not found: ID does not exist" Nov 25 09:19:43 crc kubenswrapper[5043]: I1125 09:19:43.404236 5043 scope.go:117] "RemoveContainer" containerID="38b7fa26cc5c3b3d430582525ab288e2d1ecf160f45bf219134e433001e6be52" Nov 25 09:19:43 crc kubenswrapper[5043]: E1125 09:19:43.404726 5043 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38b7fa26cc5c3b3d430582525ab288e2d1ecf160f45bf219134e433001e6be52\": container with ID starting with 38b7fa26cc5c3b3d430582525ab288e2d1ecf160f45bf219134e433001e6be52 not found: ID does not exist" containerID="38b7fa26cc5c3b3d430582525ab288e2d1ecf160f45bf219134e433001e6be52" Nov 25 09:19:43 crc kubenswrapper[5043]: I1125 09:19:43.404748 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38b7fa26cc5c3b3d430582525ab288e2d1ecf160f45bf219134e433001e6be52"} err="failed to get container status \"38b7fa26cc5c3b3d430582525ab288e2d1ecf160f45bf219134e433001e6be52\": rpc error: code = NotFound desc = could not find container \"38b7fa26cc5c3b3d430582525ab288e2d1ecf160f45bf219134e433001e6be52\": container with ID starting with 38b7fa26cc5c3b3d430582525ab288e2d1ecf160f45bf219134e433001e6be52 not found: ID does not exist" Nov 25 09:19:44 crc kubenswrapper[5043]: I1125 09:19:44.046020 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad5039f5-593b-4e1b-b92c-88afeaa8a8c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad5039f5-593b-4e1b-b92c-88afeaa8a8c1" (UID: "ad5039f5-593b-4e1b-b92c-88afeaa8a8c1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:19:44 crc kubenswrapper[5043]: I1125 09:19:44.068772 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad5039f5-593b-4e1b-b92c-88afeaa8a8c1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:19:44 crc kubenswrapper[5043]: I1125 09:19:44.270465 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cpx2w"] Nov 25 09:19:44 crc kubenswrapper[5043]: I1125 09:19:44.291570 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cpx2w"] Nov 25 09:19:44 crc kubenswrapper[5043]: I1125 09:19:44.976888 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad5039f5-593b-4e1b-b92c-88afeaa8a8c1" path="/var/lib/kubelet/pods/ad5039f5-593b-4e1b-b92c-88afeaa8a8c1/volumes" Nov 25 09:19:47 crc kubenswrapper[5043]: I1125 09:19:47.543891 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ctsg6" Nov 25 09:19:47 crc kubenswrapper[5043]: I1125 09:19:47.544224 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ctsg6" Nov 25 09:19:47 crc kubenswrapper[5043]: I1125 09:19:47.595648 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ctsg6" Nov 25 09:19:48 crc kubenswrapper[5043]: I1125 09:19:48.438299 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ctsg6" Nov 25 09:19:48 crc kubenswrapper[5043]: I1125 09:19:48.492765 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctsg6"] Nov 25 09:19:50 crc kubenswrapper[5043]: I1125 09:19:50.408410 5043 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-ctsg6" podUID="4e28a808-9ff9-4071-bc1a-29bfff434465" containerName="registry-server" containerID="cri-o://2541fe7063557920a9e27035c0f5d7cda669ba71161f080f2af5aac2c94e035d" gracePeriod=2 Nov 25 09:19:51 crc kubenswrapper[5043]: I1125 09:19:51.417882 5043 generic.go:334] "Generic (PLEG): container finished" podID="4e28a808-9ff9-4071-bc1a-29bfff434465" containerID="2541fe7063557920a9e27035c0f5d7cda669ba71161f080f2af5aac2c94e035d" exitCode=0 Nov 25 09:19:51 crc kubenswrapper[5043]: I1125 09:19:51.417967 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctsg6" event={"ID":"4e28a808-9ff9-4071-bc1a-29bfff434465","Type":"ContainerDied","Data":"2541fe7063557920a9e27035c0f5d7cda669ba71161f080f2af5aac2c94e035d"} Nov 25 09:19:51 crc kubenswrapper[5043]: I1125 09:19:51.419479 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ctsg6" event={"ID":"4e28a808-9ff9-4071-bc1a-29bfff434465","Type":"ContainerDied","Data":"e7ff4c00c63c8cc71f15069c1de18cd95f8ac404a58796d11a0b63f59dc4f191"} Nov 25 09:19:51 crc kubenswrapper[5043]: I1125 09:19:51.419557 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7ff4c00c63c8cc71f15069c1de18cd95f8ac404a58796d11a0b63f59dc4f191" Nov 25 09:19:51 crc kubenswrapper[5043]: I1125 09:19:51.419332 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ctsg6" Nov 25 09:19:51 crc kubenswrapper[5043]: I1125 09:19:51.535326 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e28a808-9ff9-4071-bc1a-29bfff434465-catalog-content\") pod \"4e28a808-9ff9-4071-bc1a-29bfff434465\" (UID: \"4e28a808-9ff9-4071-bc1a-29bfff434465\") " Nov 25 09:19:51 crc kubenswrapper[5043]: I1125 09:19:51.535717 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzgfl\" (UniqueName: \"kubernetes.io/projected/4e28a808-9ff9-4071-bc1a-29bfff434465-kube-api-access-mzgfl\") pod \"4e28a808-9ff9-4071-bc1a-29bfff434465\" (UID: \"4e28a808-9ff9-4071-bc1a-29bfff434465\") " Nov 25 09:19:51 crc kubenswrapper[5043]: I1125 09:19:51.536034 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e28a808-9ff9-4071-bc1a-29bfff434465-utilities\") pod \"4e28a808-9ff9-4071-bc1a-29bfff434465\" (UID: \"4e28a808-9ff9-4071-bc1a-29bfff434465\") " Nov 25 09:19:51 crc kubenswrapper[5043]: I1125 09:19:51.537739 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e28a808-9ff9-4071-bc1a-29bfff434465-utilities" (OuterVolumeSpecName: "utilities") pod "4e28a808-9ff9-4071-bc1a-29bfff434465" (UID: "4e28a808-9ff9-4071-bc1a-29bfff434465"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:19:51 crc kubenswrapper[5043]: I1125 09:19:51.549902 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e28a808-9ff9-4071-bc1a-29bfff434465-kube-api-access-mzgfl" (OuterVolumeSpecName: "kube-api-access-mzgfl") pod "4e28a808-9ff9-4071-bc1a-29bfff434465" (UID: "4e28a808-9ff9-4071-bc1a-29bfff434465"). InnerVolumeSpecName "kube-api-access-mzgfl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:19:51 crc kubenswrapper[5043]: I1125 09:19:51.561493 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e28a808-9ff9-4071-bc1a-29bfff434465-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e28a808-9ff9-4071-bc1a-29bfff434465" (UID: "4e28a808-9ff9-4071-bc1a-29bfff434465"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:19:51 crc kubenswrapper[5043]: I1125 09:19:51.639462 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e28a808-9ff9-4071-bc1a-29bfff434465-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:19:51 crc kubenswrapper[5043]: I1125 09:19:51.639505 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzgfl\" (UniqueName: \"kubernetes.io/projected/4e28a808-9ff9-4071-bc1a-29bfff434465-kube-api-access-mzgfl\") on node \"crc\" DevicePath \"\"" Nov 25 09:19:51 crc kubenswrapper[5043]: I1125 09:19:51.639521 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e28a808-9ff9-4071-bc1a-29bfff434465-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:19:52 crc kubenswrapper[5043]: I1125 09:19:52.428512 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ctsg6" Nov 25 09:19:52 crc kubenswrapper[5043]: I1125 09:19:52.472575 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctsg6"] Nov 25 09:19:52 crc kubenswrapper[5043]: I1125 09:19:52.487259 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ctsg6"] Nov 25 09:19:52 crc kubenswrapper[5043]: I1125 09:19:52.972926 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e28a808-9ff9-4071-bc1a-29bfff434465" path="/var/lib/kubelet/pods/4e28a808-9ff9-4071-bc1a-29bfff434465/volumes" Nov 25 09:21:17 crc kubenswrapper[5043]: I1125 09:21:17.278972 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:21:17 crc kubenswrapper[5043]: I1125 09:21:17.280492 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:21:39 crc kubenswrapper[5043]: I1125 09:21:39.884029 5043 generic.go:334] "Generic (PLEG): container finished" podID="6515f5fe-fd1f-4786-8374-8af7b394831b" containerID="efa09f80e8127de221ce9786abd45bde5663f0d3cebb88b5d8f391910882cc7e" exitCode=0 Nov 25 09:21:39 crc kubenswrapper[5043]: I1125 09:21:39.884189 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"6515f5fe-fd1f-4786-8374-8af7b394831b","Type":"ContainerDied","Data":"efa09f80e8127de221ce9786abd45bde5663f0d3cebb88b5d8f391910882cc7e"} Nov 25 
09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.727534 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.809148 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6515f5fe-fd1f-4786-8374-8af7b394831b-test-operator-ephemeral-temporary\") pod \"6515f5fe-fd1f-4786-8374-8af7b394831b\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.809231 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"6515f5fe-fd1f-4786-8374-8af7b394831b\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.809269 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6515f5fe-fd1f-4786-8374-8af7b394831b-test-operator-ephemeral-workdir\") pod \"6515f5fe-fd1f-4786-8374-8af7b394831b\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.809393 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6515f5fe-fd1f-4786-8374-8af7b394831b-config-data\") pod \"6515f5fe-fd1f-4786-8374-8af7b394831b\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.809415 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6515f5fe-fd1f-4786-8374-8af7b394831b-ssh-key\") pod \"6515f5fe-fd1f-4786-8374-8af7b394831b\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " Nov 25 09:21:41 crc 
kubenswrapper[5043]: I1125 09:21:41.809440 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6515f5fe-fd1f-4786-8374-8af7b394831b-ceph\") pod \"6515f5fe-fd1f-4786-8374-8af7b394831b\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.809563 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6515f5fe-fd1f-4786-8374-8af7b394831b-ca-certs\") pod \"6515f5fe-fd1f-4786-8374-8af7b394831b\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.809625 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6515f5fe-fd1f-4786-8374-8af7b394831b-openstack-config-secret\") pod \"6515f5fe-fd1f-4786-8374-8af7b394831b\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.809649 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6515f5fe-fd1f-4786-8374-8af7b394831b-openstack-config\") pod \"6515f5fe-fd1f-4786-8374-8af7b394831b\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.809672 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxwrw\" (UniqueName: \"kubernetes.io/projected/6515f5fe-fd1f-4786-8374-8af7b394831b-kube-api-access-nxwrw\") pod \"6515f5fe-fd1f-4786-8374-8af7b394831b\" (UID: \"6515f5fe-fd1f-4786-8374-8af7b394831b\") " Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.811367 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6515f5fe-fd1f-4786-8374-8af7b394831b-config-data" (OuterVolumeSpecName: "config-data") 
pod "6515f5fe-fd1f-4786-8374-8af7b394831b" (UID: "6515f5fe-fd1f-4786-8374-8af7b394831b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.811997 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6515f5fe-fd1f-4786-8374-8af7b394831b-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "6515f5fe-fd1f-4786-8374-8af7b394831b" (UID: "6515f5fe-fd1f-4786-8374-8af7b394831b"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.817995 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6515f5fe-fd1f-4786-8374-8af7b394831b-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "6515f5fe-fd1f-4786-8374-8af7b394831b" (UID: "6515f5fe-fd1f-4786-8374-8af7b394831b"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.818887 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6515f5fe-fd1f-4786-8374-8af7b394831b-ceph" (OuterVolumeSpecName: "ceph") pod "6515f5fe-fd1f-4786-8374-8af7b394831b" (UID: "6515f5fe-fd1f-4786-8374-8af7b394831b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.818889 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "6515f5fe-fd1f-4786-8374-8af7b394831b" (UID: "6515f5fe-fd1f-4786-8374-8af7b394831b"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.820131 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6515f5fe-fd1f-4786-8374-8af7b394831b-kube-api-access-nxwrw" (OuterVolumeSpecName: "kube-api-access-nxwrw") pod "6515f5fe-fd1f-4786-8374-8af7b394831b" (UID: "6515f5fe-fd1f-4786-8374-8af7b394831b"). InnerVolumeSpecName "kube-api-access-nxwrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.840807 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6515f5fe-fd1f-4786-8374-8af7b394831b-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "6515f5fe-fd1f-4786-8374-8af7b394831b" (UID: "6515f5fe-fd1f-4786-8374-8af7b394831b"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.842620 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6515f5fe-fd1f-4786-8374-8af7b394831b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6515f5fe-fd1f-4786-8374-8af7b394831b" (UID: "6515f5fe-fd1f-4786-8374-8af7b394831b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.844200 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6515f5fe-fd1f-4786-8374-8af7b394831b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6515f5fe-fd1f-4786-8374-8af7b394831b" (UID: "6515f5fe-fd1f-4786-8374-8af7b394831b"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.876366 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6515f5fe-fd1f-4786-8374-8af7b394831b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6515f5fe-fd1f-4786-8374-8af7b394831b" (UID: "6515f5fe-fd1f-4786-8374-8af7b394831b"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.912583 5043 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6515f5fe-fd1f-4786-8374-8af7b394831b-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.912641 5043 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6515f5fe-fd1f-4786-8374-8af7b394831b-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.912657 5043 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6515f5fe-fd1f-4786-8374-8af7b394831b-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.912669 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxwrw\" (UniqueName: \"kubernetes.io/projected/6515f5fe-fd1f-4786-8374-8af7b394831b-kube-api-access-nxwrw\") on node \"crc\" DevicePath \"\"" Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.912681 5043 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6515f5fe-fd1f-4786-8374-8af7b394831b-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.912719 5043 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.912736 5043 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6515f5fe-fd1f-4786-8374-8af7b394831b-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.912748 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6515f5fe-fd1f-4786-8374-8af7b394831b-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.912759 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6515f5fe-fd1f-4786-8374-8af7b394831b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.912770 5043 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6515f5fe-fd1f-4786-8374-8af7b394831b-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.918043 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"6515f5fe-fd1f-4786-8374-8af7b394831b","Type":"ContainerDied","Data":"f1882dafe9f985ed07a09d6a7656e259f7269b73b1fb09e38994b7277a5d93a4"} Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.918096 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1882dafe9f985ed07a09d6a7656e259f7269b73b1fb09e38994b7277a5d93a4" Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.918280 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Nov 25 09:21:41 crc kubenswrapper[5043]: I1125 09:21:41.942204 5043 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.006174 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s01-single-test"] Nov 25 09:21:42 crc kubenswrapper[5043]: E1125 09:21:42.006798 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e28a808-9ff9-4071-bc1a-29bfff434465" containerName="extract-content" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.006812 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e28a808-9ff9-4071-bc1a-29bfff434465" containerName="extract-content" Nov 25 09:21:42 crc kubenswrapper[5043]: E1125 09:21:42.006834 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad5039f5-593b-4e1b-b92c-88afeaa8a8c1" containerName="extract-utilities" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.006860 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad5039f5-593b-4e1b-b92c-88afeaa8a8c1" containerName="extract-utilities" Nov 25 09:21:42 crc kubenswrapper[5043]: E1125 09:21:42.006878 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad5039f5-593b-4e1b-b92c-88afeaa8a8c1" containerName="registry-server" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.006884 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad5039f5-593b-4e1b-b92c-88afeaa8a8c1" containerName="registry-server" Nov 25 09:21:42 crc kubenswrapper[5043]: E1125 09:21:42.006899 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6515f5fe-fd1f-4786-8374-8af7b394831b" containerName="tempest-tests-tempest-tests-runner" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.006905 5043 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="6515f5fe-fd1f-4786-8374-8af7b394831b" containerName="tempest-tests-tempest-tests-runner" Nov 25 09:21:42 crc kubenswrapper[5043]: E1125 09:21:42.006946 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad5039f5-593b-4e1b-b92c-88afeaa8a8c1" containerName="extract-content" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.006953 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad5039f5-593b-4e1b-b92c-88afeaa8a8c1" containerName="extract-content" Nov 25 09:21:42 crc kubenswrapper[5043]: E1125 09:21:42.006966 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e28a808-9ff9-4071-bc1a-29bfff434465" containerName="registry-server" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.006973 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e28a808-9ff9-4071-bc1a-29bfff434465" containerName="registry-server" Nov 25 09:21:42 crc kubenswrapper[5043]: E1125 09:21:42.006988 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e28a808-9ff9-4071-bc1a-29bfff434465" containerName="extract-utilities" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.006996 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e28a808-9ff9-4071-bc1a-29bfff434465" containerName="extract-utilities" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.007241 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="6515f5fe-fd1f-4786-8374-8af7b394831b" containerName="tempest-tests-tempest-tests-runner" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.007274 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e28a808-9ff9-4071-bc1a-29bfff434465" containerName="registry-server" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.007281 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad5039f5-593b-4e1b-b92c-88afeaa8a8c1" containerName="registry-server" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.009404 5043 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.014443 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.015209 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s1" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.022980 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-2tk7t" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.024250 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s1" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.041154 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-test"] Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.044162 5043 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.149742 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.150144 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zmdk\" (UniqueName: \"kubernetes.io/projected/78329cf6-223a-4efb-9b86-1bc180f80cb1-kube-api-access-4zmdk\") pod \"tempest-tests-tempest-s01-single-test\" (UID: 
\"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.150182 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/78329cf6-223a-4efb-9b86-1bc180f80cb1-openstack-config\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.150210 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/78329cf6-223a-4efb-9b86-1bc180f80cb1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.150232 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/78329cf6-223a-4efb-9b86-1bc180f80cb1-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.150273 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/78329cf6-223a-4efb-9b86-1bc180f80cb1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.150337 5043 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78329cf6-223a-4efb-9b86-1bc180f80cb1-ssh-key\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.150377 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/78329cf6-223a-4efb-9b86-1bc180f80cb1-ceph\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.150450 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78329cf6-223a-4efb-9b86-1bc180f80cb1-config-data\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.150506 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/78329cf6-223a-4efb-9b86-1bc180f80cb1-ca-certs\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.253130 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zmdk\" (UniqueName: \"kubernetes.io/projected/78329cf6-223a-4efb-9b86-1bc180f80cb1-kube-api-access-4zmdk\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc 
kubenswrapper[5043]: I1125 09:21:42.253220 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/78329cf6-223a-4efb-9b86-1bc180f80cb1-openstack-config\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.253264 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/78329cf6-223a-4efb-9b86-1bc180f80cb1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.253302 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/78329cf6-223a-4efb-9b86-1bc180f80cb1-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.253335 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/78329cf6-223a-4efb-9b86-1bc180f80cb1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.253373 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78329cf6-223a-4efb-9b86-1bc180f80cb1-ssh-key\") pod \"tempest-tests-tempest-s01-single-test\" (UID: 
\"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.253430 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/78329cf6-223a-4efb-9b86-1bc180f80cb1-ceph\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.253488 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78329cf6-223a-4efb-9b86-1bc180f80cb1-config-data\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.253559 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/78329cf6-223a-4efb-9b86-1bc180f80cb1-ca-certs\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.253659 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.253962 5043 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") device mount 
path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.254806 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/78329cf6-223a-4efb-9b86-1bc180f80cb1-openstack-config\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.254829 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/78329cf6-223a-4efb-9b86-1bc180f80cb1-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.254997 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/78329cf6-223a-4efb-9b86-1bc180f80cb1-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.256567 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78329cf6-223a-4efb-9b86-1bc180f80cb1-config-data\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.258542 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/78329cf6-223a-4efb-9b86-1bc180f80cb1-ceph\") pod 
\"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.259544 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78329cf6-223a-4efb-9b86-1bc180f80cb1-ssh-key\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.260087 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/78329cf6-223a-4efb-9b86-1bc180f80cb1-ca-certs\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.261472 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/78329cf6-223a-4efb-9b86-1bc180f80cb1-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.282751 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zmdk\" (UniqueName: \"kubernetes.io/projected/78329cf6-223a-4efb-9b86-1bc180f80cb1-kube-api-access-4zmdk\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.306656 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod 
\"tempest-tests-tempest-s01-single-test\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.351764 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:21:42 crc kubenswrapper[5043]: I1125 09:21:42.986343 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-test"] Nov 25 09:21:43 crc kubenswrapper[5043]: I1125 09:21:43.938640 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"78329cf6-223a-4efb-9b86-1bc180f80cb1","Type":"ContainerStarted","Data":"9cf187772abc3515211ed6dd20f97e367aab1d4c474bb20eb667f11407a6e1dc"} Nov 25 09:21:44 crc kubenswrapper[5043]: I1125 09:21:44.950332 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"78329cf6-223a-4efb-9b86-1bc180f80cb1","Type":"ContainerStarted","Data":"b04cc7bb2a7c3028f8ff0f32d20b05f644b33eb2f6dd6203309910382cf1e8b8"} Nov 25 09:21:44 crc kubenswrapper[5043]: I1125 09:21:44.975185 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s01-single-test" podStartSLOduration=3.975162911 podStartE2EDuration="3.975162911s" podCreationTimestamp="2025-11-25 09:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 09:21:44.974589656 +0000 UTC m=+7569.142785377" watchObservedRunningTime="2025-11-25 09:21:44.975162911 +0000 UTC m=+7569.143358642" Nov 25 09:21:47 crc kubenswrapper[5043]: I1125 09:21:47.276378 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:21:47 crc kubenswrapper[5043]: I1125 09:21:47.278718 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:22:17 crc kubenswrapper[5043]: I1125 09:22:17.275964 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:22:17 crc kubenswrapper[5043]: I1125 09:22:17.276591 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:22:17 crc kubenswrapper[5043]: I1125 09:22:17.276664 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 09:22:17 crc kubenswrapper[5043]: I1125 09:22:17.277529 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b0afa848975da0f97b5a2ea64e94d54fb56333cb06f89ce4e23e4ab51a6ca191"} pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 09:22:17 crc kubenswrapper[5043]: I1125 09:22:17.277596 5043 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://b0afa848975da0f97b5a2ea64e94d54fb56333cb06f89ce4e23e4ab51a6ca191" gracePeriod=600 Nov 25 09:22:18 crc kubenswrapper[5043]: I1125 09:22:18.290416 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="b0afa848975da0f97b5a2ea64e94d54fb56333cb06f89ce4e23e4ab51a6ca191" exitCode=0 Nov 25 09:22:18 crc kubenswrapper[5043]: I1125 09:22:18.290460 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"b0afa848975da0f97b5a2ea64e94d54fb56333cb06f89ce4e23e4ab51a6ca191"} Nov 25 09:22:18 crc kubenswrapper[5043]: I1125 09:22:18.290879 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd"} Nov 25 09:22:18 crc kubenswrapper[5043]: I1125 09:22:18.290900 5043 scope.go:117] "RemoveContainer" containerID="c8efa5b0202c05b8f65e831abc6b0be7967bfb526575a8f57692efd4aec6de48" Nov 25 09:23:41 crc kubenswrapper[5043]: I1125 09:23:41.702480 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t8h78"] Nov 25 09:23:41 crc kubenswrapper[5043]: I1125 09:23:41.706689 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t8h78" Nov 25 09:23:41 crc kubenswrapper[5043]: I1125 09:23:41.720733 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t8h78"] Nov 25 09:23:41 crc kubenswrapper[5043]: I1125 09:23:41.781066 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c87dea4-008f-4f07-b16b-57c158ad1734-catalog-content\") pod \"certified-operators-t8h78\" (UID: \"6c87dea4-008f-4f07-b16b-57c158ad1734\") " pod="openshift-marketplace/certified-operators-t8h78" Nov 25 09:23:41 crc kubenswrapper[5043]: I1125 09:23:41.781126 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwswz\" (UniqueName: \"kubernetes.io/projected/6c87dea4-008f-4f07-b16b-57c158ad1734-kube-api-access-dwswz\") pod \"certified-operators-t8h78\" (UID: \"6c87dea4-008f-4f07-b16b-57c158ad1734\") " pod="openshift-marketplace/certified-operators-t8h78" Nov 25 09:23:41 crc kubenswrapper[5043]: I1125 09:23:41.781155 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c87dea4-008f-4f07-b16b-57c158ad1734-utilities\") pod \"certified-operators-t8h78\" (UID: \"6c87dea4-008f-4f07-b16b-57c158ad1734\") " pod="openshift-marketplace/certified-operators-t8h78" Nov 25 09:23:41 crc kubenswrapper[5043]: I1125 09:23:41.883480 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c87dea4-008f-4f07-b16b-57c158ad1734-catalog-content\") pod \"certified-operators-t8h78\" (UID: \"6c87dea4-008f-4f07-b16b-57c158ad1734\") " pod="openshift-marketplace/certified-operators-t8h78" Nov 25 09:23:41 crc kubenswrapper[5043]: I1125 09:23:41.883844 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dwswz\" (UniqueName: \"kubernetes.io/projected/6c87dea4-008f-4f07-b16b-57c158ad1734-kube-api-access-dwswz\") pod \"certified-operators-t8h78\" (UID: \"6c87dea4-008f-4f07-b16b-57c158ad1734\") " pod="openshift-marketplace/certified-operators-t8h78" Nov 25 09:23:41 crc kubenswrapper[5043]: I1125 09:23:41.883875 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c87dea4-008f-4f07-b16b-57c158ad1734-utilities\") pod \"certified-operators-t8h78\" (UID: \"6c87dea4-008f-4f07-b16b-57c158ad1734\") " pod="openshift-marketplace/certified-operators-t8h78" Nov 25 09:23:41 crc kubenswrapper[5043]: I1125 09:23:41.884177 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c87dea4-008f-4f07-b16b-57c158ad1734-catalog-content\") pod \"certified-operators-t8h78\" (UID: \"6c87dea4-008f-4f07-b16b-57c158ad1734\") " pod="openshift-marketplace/certified-operators-t8h78" Nov 25 09:23:41 crc kubenswrapper[5043]: I1125 09:23:41.884300 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c87dea4-008f-4f07-b16b-57c158ad1734-utilities\") pod \"certified-operators-t8h78\" (UID: \"6c87dea4-008f-4f07-b16b-57c158ad1734\") " pod="openshift-marketplace/certified-operators-t8h78" Nov 25 09:23:41 crc kubenswrapper[5043]: I1125 09:23:41.903516 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwswz\" (UniqueName: \"kubernetes.io/projected/6c87dea4-008f-4f07-b16b-57c158ad1734-kube-api-access-dwswz\") pod \"certified-operators-t8h78\" (UID: \"6c87dea4-008f-4f07-b16b-57c158ad1734\") " pod="openshift-marketplace/certified-operators-t8h78" Nov 25 09:23:42 crc kubenswrapper[5043]: I1125 09:23:42.034386 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t8h78" Nov 25 09:23:42 crc kubenswrapper[5043]: I1125 09:23:42.566676 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t8h78"] Nov 25 09:23:43 crc kubenswrapper[5043]: I1125 09:23:43.218674 5043 generic.go:334] "Generic (PLEG): container finished" podID="6c87dea4-008f-4f07-b16b-57c158ad1734" containerID="5f48e2c7ffe6a17fb6d64a940868e080e22a1b3c6830465b8530fc99418d28d4" exitCode=0 Nov 25 09:23:43 crc kubenswrapper[5043]: I1125 09:23:43.218766 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8h78" event={"ID":"6c87dea4-008f-4f07-b16b-57c158ad1734","Type":"ContainerDied","Data":"5f48e2c7ffe6a17fb6d64a940868e080e22a1b3c6830465b8530fc99418d28d4"} Nov 25 09:23:43 crc kubenswrapper[5043]: I1125 09:23:43.219089 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8h78" event={"ID":"6c87dea4-008f-4f07-b16b-57c158ad1734","Type":"ContainerStarted","Data":"cb06f3e79cef43e0e11dc9b46b070d4618ddfe7206cd06c387a209787c43fe09"} Nov 25 09:23:44 crc kubenswrapper[5043]: I1125 09:23:44.234400 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8h78" event={"ID":"6c87dea4-008f-4f07-b16b-57c158ad1734","Type":"ContainerStarted","Data":"13fb6e44ba6ad0da62cb7c1e5243fde61d0a61a13ed030dd69f1066b2533baec"} Nov 25 09:23:45 crc kubenswrapper[5043]: I1125 09:23:45.245751 5043 generic.go:334] "Generic (PLEG): container finished" podID="6c87dea4-008f-4f07-b16b-57c158ad1734" containerID="13fb6e44ba6ad0da62cb7c1e5243fde61d0a61a13ed030dd69f1066b2533baec" exitCode=0 Nov 25 09:23:45 crc kubenswrapper[5043]: I1125 09:23:45.245869 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8h78" 
event={"ID":"6c87dea4-008f-4f07-b16b-57c158ad1734","Type":"ContainerDied","Data":"13fb6e44ba6ad0da62cb7c1e5243fde61d0a61a13ed030dd69f1066b2533baec"} Nov 25 09:23:46 crc kubenswrapper[5043]: I1125 09:23:46.256283 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8h78" event={"ID":"6c87dea4-008f-4f07-b16b-57c158ad1734","Type":"ContainerStarted","Data":"f8979b58decec52ddaa79869220ac4c8c9ef19114d765fcc38f9b662d2d79fbc"} Nov 25 09:23:46 crc kubenswrapper[5043]: I1125 09:23:46.291303 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t8h78" podStartSLOduration=2.8389323600000003 podStartE2EDuration="5.291282606s" podCreationTimestamp="2025-11-25 09:23:41 +0000 UTC" firstStartedPulling="2025-11-25 09:23:43.220731748 +0000 UTC m=+7687.388927469" lastFinishedPulling="2025-11-25 09:23:45.673081994 +0000 UTC m=+7689.841277715" observedRunningTime="2025-11-25 09:23:46.278233973 +0000 UTC m=+7690.446429724" watchObservedRunningTime="2025-11-25 09:23:46.291282606 +0000 UTC m=+7690.459478327" Nov 25 09:23:52 crc kubenswrapper[5043]: I1125 09:23:52.035460 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t8h78" Nov 25 09:23:52 crc kubenswrapper[5043]: I1125 09:23:52.036080 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t8h78" Nov 25 09:23:52 crc kubenswrapper[5043]: I1125 09:23:52.089504 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t8h78" Nov 25 09:23:52 crc kubenswrapper[5043]: I1125 09:23:52.365561 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t8h78" Nov 25 09:23:52 crc kubenswrapper[5043]: I1125 09:23:52.423770 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-t8h78"] Nov 25 09:23:54 crc kubenswrapper[5043]: I1125 09:23:54.335824 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t8h78" podUID="6c87dea4-008f-4f07-b16b-57c158ad1734" containerName="registry-server" containerID="cri-o://f8979b58decec52ddaa79869220ac4c8c9ef19114d765fcc38f9b662d2d79fbc" gracePeriod=2 Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:54.810162 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8h78" Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:54.964500 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwswz\" (UniqueName: \"kubernetes.io/projected/6c87dea4-008f-4f07-b16b-57c158ad1734-kube-api-access-dwswz\") pod \"6c87dea4-008f-4f07-b16b-57c158ad1734\" (UID: \"6c87dea4-008f-4f07-b16b-57c158ad1734\") " Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:54.964816 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c87dea4-008f-4f07-b16b-57c158ad1734-utilities\") pod \"6c87dea4-008f-4f07-b16b-57c158ad1734\" (UID: \"6c87dea4-008f-4f07-b16b-57c158ad1734\") " Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:54.965012 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c87dea4-008f-4f07-b16b-57c158ad1734-catalog-content\") pod \"6c87dea4-008f-4f07-b16b-57c158ad1734\" (UID: \"6c87dea4-008f-4f07-b16b-57c158ad1734\") " Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:54.965575 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c87dea4-008f-4f07-b16b-57c158ad1734-utilities" (OuterVolumeSpecName: "utilities") pod "6c87dea4-008f-4f07-b16b-57c158ad1734" (UID: 
"6c87dea4-008f-4f07-b16b-57c158ad1734"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:54.971054 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c87dea4-008f-4f07-b16b-57c158ad1734-kube-api-access-dwswz" (OuterVolumeSpecName: "kube-api-access-dwswz") pod "6c87dea4-008f-4f07-b16b-57c158ad1734" (UID: "6c87dea4-008f-4f07-b16b-57c158ad1734"). InnerVolumeSpecName "kube-api-access-dwswz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:55.017341 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c87dea4-008f-4f07-b16b-57c158ad1734-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c87dea4-008f-4f07-b16b-57c158ad1734" (UID: "6c87dea4-008f-4f07-b16b-57c158ad1734"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:55.066977 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c87dea4-008f-4f07-b16b-57c158ad1734-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:55.067002 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwswz\" (UniqueName: \"kubernetes.io/projected/6c87dea4-008f-4f07-b16b-57c158ad1734-kube-api-access-dwswz\") on node \"crc\" DevicePath \"\"" Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:55.067095 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c87dea4-008f-4f07-b16b-57c158ad1734-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:55.347809 5043 generic.go:334] "Generic (PLEG): container finished" 
podID="6c87dea4-008f-4f07-b16b-57c158ad1734" containerID="f8979b58decec52ddaa79869220ac4c8c9ef19114d765fcc38f9b662d2d79fbc" exitCode=0 Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:55.348029 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8h78" event={"ID":"6c87dea4-008f-4f07-b16b-57c158ad1734","Type":"ContainerDied","Data":"f8979b58decec52ddaa79869220ac4c8c9ef19114d765fcc38f9b662d2d79fbc"} Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:55.348126 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8h78" Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:55.348146 5043 scope.go:117] "RemoveContainer" containerID="f8979b58decec52ddaa79869220ac4c8c9ef19114d765fcc38f9b662d2d79fbc" Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:55.348129 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8h78" event={"ID":"6c87dea4-008f-4f07-b16b-57c158ad1734","Type":"ContainerDied","Data":"cb06f3e79cef43e0e11dc9b46b070d4618ddfe7206cd06c387a209787c43fe09"} Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:55.379856 5043 scope.go:117] "RemoveContainer" containerID="13fb6e44ba6ad0da62cb7c1e5243fde61d0a61a13ed030dd69f1066b2533baec" Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:55.394800 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t8h78"] Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:55.406527 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t8h78"] Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:55.416031 5043 scope.go:117] "RemoveContainer" containerID="5f48e2c7ffe6a17fb6d64a940868e080e22a1b3c6830465b8530fc99418d28d4" Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:55.440219 5043 scope.go:117] "RemoveContainer" 
containerID="f8979b58decec52ddaa79869220ac4c8c9ef19114d765fcc38f9b662d2d79fbc" Nov 25 09:23:55 crc kubenswrapper[5043]: E1125 09:23:55.440680 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8979b58decec52ddaa79869220ac4c8c9ef19114d765fcc38f9b662d2d79fbc\": container with ID starting with f8979b58decec52ddaa79869220ac4c8c9ef19114d765fcc38f9b662d2d79fbc not found: ID does not exist" containerID="f8979b58decec52ddaa79869220ac4c8c9ef19114d765fcc38f9b662d2d79fbc" Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:55.440720 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8979b58decec52ddaa79869220ac4c8c9ef19114d765fcc38f9b662d2d79fbc"} err="failed to get container status \"f8979b58decec52ddaa79869220ac4c8c9ef19114d765fcc38f9b662d2d79fbc\": rpc error: code = NotFound desc = could not find container \"f8979b58decec52ddaa79869220ac4c8c9ef19114d765fcc38f9b662d2d79fbc\": container with ID starting with f8979b58decec52ddaa79869220ac4c8c9ef19114d765fcc38f9b662d2d79fbc not found: ID does not exist" Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:55.440745 5043 scope.go:117] "RemoveContainer" containerID="13fb6e44ba6ad0da62cb7c1e5243fde61d0a61a13ed030dd69f1066b2533baec" Nov 25 09:23:55 crc kubenswrapper[5043]: E1125 09:23:55.441100 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13fb6e44ba6ad0da62cb7c1e5243fde61d0a61a13ed030dd69f1066b2533baec\": container with ID starting with 13fb6e44ba6ad0da62cb7c1e5243fde61d0a61a13ed030dd69f1066b2533baec not found: ID does not exist" containerID="13fb6e44ba6ad0da62cb7c1e5243fde61d0a61a13ed030dd69f1066b2533baec" Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:55.441135 5043 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"13fb6e44ba6ad0da62cb7c1e5243fde61d0a61a13ed030dd69f1066b2533baec"} err="failed to get container status \"13fb6e44ba6ad0da62cb7c1e5243fde61d0a61a13ed030dd69f1066b2533baec\": rpc error: code = NotFound desc = could not find container \"13fb6e44ba6ad0da62cb7c1e5243fde61d0a61a13ed030dd69f1066b2533baec\": container with ID starting with 13fb6e44ba6ad0da62cb7c1e5243fde61d0a61a13ed030dd69f1066b2533baec not found: ID does not exist" Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:55.441159 5043 scope.go:117] "RemoveContainer" containerID="5f48e2c7ffe6a17fb6d64a940868e080e22a1b3c6830465b8530fc99418d28d4" Nov 25 09:23:55 crc kubenswrapper[5043]: E1125 09:23:55.441431 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f48e2c7ffe6a17fb6d64a940868e080e22a1b3c6830465b8530fc99418d28d4\": container with ID starting with 5f48e2c7ffe6a17fb6d64a940868e080e22a1b3c6830465b8530fc99418d28d4 not found: ID does not exist" containerID="5f48e2c7ffe6a17fb6d64a940868e080e22a1b3c6830465b8530fc99418d28d4" Nov 25 09:23:55 crc kubenswrapper[5043]: I1125 09:23:55.441465 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f48e2c7ffe6a17fb6d64a940868e080e22a1b3c6830465b8530fc99418d28d4"} err="failed to get container status \"5f48e2c7ffe6a17fb6d64a940868e080e22a1b3c6830465b8530fc99418d28d4\": rpc error: code = NotFound desc = could not find container \"5f48e2c7ffe6a17fb6d64a940868e080e22a1b3c6830465b8530fc99418d28d4\": container with ID starting with 5f48e2c7ffe6a17fb6d64a940868e080e22a1b3c6830465b8530fc99418d28d4 not found: ID does not exist" Nov 25 09:23:56 crc kubenswrapper[5043]: I1125 09:23:56.982088 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c87dea4-008f-4f07-b16b-57c158ad1734" path="/var/lib/kubelet/pods/6c87dea4-008f-4f07-b16b-57c158ad1734/volumes" Nov 25 09:24:17 crc kubenswrapper[5043]: I1125 
09:24:17.276027 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:24:17 crc kubenswrapper[5043]: I1125 09:24:17.276801 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:24:47 crc kubenswrapper[5043]: I1125 09:24:47.276804 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:24:47 crc kubenswrapper[5043]: I1125 09:24:47.277395 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:25:17 crc kubenswrapper[5043]: I1125 09:25:17.276300 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:25:17 crc kubenswrapper[5043]: I1125 09:25:17.277641 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" 
podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:25:17 crc kubenswrapper[5043]: I1125 09:25:17.277747 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 09:25:17 crc kubenswrapper[5043]: I1125 09:25:17.278480 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd"} pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 09:25:17 crc kubenswrapper[5043]: I1125 09:25:17.278624 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd" gracePeriod=600 Nov 25 09:25:17 crc kubenswrapper[5043]: E1125 09:25:17.945792 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:25:18 crc kubenswrapper[5043]: I1125 09:25:18.145498 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" 
event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd"} Nov 25 09:25:18 crc kubenswrapper[5043]: I1125 09:25:18.145558 5043 scope.go:117] "RemoveContainer" containerID="b0afa848975da0f97b5a2ea64e94d54fb56333cb06f89ce4e23e4ab51a6ca191" Nov 25 09:25:18 crc kubenswrapper[5043]: I1125 09:25:18.145453 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd" exitCode=0 Nov 25 09:25:18 crc kubenswrapper[5043]: I1125 09:25:18.148230 5043 scope.go:117] "RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd" Nov 25 09:25:18 crc kubenswrapper[5043]: E1125 09:25:18.148820 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:25:30 crc kubenswrapper[5043]: I1125 09:25:30.962495 5043 scope.go:117] "RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd" Nov 25 09:25:30 crc kubenswrapper[5043]: E1125 09:25:30.963289 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:25:41 crc kubenswrapper[5043]: I1125 09:25:41.962560 5043 
scope.go:117] "RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd" Nov 25 09:25:41 crc kubenswrapper[5043]: E1125 09:25:41.963322 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:25:55 crc kubenswrapper[5043]: I1125 09:25:55.496748 5043 scope.go:117] "RemoveContainer" containerID="c515b63d83ae71b6c33b4581ecce69d8d76c9da39429fc3029f64311b20af216" Nov 25 09:25:55 crc kubenswrapper[5043]: I1125 09:25:55.522256 5043 scope.go:117] "RemoveContainer" containerID="1425082bb450d18b6e03aecf2b9a050d770aa2e33b264698ad67917b4629d47f" Nov 25 09:25:55 crc kubenswrapper[5043]: I1125 09:25:55.568332 5043 scope.go:117] "RemoveContainer" containerID="2541fe7063557920a9e27035c0f5d7cda669ba71161f080f2af5aac2c94e035d" Nov 25 09:25:55 crc kubenswrapper[5043]: I1125 09:25:55.962955 5043 scope.go:117] "RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd" Nov 25 09:25:55 crc kubenswrapper[5043]: E1125 09:25:55.963568 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:26:10 crc kubenswrapper[5043]: I1125 09:26:10.962326 5043 scope.go:117] "RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd" Nov 25 
09:26:10 crc kubenswrapper[5043]: E1125 09:26:10.962937 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:26:21 crc kubenswrapper[5043]: I1125 09:26:21.963039 5043 scope.go:117] "RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd" Nov 25 09:26:21 crc kubenswrapper[5043]: E1125 09:26:21.963821 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:26:33 crc kubenswrapper[5043]: I1125 09:26:33.963169 5043 scope.go:117] "RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd" Nov 25 09:26:33 crc kubenswrapper[5043]: E1125 09:26:33.964031 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:26:41 crc kubenswrapper[5043]: I1125 09:26:41.393286 5043 generic.go:334] "Generic (PLEG): container finished" podID="78329cf6-223a-4efb-9b86-1bc180f80cb1" 
containerID="b04cc7bb2a7c3028f8ff0f32d20b05f644b33eb2f6dd6203309910382cf1e8b8" exitCode=0 Nov 25 09:26:41 crc kubenswrapper[5043]: I1125 09:26:41.393377 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"78329cf6-223a-4efb-9b86-1bc180f80cb1","Type":"ContainerDied","Data":"b04cc7bb2a7c3028f8ff0f32d20b05f644b33eb2f6dd6203309910382cf1e8b8"} Nov 25 09:26:42 crc kubenswrapper[5043]: I1125 09:26:42.905824 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:26:42 crc kubenswrapper[5043]: I1125 09:26:42.978233 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78329cf6-223a-4efb-9b86-1bc180f80cb1-ssh-key\") pod \"78329cf6-223a-4efb-9b86-1bc180f80cb1\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " Nov 25 09:26:42 crc kubenswrapper[5043]: I1125 09:26:42.978309 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/78329cf6-223a-4efb-9b86-1bc180f80cb1-test-operator-ephemeral-temporary\") pod \"78329cf6-223a-4efb-9b86-1bc180f80cb1\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " Nov 25 09:26:42 crc kubenswrapper[5043]: I1125 09:26:42.978376 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"78329cf6-223a-4efb-9b86-1bc180f80cb1\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " Nov 25 09:26:42 crc kubenswrapper[5043]: I1125 09:26:42.978454 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/78329cf6-223a-4efb-9b86-1bc180f80cb1-ceph\") pod \"78329cf6-223a-4efb-9b86-1bc180f80cb1\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " 
Nov 25 09:26:42 crc kubenswrapper[5043]: I1125 09:26:42.978493 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zmdk\" (UniqueName: \"kubernetes.io/projected/78329cf6-223a-4efb-9b86-1bc180f80cb1-kube-api-access-4zmdk\") pod \"78329cf6-223a-4efb-9b86-1bc180f80cb1\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " Nov 25 09:26:42 crc kubenswrapper[5043]: I1125 09:26:42.978538 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/78329cf6-223a-4efb-9b86-1bc180f80cb1-ca-certs\") pod \"78329cf6-223a-4efb-9b86-1bc180f80cb1\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " Nov 25 09:26:42 crc kubenswrapper[5043]: I1125 09:26:42.978593 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/78329cf6-223a-4efb-9b86-1bc180f80cb1-openstack-config\") pod \"78329cf6-223a-4efb-9b86-1bc180f80cb1\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " Nov 25 09:26:42 crc kubenswrapper[5043]: I1125 09:26:42.978638 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/78329cf6-223a-4efb-9b86-1bc180f80cb1-test-operator-ephemeral-workdir\") pod \"78329cf6-223a-4efb-9b86-1bc180f80cb1\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " Nov 25 09:26:42 crc kubenswrapper[5043]: I1125 09:26:42.978692 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/78329cf6-223a-4efb-9b86-1bc180f80cb1-openstack-config-secret\") pod \"78329cf6-223a-4efb-9b86-1bc180f80cb1\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " Nov 25 09:26:42 crc kubenswrapper[5043]: I1125 09:26:42.978798 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/78329cf6-223a-4efb-9b86-1bc180f80cb1-config-data\") pod \"78329cf6-223a-4efb-9b86-1bc180f80cb1\" (UID: \"78329cf6-223a-4efb-9b86-1bc180f80cb1\") " Nov 25 09:26:42 crc kubenswrapper[5043]: I1125 09:26:42.978869 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78329cf6-223a-4efb-9b86-1bc180f80cb1-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "78329cf6-223a-4efb-9b86-1bc180f80cb1" (UID: "78329cf6-223a-4efb-9b86-1bc180f80cb1"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:26:42 crc kubenswrapper[5043]: I1125 09:26:42.979289 5043 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/78329cf6-223a-4efb-9b86-1bc180f80cb1-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 25 09:26:42 crc kubenswrapper[5043]: I1125 09:26:42.980010 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78329cf6-223a-4efb-9b86-1bc180f80cb1-config-data" (OuterVolumeSpecName: "config-data") pod "78329cf6-223a-4efb-9b86-1bc180f80cb1" (UID: "78329cf6-223a-4efb-9b86-1bc180f80cb1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:26:42 crc kubenswrapper[5043]: I1125 09:26:42.985462 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "78329cf6-223a-4efb-9b86-1bc180f80cb1" (UID: "78329cf6-223a-4efb-9b86-1bc180f80cb1"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 09:26:42 crc kubenswrapper[5043]: I1125 09:26:42.985905 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78329cf6-223a-4efb-9b86-1bc180f80cb1-ceph" (OuterVolumeSpecName: "ceph") pod "78329cf6-223a-4efb-9b86-1bc180f80cb1" (UID: "78329cf6-223a-4efb-9b86-1bc180f80cb1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:26:42 crc kubenswrapper[5043]: I1125 09:26:42.986874 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78329cf6-223a-4efb-9b86-1bc180f80cb1-kube-api-access-4zmdk" (OuterVolumeSpecName: "kube-api-access-4zmdk") pod "78329cf6-223a-4efb-9b86-1bc180f80cb1" (UID: "78329cf6-223a-4efb-9b86-1bc180f80cb1"). InnerVolumeSpecName "kube-api-access-4zmdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:26:42 crc kubenswrapper[5043]: I1125 09:26:42.986988 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78329cf6-223a-4efb-9b86-1bc180f80cb1-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "78329cf6-223a-4efb-9b86-1bc180f80cb1" (UID: "78329cf6-223a-4efb-9b86-1bc180f80cb1"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:26:43 crc kubenswrapper[5043]: I1125 09:26:43.013801 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78329cf6-223a-4efb-9b86-1bc180f80cb1-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "78329cf6-223a-4efb-9b86-1bc180f80cb1" (UID: "78329cf6-223a-4efb-9b86-1bc180f80cb1"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:26:43 crc kubenswrapper[5043]: I1125 09:26:43.023304 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78329cf6-223a-4efb-9b86-1bc180f80cb1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "78329cf6-223a-4efb-9b86-1bc180f80cb1" (UID: "78329cf6-223a-4efb-9b86-1bc180f80cb1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:26:43 crc kubenswrapper[5043]: I1125 09:26:43.031384 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78329cf6-223a-4efb-9b86-1bc180f80cb1-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "78329cf6-223a-4efb-9b86-1bc180f80cb1" (UID: "78329cf6-223a-4efb-9b86-1bc180f80cb1"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:26:43 crc kubenswrapper[5043]: I1125 09:26:43.032362 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78329cf6-223a-4efb-9b86-1bc180f80cb1-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "78329cf6-223a-4efb-9b86-1bc180f80cb1" (UID: "78329cf6-223a-4efb-9b86-1bc180f80cb1"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:26:43 crc kubenswrapper[5043]: I1125 09:26:43.081502 5043 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/78329cf6-223a-4efb-9b86-1bc180f80cb1-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 09:26:43 crc kubenswrapper[5043]: I1125 09:26:43.081572 5043 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78329cf6-223a-4efb-9b86-1bc180f80cb1-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 09:26:43 crc kubenswrapper[5043]: I1125 09:26:43.081690 5043 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 25 09:26:43 crc kubenswrapper[5043]: I1125 09:26:43.081708 5043 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/78329cf6-223a-4efb-9b86-1bc180f80cb1-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 09:26:43 crc kubenswrapper[5043]: I1125 09:26:43.081724 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zmdk\" (UniqueName: \"kubernetes.io/projected/78329cf6-223a-4efb-9b86-1bc180f80cb1-kube-api-access-4zmdk\") on node \"crc\" DevicePath \"\"" Nov 25 09:26:43 crc kubenswrapper[5043]: I1125 09:26:43.081745 5043 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/78329cf6-223a-4efb-9b86-1bc180f80cb1-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 25 09:26:43 crc kubenswrapper[5043]: I1125 09:26:43.081761 5043 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/78329cf6-223a-4efb-9b86-1bc180f80cb1-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 25 09:26:43 crc kubenswrapper[5043]: I1125 09:26:43.081807 5043 reconciler_common.go:293] "Volume detached for volume 
\"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/78329cf6-223a-4efb-9b86-1bc180f80cb1-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 25 09:26:43 crc kubenswrapper[5043]: I1125 09:26:43.081820 5043 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/78329cf6-223a-4efb-9b86-1bc180f80cb1-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 25 09:26:43 crc kubenswrapper[5043]: I1125 09:26:43.124533 5043 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 25 09:26:43 crc kubenswrapper[5043]: I1125 09:26:43.184868 5043 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 25 09:26:43 crc kubenswrapper[5043]: I1125 09:26:43.415912 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"78329cf6-223a-4efb-9b86-1bc180f80cb1","Type":"ContainerDied","Data":"9cf187772abc3515211ed6dd20f97e367aab1d4c474bb20eb667f11407a6e1dc"} Nov 25 09:26:43 crc kubenswrapper[5043]: I1125 09:26:43.416316 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cf187772abc3515211ed6dd20f97e367aab1d4c474bb20eb667f11407a6e1dc" Nov 25 09:26:43 crc kubenswrapper[5043]: I1125 09:26:43.416007 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Nov 25 09:26:45 crc kubenswrapper[5043]: I1125 09:26:45.963999 5043 scope.go:117] "RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd" Nov 25 09:26:45 crc kubenswrapper[5043]: E1125 09:26:45.964695 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:26:46 crc kubenswrapper[5043]: I1125 09:26:46.564296 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 25 09:26:46 crc kubenswrapper[5043]: E1125 09:26:46.565086 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c87dea4-008f-4f07-b16b-57c158ad1734" containerName="registry-server" Nov 25 09:26:46 crc kubenswrapper[5043]: I1125 09:26:46.565108 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c87dea4-008f-4f07-b16b-57c158ad1734" containerName="registry-server" Nov 25 09:26:46 crc kubenswrapper[5043]: E1125 09:26:46.565125 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c87dea4-008f-4f07-b16b-57c158ad1734" containerName="extract-utilities" Nov 25 09:26:46 crc kubenswrapper[5043]: I1125 09:26:46.565134 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c87dea4-008f-4f07-b16b-57c158ad1734" containerName="extract-utilities" Nov 25 09:26:46 crc kubenswrapper[5043]: E1125 09:26:46.565165 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c87dea4-008f-4f07-b16b-57c158ad1734" containerName="extract-content" Nov 25 09:26:46 crc kubenswrapper[5043]: I1125 
09:26:46.565172 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c87dea4-008f-4f07-b16b-57c158ad1734" containerName="extract-content" Nov 25 09:26:46 crc kubenswrapper[5043]: E1125 09:26:46.565191 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78329cf6-223a-4efb-9b86-1bc180f80cb1" containerName="tempest-tests-tempest-tests-runner" Nov 25 09:26:46 crc kubenswrapper[5043]: I1125 09:26:46.565198 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="78329cf6-223a-4efb-9b86-1bc180f80cb1" containerName="tempest-tests-tempest-tests-runner" Nov 25 09:26:46 crc kubenswrapper[5043]: I1125 09:26:46.565667 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="78329cf6-223a-4efb-9b86-1bc180f80cb1" containerName="tempest-tests-tempest-tests-runner" Nov 25 09:26:46 crc kubenswrapper[5043]: I1125 09:26:46.565718 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c87dea4-008f-4f07-b16b-57c158ad1734" containerName="registry-server" Nov 25 09:26:46 crc kubenswrapper[5043]: I1125 09:26:46.566397 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 09:26:46 crc kubenswrapper[5043]: I1125 09:26:46.570593 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-2tk7t" Nov 25 09:26:46 crc kubenswrapper[5043]: I1125 09:26:46.581851 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 25 09:26:46 crc kubenswrapper[5043]: I1125 09:26:46.688802 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"64c84afc-13c0-4c4e-a82d-e3c9f7014388\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 09:26:46 crc kubenswrapper[5043]: I1125 09:26:46.689254 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5srd8\" (UniqueName: \"kubernetes.io/projected/64c84afc-13c0-4c4e-a82d-e3c9f7014388-kube-api-access-5srd8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"64c84afc-13c0-4c4e-a82d-e3c9f7014388\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 09:26:46 crc kubenswrapper[5043]: I1125 09:26:46.791052 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"64c84afc-13c0-4c4e-a82d-e3c9f7014388\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 09:26:46 crc kubenswrapper[5043]: I1125 09:26:46.791201 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5srd8\" (UniqueName: 
\"kubernetes.io/projected/64c84afc-13c0-4c4e-a82d-e3c9f7014388-kube-api-access-5srd8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"64c84afc-13c0-4c4e-a82d-e3c9f7014388\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 09:26:46 crc kubenswrapper[5043]: I1125 09:26:46.791561 5043 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"64c84afc-13c0-4c4e-a82d-e3c9f7014388\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 09:26:46 crc kubenswrapper[5043]: I1125 09:26:46.811653 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5srd8\" (UniqueName: \"kubernetes.io/projected/64c84afc-13c0-4c4e-a82d-e3c9f7014388-kube-api-access-5srd8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"64c84afc-13c0-4c4e-a82d-e3c9f7014388\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 09:26:46 crc kubenswrapper[5043]: I1125 09:26:46.818429 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"64c84afc-13c0-4c4e-a82d-e3c9f7014388\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 09:26:46 crc kubenswrapper[5043]: I1125 09:26:46.892828 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 09:26:47 crc kubenswrapper[5043]: I1125 09:26:47.385752 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 25 09:26:47 crc kubenswrapper[5043]: W1125 09:26:47.396944 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64c84afc_13c0_4c4e_a82d_e3c9f7014388.slice/crio-057d7b67ce4ac59420b8829a99c206fd1c7e43615e65c05bb461eb236a5d6da9 WatchSource:0}: Error finding container 057d7b67ce4ac59420b8829a99c206fd1c7e43615e65c05bb461eb236a5d6da9: Status 404 returned error can't find the container with id 057d7b67ce4ac59420b8829a99c206fd1c7e43615e65c05bb461eb236a5d6da9 Nov 25 09:26:47 crc kubenswrapper[5043]: I1125 09:26:47.399467 5043 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 09:26:47 crc kubenswrapper[5043]: I1125 09:26:47.453226 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"64c84afc-13c0-4c4e-a82d-e3c9f7014388","Type":"ContainerStarted","Data":"057d7b67ce4ac59420b8829a99c206fd1c7e43615e65c05bb461eb236a5d6da9"} Nov 25 09:26:52 crc kubenswrapper[5043]: I1125 09:26:52.532884 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"64c84afc-13c0-4c4e-a82d-e3c9f7014388","Type":"ContainerStarted","Data":"ea253e685f32e7876d3794eee69856195f6ba32756259e2999010e76398db526"} Nov 25 09:26:52 crc kubenswrapper[5043]: I1125 09:26:52.563149 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.6024865249999998 podStartE2EDuration="6.563123658s" podCreationTimestamp="2025-11-25 09:26:46 +0000 UTC" 
firstStartedPulling="2025-11-25 09:26:47.399285457 +0000 UTC m=+7871.567481178" lastFinishedPulling="2025-11-25 09:26:51.35992259 +0000 UTC m=+7875.528118311" observedRunningTime="2025-11-25 09:26:52.547649 +0000 UTC m=+7876.715844811" watchObservedRunningTime="2025-11-25 09:26:52.563123658 +0000 UTC m=+7876.731319379" Nov 25 09:26:59 crc kubenswrapper[5043]: I1125 09:26:59.963310 5043 scope.go:117] "RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd" Nov 25 09:26:59 crc kubenswrapper[5043]: E1125 09:26:59.964432 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.335693 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tobiko-tests-tobiko-s00-podified-functional"] Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.337563 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.341291 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"test-operator-clouds-config" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.341372 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tobiko-tests-tobikotobiko-config" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.341430 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tobiko-tests-tobikotobiko-public-key" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.341497 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tobiko-tests-tobikotobiko-private-key" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.342596 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"tobiko-secret" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.350278 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s00-podified-functional"] Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.416112 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/13b66c56-5aa0-42ff-a574-26ec881f2e64-tobiko-public-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.416327 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/13b66c56-5aa0-42ff-a574-26ec881f2e64-ceph\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " 
pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.416397 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbhmb\" (UniqueName: \"kubernetes.io/projected/13b66c56-5aa0-42ff-a574-26ec881f2e64-kube-api-access-nbhmb\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.416632 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/13b66c56-5aa0-42ff-a574-26ec881f2e64-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.416757 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/13b66c56-5aa0-42ff-a574-26ec881f2e64-kubeconfig\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.416895 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.417024 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/13b66c56-5aa0-42ff-a574-26ec881f2e64-openstack-config-secret\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.417102 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/13b66c56-5aa0-42ff-a574-26ec881f2e64-tobiko-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.417159 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/13b66c56-5aa0-42ff-a574-26ec881f2e64-tobiko-private-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.417207 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/13b66c56-5aa0-42ff-a574-26ec881f2e64-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.417354 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/13b66c56-5aa0-42ff-a574-26ec881f2e64-test-operator-ephemeral-workdir\") pod 
\"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.417397 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/13b66c56-5aa0-42ff-a574-26ec881f2e64-ca-certs\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.518928 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/13b66c56-5aa0-42ff-a574-26ec881f2e64-kubeconfig\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.519010 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.519064 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/13b66c56-5aa0-42ff-a574-26ec881f2e64-openstack-config-secret\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.519100 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-config\" (UniqueName: 
\"kubernetes.io/configmap/13b66c56-5aa0-42ff-a574-26ec881f2e64-tobiko-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.519126 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/13b66c56-5aa0-42ff-a574-26ec881f2e64-tobiko-private-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.519154 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/13b66c56-5aa0-42ff-a574-26ec881f2e64-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.519209 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/13b66c56-5aa0-42ff-a574-26ec881f2e64-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.519236 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/13b66c56-5aa0-42ff-a574-26ec881f2e64-ca-certs\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 
crc kubenswrapper[5043]: I1125 09:27:11.519291 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/13b66c56-5aa0-42ff-a574-26ec881f2e64-tobiko-public-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.519331 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/13b66c56-5aa0-42ff-a574-26ec881f2e64-ceph\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.519358 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbhmb\" (UniqueName: \"kubernetes.io/projected/13b66c56-5aa0-42ff-a574-26ec881f2e64-kube-api-access-nbhmb\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.519401 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/13b66c56-5aa0-42ff-a574-26ec881f2e64-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.519662 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/13b66c56-5aa0-42ff-a574-26ec881f2e64-test-operator-ephemeral-temporary\") pod 
\"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.519680 5043 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.519846 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/13b66c56-5aa0-42ff-a574-26ec881f2e64-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.520814 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/13b66c56-5aa0-42ff-a574-26ec881f2e64-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.520815 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/13b66c56-5aa0-42ff-a574-26ec881f2e64-tobiko-private-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.521154 5043 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/13b66c56-5aa0-42ff-a574-26ec881f2e64-tobiko-public-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.522346 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/13b66c56-5aa0-42ff-a574-26ec881f2e64-tobiko-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.525164 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/13b66c56-5aa0-42ff-a574-26ec881f2e64-kubeconfig\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.525698 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/13b66c56-5aa0-42ff-a574-26ec881f2e64-ceph\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.528684 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/13b66c56-5aa0-42ff-a574-26ec881f2e64-openstack-config-secret\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 
09:27:11.528893 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/13b66c56-5aa0-42ff-a574-26ec881f2e64-ca-certs\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.543246 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbhmb\" (UniqueName: \"kubernetes.io/projected/13b66c56-5aa0-42ff-a574-26ec881f2e64-kube-api-access-nbhmb\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.547381 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Nov 25 09:27:11 crc kubenswrapper[5043]: I1125 09:27:11.661471 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional"
Nov 25 09:27:12 crc kubenswrapper[5043]: I1125 09:27:12.260195 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s00-podified-functional"]
Nov 25 09:27:12 crc kubenswrapper[5043]: I1125 09:27:12.750072 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"13b66c56-5aa0-42ff-a574-26ec881f2e64","Type":"ContainerStarted","Data":"b343c1262012411deea403e19e9e10891ab750fcdfb298797f8155235bee66ad"}
Nov 25 09:27:13 crc kubenswrapper[5043]: I1125 09:27:13.963157 5043 scope.go:117] "RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd"
Nov 25 09:27:13 crc kubenswrapper[5043]: E1125 09:27:13.963727 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 09:27:25 crc kubenswrapper[5043]: I1125 09:27:25.963346 5043 scope.go:117] "RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd"
Nov 25 09:27:25 crc kubenswrapper[5043]: E1125 09:27:25.964924 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 09:27:27 crc kubenswrapper[5043]: I1125 09:27:27.895871 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"13b66c56-5aa0-42ff-a574-26ec881f2e64","Type":"ContainerStarted","Data":"4a36243f7b4eb8977d920a3aedcd6e455f77974cb3172f6f0db2065a80875bbf"}
Nov 25 09:27:27 crc kubenswrapper[5043]: I1125 09:27:27.920713 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" podStartSLOduration=3.231786387 podStartE2EDuration="17.920694717s" podCreationTimestamp="2025-11-25 09:27:10 +0000 UTC" firstStartedPulling="2025-11-25 09:27:12.261320473 +0000 UTC m=+7896.429516194" lastFinishedPulling="2025-11-25 09:27:26.950228773 +0000 UTC m=+7911.118424524" observedRunningTime="2025-11-25 09:27:27.917501071 +0000 UTC m=+7912.085696832" watchObservedRunningTime="2025-11-25 09:27:27.920694717 +0000 UTC m=+7912.088890438"
Nov 25 09:27:39 crc kubenswrapper[5043]: I1125 09:27:39.963285 5043 scope.go:117] "RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd"
Nov 25 09:27:39 crc kubenswrapper[5043]: E1125 09:27:39.964026 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 09:27:50 crc kubenswrapper[5043]: I1125 09:27:50.963108 5043 scope.go:117] "RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd"
Nov 25 09:27:50 crc kubenswrapper[5043]: E1125 09:27:50.964055 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 09:28:05 crc kubenswrapper[5043]: I1125 09:28:05.962762 5043 scope.go:117] "RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd"
Nov 25 09:28:05 crc kubenswrapper[5043]: E1125 09:28:05.963522 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 09:28:20 crc kubenswrapper[5043]: I1125 09:28:20.963779 5043 scope.go:117] "RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd"
Nov 25 09:28:20 crc kubenswrapper[5043]: E1125 09:28:20.964410 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 09:28:33 crc kubenswrapper[5043]: I1125 09:28:33.963403 5043 scope.go:117] "RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd"
Nov 25 09:28:33 crc kubenswrapper[5043]: E1125 09:28:33.964333 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 09:28:45 crc kubenswrapper[5043]: I1125 09:28:45.962883 5043 scope.go:117] "RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd"
Nov 25 09:28:45 crc kubenswrapper[5043]: E1125 09:28:45.963812 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 09:28:52 crc kubenswrapper[5043]: I1125 09:28:52.824688 5043 generic.go:334] "Generic (PLEG): container finished" podID="13b66c56-5aa0-42ff-a574-26ec881f2e64" containerID="4a36243f7b4eb8977d920a3aedcd6e455f77974cb3172f6f0db2065a80875bbf" exitCode=0
Nov 25 09:28:52 crc kubenswrapper[5043]: I1125 09:28:52.824897 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"13b66c56-5aa0-42ff-a574-26ec881f2e64","Type":"ContainerDied","Data":"4a36243f7b4eb8977d920a3aedcd6e455f77974cb3172f6f0db2065a80875bbf"}
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.335214 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.420408 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tobiko-tests-tobiko-s01-sanity"]
Nov 25 09:28:54 crc kubenswrapper[5043]: E1125 09:28:54.421333 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13b66c56-5aa0-42ff-a574-26ec881f2e64" containerName="tobiko-tests-tobiko"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.421356 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="13b66c56-5aa0-42ff-a574-26ec881f2e64" containerName="tobiko-tests-tobiko"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.421664 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="13b66c56-5aa0-42ff-a574-26ec881f2e64" containerName="tobiko-tests-tobiko"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.422503 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.431707 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s01-sanity"]
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.433737 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/13b66c56-5aa0-42ff-a574-26ec881f2e64-test-operator-ephemeral-workdir\") pod \"13b66c56-5aa0-42ff-a574-26ec881f2e64\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") "
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.433815 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/13b66c56-5aa0-42ff-a574-26ec881f2e64-test-operator-clouds-config\") pod \"13b66c56-5aa0-42ff-a574-26ec881f2e64\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") "
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.433872 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/13b66c56-5aa0-42ff-a574-26ec881f2e64-tobiko-config\") pod \"13b66c56-5aa0-42ff-a574-26ec881f2e64\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") "
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.433909 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"13b66c56-5aa0-42ff-a574-26ec881f2e64\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") "
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.434038 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/13b66c56-5aa0-42ff-a574-26ec881f2e64-kubeconfig\") pod \"13b66c56-5aa0-42ff-a574-26ec881f2e64\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") "
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.434069 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbhmb\" (UniqueName: \"kubernetes.io/projected/13b66c56-5aa0-42ff-a574-26ec881f2e64-kube-api-access-nbhmb\") pod \"13b66c56-5aa0-42ff-a574-26ec881f2e64\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") "
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.434132 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/13b66c56-5aa0-42ff-a574-26ec881f2e64-openstack-config-secret\") pod \"13b66c56-5aa0-42ff-a574-26ec881f2e64\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") "
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.434178 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/13b66c56-5aa0-42ff-a574-26ec881f2e64-tobiko-public-key\") pod \"13b66c56-5aa0-42ff-a574-26ec881f2e64\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") "
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.434285 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/13b66c56-5aa0-42ff-a574-26ec881f2e64-test-operator-ephemeral-temporary\") pod \"13b66c56-5aa0-42ff-a574-26ec881f2e64\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") "
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.434334 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/13b66c56-5aa0-42ff-a574-26ec881f2e64-tobiko-private-key\") pod \"13b66c56-5aa0-42ff-a574-26ec881f2e64\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") "
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.434379 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/13b66c56-5aa0-42ff-a574-26ec881f2e64-ceph\") pod \"13b66c56-5aa0-42ff-a574-26ec881f2e64\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") "
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.434400 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/13b66c56-5aa0-42ff-a574-26ec881f2e64-ca-certs\") pod \"13b66c56-5aa0-42ff-a574-26ec881f2e64\" (UID: \"13b66c56-5aa0-42ff-a574-26ec881f2e64\") "
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.436959 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13b66c56-5aa0-42ff-a574-26ec881f2e64-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "13b66c56-5aa0-42ff-a574-26ec881f2e64" (UID: "13b66c56-5aa0-42ff-a574-26ec881f2e64"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.455881 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "13b66c56-5aa0-42ff-a574-26ec881f2e64" (UID: "13b66c56-5aa0-42ff-a574-26ec881f2e64"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.456110 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13b66c56-5aa0-42ff-a574-26ec881f2e64-kube-api-access-nbhmb" (OuterVolumeSpecName: "kube-api-access-nbhmb") pod "13b66c56-5aa0-42ff-a574-26ec881f2e64" (UID: "13b66c56-5aa0-42ff-a574-26ec881f2e64"). InnerVolumeSpecName "kube-api-access-nbhmb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.463652 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13b66c56-5aa0-42ff-a574-26ec881f2e64-ceph" (OuterVolumeSpecName: "ceph") pod "13b66c56-5aa0-42ff-a574-26ec881f2e64" (UID: "13b66c56-5aa0-42ff-a574-26ec881f2e64"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.479665 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13b66c56-5aa0-42ff-a574-26ec881f2e64-tobiko-config" (OuterVolumeSpecName: "tobiko-config") pod "13b66c56-5aa0-42ff-a574-26ec881f2e64" (UID: "13b66c56-5aa0-42ff-a574-26ec881f2e64"). InnerVolumeSpecName "tobiko-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.485985 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13b66c56-5aa0-42ff-a574-26ec881f2e64-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "13b66c56-5aa0-42ff-a574-26ec881f2e64" (UID: "13b66c56-5aa0-42ff-a574-26ec881f2e64"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.490048 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13b66c56-5aa0-42ff-a574-26ec881f2e64-kubeconfig" (OuterVolumeSpecName: "kubeconfig") pod "13b66c56-5aa0-42ff-a574-26ec881f2e64" (UID: "13b66c56-5aa0-42ff-a574-26ec881f2e64"). InnerVolumeSpecName "kubeconfig". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.496375 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13b66c56-5aa0-42ff-a574-26ec881f2e64-tobiko-public-key" (OuterVolumeSpecName: "tobiko-public-key") pod "13b66c56-5aa0-42ff-a574-26ec881f2e64" (UID: "13b66c56-5aa0-42ff-a574-26ec881f2e64"). InnerVolumeSpecName "tobiko-public-key". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.499917 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13b66c56-5aa0-42ff-a574-26ec881f2e64-tobiko-private-key" (OuterVolumeSpecName: "tobiko-private-key") pod "13b66c56-5aa0-42ff-a574-26ec881f2e64" (UID: "13b66c56-5aa0-42ff-a574-26ec881f2e64"). InnerVolumeSpecName "tobiko-private-key". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.503993 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13b66c56-5aa0-42ff-a574-26ec881f2e64-test-operator-clouds-config" (OuterVolumeSpecName: "test-operator-clouds-config") pod "13b66c56-5aa0-42ff-a574-26ec881f2e64" (UID: "13b66c56-5aa0-42ff-a574-26ec881f2e64"). InnerVolumeSpecName "test-operator-clouds-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.504991 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13b66c56-5aa0-42ff-a574-26ec881f2e64-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "13b66c56-5aa0-42ff-a574-26ec881f2e64" (UID: "13b66c56-5aa0-42ff-a574-26ec881f2e64"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.539053 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/d53359ac-2451-479a-bd73-bec83fc39a47-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.539113 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d53359ac-2451-479a-bd73-bec83fc39a47-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.558872 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.558962 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d53359ac-2451-479a-bd73-bec83fc39a47-ca-certs\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.559055 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d53359ac-2451-479a-bd73-bec83fc39a47-openstack-config-secret\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.559197 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/d53359ac-2451-479a-bd73-bec83fc39a47-tobiko-private-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.559278 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wv8x\" (UniqueName: \"kubernetes.io/projected/d53359ac-2451-479a-bd73-bec83fc39a47-kube-api-access-8wv8x\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.559317 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d53359ac-2451-479a-bd73-bec83fc39a47-ceph\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.559409 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d53359ac-2451-479a-bd73-bec83fc39a47-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.559534 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/d53359ac-2451-479a-bd73-bec83fc39a47-tobiko-public-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.559580 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/d53359ac-2451-479a-bd73-bec83fc39a47-tobiko-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.559687 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/d53359ac-2451-479a-bd73-bec83fc39a47-kubeconfig\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.559783 5043 reconciler_common.go:293] "Volume detached for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/13b66c56-5aa0-42ff-a574-26ec881f2e64-test-operator-clouds-config\") on node \"crc\" DevicePath \"\""
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.559803 5043 reconciler_common.go:293] "Volume detached for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/13b66c56-5aa0-42ff-a574-26ec881f2e64-tobiko-config\") on node \"crc\" DevicePath \"\""
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.559818 5043 reconciler_common.go:293] "Volume detached for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/13b66c56-5aa0-42ff-a574-26ec881f2e64-kubeconfig\") on node \"crc\" DevicePath \"\""
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.559832 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbhmb\" (UniqueName: \"kubernetes.io/projected/13b66c56-5aa0-42ff-a574-26ec881f2e64-kube-api-access-nbhmb\") on node \"crc\" DevicePath \"\""
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.559846 5043 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/13b66c56-5aa0-42ff-a574-26ec881f2e64-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.559877 5043 reconciler_common.go:293] "Volume detached for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/13b66c56-5aa0-42ff-a574-26ec881f2e64-tobiko-public-key\") on node \"crc\" DevicePath \"\""
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.559889 5043 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/13b66c56-5aa0-42ff-a574-26ec881f2e64-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.559903 5043 reconciler_common.go:293] "Volume detached for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/13b66c56-5aa0-42ff-a574-26ec881f2e64-tobiko-private-key\") on node \"crc\" DevicePath \"\""
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.559915 5043 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/13b66c56-5aa0-42ff-a574-26ec881f2e64-ceph\") on node \"crc\" DevicePath \"\""
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.559927 5043 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/13b66c56-5aa0-42ff-a574-26ec881f2e64-ca-certs\") on node \"crc\" DevicePath \"\""
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.650340 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.661878 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wv8x\" (UniqueName: \"kubernetes.io/projected/d53359ac-2451-479a-bd73-bec83fc39a47-kube-api-access-8wv8x\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.661949 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d53359ac-2451-479a-bd73-bec83fc39a47-ceph\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.662000 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d53359ac-2451-479a-bd73-bec83fc39a47-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.662065 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/d53359ac-2451-479a-bd73-bec83fc39a47-tobiko-public-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.662093 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/d53359ac-2451-479a-bd73-bec83fc39a47-tobiko-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.662124 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/d53359ac-2451-479a-bd73-bec83fc39a47-kubeconfig\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.662143 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/d53359ac-2451-479a-bd73-bec83fc39a47-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.662162 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d53359ac-2451-479a-bd73-bec83fc39a47-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.662217 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d53359ac-2451-479a-bd73-bec83fc39a47-ca-certs\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.662245 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d53359ac-2451-479a-bd73-bec83fc39a47-openstack-config-secret\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.662289 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/d53359ac-2451-479a-bd73-bec83fc39a47-tobiko-private-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.663210 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/d53359ac-2451-479a-bd73-bec83fc39a47-tobiko-private-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.663443 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d53359ac-2451-479a-bd73-bec83fc39a47-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.664482 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/d53359ac-2451-479a-bd73-bec83fc39a47-tobiko-public-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.666989 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d53359ac-2451-479a-bd73-bec83fc39a47-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.667732 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d53359ac-2451-479a-bd73-bec83fc39a47-ca-certs\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.667787 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/d53359ac-2451-479a-bd73-bec83fc39a47-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.675250 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d53359ac-2451-479a-bd73-bec83fc39a47-openstack-config-secret\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.675475 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/d53359ac-2451-479a-bd73-bec83fc39a47-tobiko-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.675523 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/d53359ac-2451-479a-bd73-bec83fc39a47-kubeconfig\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.678467 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d53359ac-2451-479a-bd73-bec83fc39a47-ceph\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.687316 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wv8x\" (UniqueName: \"kubernetes.io/projected/d53359ac-2451-479a-bd73-bec83fc39a47-kube-api-access-8wv8x\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.845407 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"13b66c56-5aa0-42ff-a574-26ec881f2e64","Type":"ContainerDied","Data":"b343c1262012411deea403e19e9e10891ab750fcdfb298797f8155235bee66ad"}
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.845668 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b343c1262012411deea403e19e9e10891ab750fcdfb298797f8155235bee66ad"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.845470 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional"
Nov 25 09:28:54 crc kubenswrapper[5043]: I1125 09:28:54.920133 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity"
Nov 25 09:28:55 crc kubenswrapper[5043]: I1125 09:28:55.425258 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s01-sanity"]
Nov 25 09:28:55 crc kubenswrapper[5043]: I1125 09:28:55.765498 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13b66c56-5aa0-42ff-a574-26ec881f2e64-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "13b66c56-5aa0-42ff-a574-26ec881f2e64" (UID: "13b66c56-5aa0-42ff-a574-26ec881f2e64"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 09:28:55 crc kubenswrapper[5043]: I1125 09:28:55.792553 5043 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/13b66c56-5aa0-42ff-a574-26ec881f2e64-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Nov 25 09:28:55 crc kubenswrapper[5043]: I1125 09:28:55.854269 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"d53359ac-2451-479a-bd73-bec83fc39a47","Type":"ContainerStarted","Data":"fcfd39b9094a651407d78eb7881676a4cfba4e8b6e5268e9a6f73a2d9563d20c"}
Nov 25 09:28:56 crc kubenswrapper[5043]: I1125 09:28:56.864703 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"d53359ac-2451-479a-bd73-bec83fc39a47","Type":"ContainerStarted","Data":"62b97edcff02a309bda37976990198a86ef51b77df6ecc2243abbc0c1d5d3324"}
Nov 25 09:28:56 crc kubenswrapper[5043]: I1125 09:28:56.890906 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tobiko-tests-tobiko-s01-sanity" podStartSLOduration=2.890888158 podStartE2EDuration="2.890888158s" podCreationTimestamp="2025-11-25 09:28:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 09:28:56.888920155 +0000 UTC m=+8001.057115916" watchObservedRunningTime="2025-11-25 09:28:56.890888158 +0000 UTC m=+8001.059083879"
Nov 25 09:29:00 crc kubenswrapper[5043]: I1125 09:29:00.963243 5043 scope.go:117] "RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd"
Nov 25 09:29:00 crc kubenswrapper[5043]: E1125 09:29:00.964107 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c"
Nov 25 09:29:06 crc kubenswrapper[5043]: I1125 09:29:06.041028 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mtgrr"]
Nov 25 09:29:06 crc kubenswrapper[5043]: I1125 09:29:06.044801 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mtgrr"
Nov 25 09:29:06 crc kubenswrapper[5043]: I1125 09:29:06.060588 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mtgrr"]
Nov 25 09:29:06 crc kubenswrapper[5043]: I1125 09:29:06.208096 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8146feb0-1208-4d8a-9cb1-2aad765707c8-utilities\") pod \"redhat-operators-mtgrr\" (UID: \"8146feb0-1208-4d8a-9cb1-2aad765707c8\") " pod="openshift-marketplace/redhat-operators-mtgrr"
Nov 25 09:29:06 crc kubenswrapper[5043]: I1125 09:29:06.208569 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8146feb0-1208-4d8a-9cb1-2aad765707c8-catalog-content\") pod \"redhat-operators-mtgrr\" (UID: \"8146feb0-1208-4d8a-9cb1-2aad765707c8\") " pod="openshift-marketplace/redhat-operators-mtgrr"
Nov 25 09:29:06 crc kubenswrapper[5043]: I1125 09:29:06.208594 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w69ck\" (UniqueName: \"kubernetes.io/projected/8146feb0-1208-4d8a-9cb1-2aad765707c8-kube-api-access-w69ck\") pod \"redhat-operators-mtgrr\" (UID: \"8146feb0-1208-4d8a-9cb1-2aad765707c8\") " 
pod="openshift-marketplace/redhat-operators-mtgrr" Nov 25 09:29:06 crc kubenswrapper[5043]: I1125 09:29:06.310324 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8146feb0-1208-4d8a-9cb1-2aad765707c8-catalog-content\") pod \"redhat-operators-mtgrr\" (UID: \"8146feb0-1208-4d8a-9cb1-2aad765707c8\") " pod="openshift-marketplace/redhat-operators-mtgrr" Nov 25 09:29:06 crc kubenswrapper[5043]: I1125 09:29:06.310672 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w69ck\" (UniqueName: \"kubernetes.io/projected/8146feb0-1208-4d8a-9cb1-2aad765707c8-kube-api-access-w69ck\") pod \"redhat-operators-mtgrr\" (UID: \"8146feb0-1208-4d8a-9cb1-2aad765707c8\") " pod="openshift-marketplace/redhat-operators-mtgrr" Nov 25 09:29:06 crc kubenswrapper[5043]: I1125 09:29:06.310840 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8146feb0-1208-4d8a-9cb1-2aad765707c8-utilities\") pod \"redhat-operators-mtgrr\" (UID: \"8146feb0-1208-4d8a-9cb1-2aad765707c8\") " pod="openshift-marketplace/redhat-operators-mtgrr" Nov 25 09:29:06 crc kubenswrapper[5043]: I1125 09:29:06.310890 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8146feb0-1208-4d8a-9cb1-2aad765707c8-catalog-content\") pod \"redhat-operators-mtgrr\" (UID: \"8146feb0-1208-4d8a-9cb1-2aad765707c8\") " pod="openshift-marketplace/redhat-operators-mtgrr" Nov 25 09:29:06 crc kubenswrapper[5043]: I1125 09:29:06.311151 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8146feb0-1208-4d8a-9cb1-2aad765707c8-utilities\") pod \"redhat-operators-mtgrr\" (UID: \"8146feb0-1208-4d8a-9cb1-2aad765707c8\") " pod="openshift-marketplace/redhat-operators-mtgrr" Nov 25 09:29:06 crc 
kubenswrapper[5043]: I1125 09:29:06.332693 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w69ck\" (UniqueName: \"kubernetes.io/projected/8146feb0-1208-4d8a-9cb1-2aad765707c8-kube-api-access-w69ck\") pod \"redhat-operators-mtgrr\" (UID: \"8146feb0-1208-4d8a-9cb1-2aad765707c8\") " pod="openshift-marketplace/redhat-operators-mtgrr" Nov 25 09:29:06 crc kubenswrapper[5043]: I1125 09:29:06.375821 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mtgrr" Nov 25 09:29:06 crc kubenswrapper[5043]: I1125 09:29:06.952169 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mtgrr"] Nov 25 09:29:06 crc kubenswrapper[5043]: I1125 09:29:06.986383 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtgrr" event={"ID":"8146feb0-1208-4d8a-9cb1-2aad765707c8","Type":"ContainerStarted","Data":"2f2cc6cdee37799ac4c45b27e8a3f3b69d867d2735e630c758602d3df6108e92"} Nov 25 09:29:07 crc kubenswrapper[5043]: I1125 09:29:07.977983 5043 generic.go:334] "Generic (PLEG): container finished" podID="8146feb0-1208-4d8a-9cb1-2aad765707c8" containerID="079445bb0a5611ceca8c9cbda4b0f4b891e14534d7cc5c4971c4e5ad9c82f71a" exitCode=0 Nov 25 09:29:07 crc kubenswrapper[5043]: I1125 09:29:07.978073 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtgrr" event={"ID":"8146feb0-1208-4d8a-9cb1-2aad765707c8","Type":"ContainerDied","Data":"079445bb0a5611ceca8c9cbda4b0f4b891e14534d7cc5c4971c4e5ad9c82f71a"} Nov 25 09:29:10 crc kubenswrapper[5043]: I1125 09:29:10.000590 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtgrr" event={"ID":"8146feb0-1208-4d8a-9cb1-2aad765707c8","Type":"ContainerStarted","Data":"58429e39c595c5ce0577643c190e82d4f1c49c41fceef70746853eab3d5825d7"} Nov 25 09:29:13 crc kubenswrapper[5043]: I1125 
09:29:13.038090 5043 generic.go:334] "Generic (PLEG): container finished" podID="8146feb0-1208-4d8a-9cb1-2aad765707c8" containerID="58429e39c595c5ce0577643c190e82d4f1c49c41fceef70746853eab3d5825d7" exitCode=0 Nov 25 09:29:13 crc kubenswrapper[5043]: I1125 09:29:13.038169 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtgrr" event={"ID":"8146feb0-1208-4d8a-9cb1-2aad765707c8","Type":"ContainerDied","Data":"58429e39c595c5ce0577643c190e82d4f1c49c41fceef70746853eab3d5825d7"} Nov 25 09:29:14 crc kubenswrapper[5043]: I1125 09:29:14.056190 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtgrr" event={"ID":"8146feb0-1208-4d8a-9cb1-2aad765707c8","Type":"ContainerStarted","Data":"29811add7863224d5a308c8b7301eaeb4e1a4367dd6a5cc15268fe68b2457711"} Nov 25 09:29:14 crc kubenswrapper[5043]: I1125 09:29:14.082288 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mtgrr" podStartSLOduration=2.354786349 podStartE2EDuration="8.082252268s" podCreationTimestamp="2025-11-25 09:29:06 +0000 UTC" firstStartedPulling="2025-11-25 09:29:07.980378219 +0000 UTC m=+8012.148573940" lastFinishedPulling="2025-11-25 09:29:13.707844118 +0000 UTC m=+8017.876039859" observedRunningTime="2025-11-25 09:29:14.073663496 +0000 UTC m=+8018.241859247" watchObservedRunningTime="2025-11-25 09:29:14.082252268 +0000 UTC m=+8018.250448029" Nov 25 09:29:15 crc kubenswrapper[5043]: I1125 09:29:15.963620 5043 scope.go:117] "RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd" Nov 25 09:29:15 crc kubenswrapper[5043]: E1125 09:29:15.964149 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:29:16 crc kubenswrapper[5043]: I1125 09:29:16.376489 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mtgrr" Nov 25 09:29:16 crc kubenswrapper[5043]: I1125 09:29:16.376860 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mtgrr" Nov 25 09:29:17 crc kubenswrapper[5043]: I1125 09:29:17.424166 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mtgrr" podUID="8146feb0-1208-4d8a-9cb1-2aad765707c8" containerName="registry-server" probeResult="failure" output=< Nov 25 09:29:17 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s Nov 25 09:29:17 crc kubenswrapper[5043]: > Nov 25 09:29:26 crc kubenswrapper[5043]: I1125 09:29:26.428323 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mtgrr" Nov 25 09:29:26 crc kubenswrapper[5043]: I1125 09:29:26.486278 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mtgrr" Nov 25 09:29:26 crc kubenswrapper[5043]: I1125 09:29:26.715523 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mtgrr"] Nov 25 09:29:28 crc kubenswrapper[5043]: I1125 09:29:28.196984 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mtgrr" podUID="8146feb0-1208-4d8a-9cb1-2aad765707c8" containerName="registry-server" containerID="cri-o://29811add7863224d5a308c8b7301eaeb4e1a4367dd6a5cc15268fe68b2457711" gracePeriod=2 Nov 25 09:29:28 crc kubenswrapper[5043]: I1125 09:29:28.739475 5043 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mtgrr" Nov 25 09:29:28 crc kubenswrapper[5043]: I1125 09:29:28.852934 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w69ck\" (UniqueName: \"kubernetes.io/projected/8146feb0-1208-4d8a-9cb1-2aad765707c8-kube-api-access-w69ck\") pod \"8146feb0-1208-4d8a-9cb1-2aad765707c8\" (UID: \"8146feb0-1208-4d8a-9cb1-2aad765707c8\") " Nov 25 09:29:28 crc kubenswrapper[5043]: I1125 09:29:28.853076 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8146feb0-1208-4d8a-9cb1-2aad765707c8-catalog-content\") pod \"8146feb0-1208-4d8a-9cb1-2aad765707c8\" (UID: \"8146feb0-1208-4d8a-9cb1-2aad765707c8\") " Nov 25 09:29:28 crc kubenswrapper[5043]: I1125 09:29:28.853213 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8146feb0-1208-4d8a-9cb1-2aad765707c8-utilities\") pod \"8146feb0-1208-4d8a-9cb1-2aad765707c8\" (UID: \"8146feb0-1208-4d8a-9cb1-2aad765707c8\") " Nov 25 09:29:28 crc kubenswrapper[5043]: I1125 09:29:28.854047 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8146feb0-1208-4d8a-9cb1-2aad765707c8-utilities" (OuterVolumeSpecName: "utilities") pod "8146feb0-1208-4d8a-9cb1-2aad765707c8" (UID: "8146feb0-1208-4d8a-9cb1-2aad765707c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:29:28 crc kubenswrapper[5043]: I1125 09:29:28.866760 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8146feb0-1208-4d8a-9cb1-2aad765707c8-kube-api-access-w69ck" (OuterVolumeSpecName: "kube-api-access-w69ck") pod "8146feb0-1208-4d8a-9cb1-2aad765707c8" (UID: "8146feb0-1208-4d8a-9cb1-2aad765707c8"). 
InnerVolumeSpecName "kube-api-access-w69ck". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:29:28 crc kubenswrapper[5043]: I1125 09:29:28.944392 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8146feb0-1208-4d8a-9cb1-2aad765707c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8146feb0-1208-4d8a-9cb1-2aad765707c8" (UID: "8146feb0-1208-4d8a-9cb1-2aad765707c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:29:28 crc kubenswrapper[5043]: I1125 09:29:28.956015 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8146feb0-1208-4d8a-9cb1-2aad765707c8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:29:28 crc kubenswrapper[5043]: I1125 09:29:28.956050 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8146feb0-1208-4d8a-9cb1-2aad765707c8-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:29:28 crc kubenswrapper[5043]: I1125 09:29:28.956060 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w69ck\" (UniqueName: \"kubernetes.io/projected/8146feb0-1208-4d8a-9cb1-2aad765707c8-kube-api-access-w69ck\") on node \"crc\" DevicePath \"\"" Nov 25 09:29:29 crc kubenswrapper[5043]: I1125 09:29:29.208744 5043 generic.go:334] "Generic (PLEG): container finished" podID="8146feb0-1208-4d8a-9cb1-2aad765707c8" containerID="29811add7863224d5a308c8b7301eaeb4e1a4367dd6a5cc15268fe68b2457711" exitCode=0 Nov 25 09:29:29 crc kubenswrapper[5043]: I1125 09:29:29.209104 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtgrr" event={"ID":"8146feb0-1208-4d8a-9cb1-2aad765707c8","Type":"ContainerDied","Data":"29811add7863224d5a308c8b7301eaeb4e1a4367dd6a5cc15268fe68b2457711"} Nov 25 09:29:29 crc kubenswrapper[5043]: I1125 09:29:29.209143 5043 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtgrr" event={"ID":"8146feb0-1208-4d8a-9cb1-2aad765707c8","Type":"ContainerDied","Data":"2f2cc6cdee37799ac4c45b27e8a3f3b69d867d2735e630c758602d3df6108e92"} Nov 25 09:29:29 crc kubenswrapper[5043]: I1125 09:29:29.209165 5043 scope.go:117] "RemoveContainer" containerID="29811add7863224d5a308c8b7301eaeb4e1a4367dd6a5cc15268fe68b2457711" Nov 25 09:29:29 crc kubenswrapper[5043]: I1125 09:29:29.209309 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mtgrr" Nov 25 09:29:29 crc kubenswrapper[5043]: I1125 09:29:29.231595 5043 scope.go:117] "RemoveContainer" containerID="58429e39c595c5ce0577643c190e82d4f1c49c41fceef70746853eab3d5825d7" Nov 25 09:29:29 crc kubenswrapper[5043]: I1125 09:29:29.238872 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mtgrr"] Nov 25 09:29:29 crc kubenswrapper[5043]: I1125 09:29:29.248113 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mtgrr"] Nov 25 09:29:29 crc kubenswrapper[5043]: I1125 09:29:29.255557 5043 scope.go:117] "RemoveContainer" containerID="079445bb0a5611ceca8c9cbda4b0f4b891e14534d7cc5c4971c4e5ad9c82f71a" Nov 25 09:29:29 crc kubenswrapper[5043]: I1125 09:29:29.311277 5043 scope.go:117] "RemoveContainer" containerID="29811add7863224d5a308c8b7301eaeb4e1a4367dd6a5cc15268fe68b2457711" Nov 25 09:29:29 crc kubenswrapper[5043]: E1125 09:29:29.312468 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29811add7863224d5a308c8b7301eaeb4e1a4367dd6a5cc15268fe68b2457711\": container with ID starting with 29811add7863224d5a308c8b7301eaeb4e1a4367dd6a5cc15268fe68b2457711 not found: ID does not exist" containerID="29811add7863224d5a308c8b7301eaeb4e1a4367dd6a5cc15268fe68b2457711" Nov 25 09:29:29 crc 
kubenswrapper[5043]: I1125 09:29:29.312517 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29811add7863224d5a308c8b7301eaeb4e1a4367dd6a5cc15268fe68b2457711"} err="failed to get container status \"29811add7863224d5a308c8b7301eaeb4e1a4367dd6a5cc15268fe68b2457711\": rpc error: code = NotFound desc = could not find container \"29811add7863224d5a308c8b7301eaeb4e1a4367dd6a5cc15268fe68b2457711\": container with ID starting with 29811add7863224d5a308c8b7301eaeb4e1a4367dd6a5cc15268fe68b2457711 not found: ID does not exist" Nov 25 09:29:29 crc kubenswrapper[5043]: I1125 09:29:29.312556 5043 scope.go:117] "RemoveContainer" containerID="58429e39c595c5ce0577643c190e82d4f1c49c41fceef70746853eab3d5825d7" Nov 25 09:29:29 crc kubenswrapper[5043]: E1125 09:29:29.312982 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58429e39c595c5ce0577643c190e82d4f1c49c41fceef70746853eab3d5825d7\": container with ID starting with 58429e39c595c5ce0577643c190e82d4f1c49c41fceef70746853eab3d5825d7 not found: ID does not exist" containerID="58429e39c595c5ce0577643c190e82d4f1c49c41fceef70746853eab3d5825d7" Nov 25 09:29:29 crc kubenswrapper[5043]: I1125 09:29:29.313022 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58429e39c595c5ce0577643c190e82d4f1c49c41fceef70746853eab3d5825d7"} err="failed to get container status \"58429e39c595c5ce0577643c190e82d4f1c49c41fceef70746853eab3d5825d7\": rpc error: code = NotFound desc = could not find container \"58429e39c595c5ce0577643c190e82d4f1c49c41fceef70746853eab3d5825d7\": container with ID starting with 58429e39c595c5ce0577643c190e82d4f1c49c41fceef70746853eab3d5825d7 not found: ID does not exist" Nov 25 09:29:29 crc kubenswrapper[5043]: I1125 09:29:29.313051 5043 scope.go:117] "RemoveContainer" containerID="079445bb0a5611ceca8c9cbda4b0f4b891e14534d7cc5c4971c4e5ad9c82f71a" Nov 25 
09:29:29 crc kubenswrapper[5043]: E1125 09:29:29.313400 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"079445bb0a5611ceca8c9cbda4b0f4b891e14534d7cc5c4971c4e5ad9c82f71a\": container with ID starting with 079445bb0a5611ceca8c9cbda4b0f4b891e14534d7cc5c4971c4e5ad9c82f71a not found: ID does not exist" containerID="079445bb0a5611ceca8c9cbda4b0f4b891e14534d7cc5c4971c4e5ad9c82f71a" Nov 25 09:29:29 crc kubenswrapper[5043]: I1125 09:29:29.313435 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"079445bb0a5611ceca8c9cbda4b0f4b891e14534d7cc5c4971c4e5ad9c82f71a"} err="failed to get container status \"079445bb0a5611ceca8c9cbda4b0f4b891e14534d7cc5c4971c4e5ad9c82f71a\": rpc error: code = NotFound desc = could not find container \"079445bb0a5611ceca8c9cbda4b0f4b891e14534d7cc5c4971c4e5ad9c82f71a\": container with ID starting with 079445bb0a5611ceca8c9cbda4b0f4b891e14534d7cc5c4971c4e5ad9c82f71a not found: ID does not exist" Nov 25 09:29:30 crc kubenswrapper[5043]: I1125 09:29:30.963167 5043 scope.go:117] "RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd" Nov 25 09:29:30 crc kubenswrapper[5043]: E1125 09:29:30.963708 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:29:30 crc kubenswrapper[5043]: I1125 09:29:30.973796 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8146feb0-1208-4d8a-9cb1-2aad765707c8" path="/var/lib/kubelet/pods/8146feb0-1208-4d8a-9cb1-2aad765707c8/volumes" Nov 25 09:29:45 crc 
kubenswrapper[5043]: I1125 09:29:45.963271 5043 scope.go:117] "RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd" Nov 25 09:29:45 crc kubenswrapper[5043]: E1125 09:29:45.963983 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:29:58 crc kubenswrapper[5043]: I1125 09:29:58.962923 5043 scope.go:117] "RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd" Nov 25 09:29:58 crc kubenswrapper[5043]: E1125 09:29:58.964033 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:30:00 crc kubenswrapper[5043]: I1125 09:30:00.162454 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401050-2plxw"] Nov 25 09:30:00 crc kubenswrapper[5043]: E1125 09:30:00.163261 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8146feb0-1208-4d8a-9cb1-2aad765707c8" containerName="registry-server" Nov 25 09:30:00 crc kubenswrapper[5043]: I1125 09:30:00.163279 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="8146feb0-1208-4d8a-9cb1-2aad765707c8" containerName="registry-server" Nov 25 09:30:00 crc kubenswrapper[5043]: E1125 09:30:00.163310 5043 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8146feb0-1208-4d8a-9cb1-2aad765707c8" containerName="extract-content" Nov 25 09:30:00 crc kubenswrapper[5043]: I1125 09:30:00.163319 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="8146feb0-1208-4d8a-9cb1-2aad765707c8" containerName="extract-content" Nov 25 09:30:00 crc kubenswrapper[5043]: E1125 09:30:00.163351 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8146feb0-1208-4d8a-9cb1-2aad765707c8" containerName="extract-utilities" Nov 25 09:30:00 crc kubenswrapper[5043]: I1125 09:30:00.163360 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="8146feb0-1208-4d8a-9cb1-2aad765707c8" containerName="extract-utilities" Nov 25 09:30:00 crc kubenswrapper[5043]: I1125 09:30:00.163588 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="8146feb0-1208-4d8a-9cb1-2aad765707c8" containerName="registry-server" Nov 25 09:30:00 crc kubenswrapper[5043]: I1125 09:30:00.164521 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-2plxw" Nov 25 09:30:00 crc kubenswrapper[5043]: I1125 09:30:00.166554 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 09:30:00 crc kubenswrapper[5043]: I1125 09:30:00.166576 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 09:30:00 crc kubenswrapper[5043]: I1125 09:30:00.180659 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401050-2plxw"] Nov 25 09:30:00 crc kubenswrapper[5043]: I1125 09:30:00.318402 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89gj5\" (UniqueName: \"kubernetes.io/projected/32d84709-cdf9-4f62-868f-b352ef83033a-kube-api-access-89gj5\") pod \"collect-profiles-29401050-2plxw\" (UID: \"32d84709-cdf9-4f62-868f-b352ef83033a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-2plxw" Nov 25 09:30:00 crc kubenswrapper[5043]: I1125 09:30:00.318639 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32d84709-cdf9-4f62-868f-b352ef83033a-config-volume\") pod \"collect-profiles-29401050-2plxw\" (UID: \"32d84709-cdf9-4f62-868f-b352ef83033a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-2plxw" Nov 25 09:30:00 crc kubenswrapper[5043]: I1125 09:30:00.318764 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32d84709-cdf9-4f62-868f-b352ef83033a-secret-volume\") pod \"collect-profiles-29401050-2plxw\" (UID: \"32d84709-cdf9-4f62-868f-b352ef83033a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-2plxw" Nov 25 09:30:00 crc kubenswrapper[5043]: I1125 09:30:00.420515 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89gj5\" (UniqueName: \"kubernetes.io/projected/32d84709-cdf9-4f62-868f-b352ef83033a-kube-api-access-89gj5\") pod \"collect-profiles-29401050-2plxw\" (UID: \"32d84709-cdf9-4f62-868f-b352ef83033a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-2plxw" Nov 25 09:30:00 crc kubenswrapper[5043]: I1125 09:30:00.420669 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32d84709-cdf9-4f62-868f-b352ef83033a-config-volume\") pod \"collect-profiles-29401050-2plxw\" (UID: \"32d84709-cdf9-4f62-868f-b352ef83033a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-2plxw" Nov 25 09:30:00 crc kubenswrapper[5043]: I1125 09:30:00.420727 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32d84709-cdf9-4f62-868f-b352ef83033a-secret-volume\") pod \"collect-profiles-29401050-2plxw\" (UID: \"32d84709-cdf9-4f62-868f-b352ef83033a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-2plxw" Nov 25 09:30:00 crc kubenswrapper[5043]: I1125 09:30:00.422887 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32d84709-cdf9-4f62-868f-b352ef83033a-config-volume\") pod \"collect-profiles-29401050-2plxw\" (UID: \"32d84709-cdf9-4f62-868f-b352ef83033a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-2plxw" Nov 25 09:30:00 crc kubenswrapper[5043]: I1125 09:30:00.427251 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/32d84709-cdf9-4f62-868f-b352ef83033a-secret-volume\") pod \"collect-profiles-29401050-2plxw\" (UID: \"32d84709-cdf9-4f62-868f-b352ef83033a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-2plxw" Nov 25 09:30:00 crc kubenswrapper[5043]: I1125 09:30:00.439329 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89gj5\" (UniqueName: \"kubernetes.io/projected/32d84709-cdf9-4f62-868f-b352ef83033a-kube-api-access-89gj5\") pod \"collect-profiles-29401050-2plxw\" (UID: \"32d84709-cdf9-4f62-868f-b352ef83033a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-2plxw" Nov 25 09:30:00 crc kubenswrapper[5043]: I1125 09:30:00.530752 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-2plxw" Nov 25 09:30:00 crc kubenswrapper[5043]: I1125 09:30:00.991545 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401050-2plxw"] Nov 25 09:30:01 crc kubenswrapper[5043]: I1125 09:30:01.583134 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-2plxw" event={"ID":"32d84709-cdf9-4f62-868f-b352ef83033a","Type":"ContainerStarted","Data":"09c5b0d5494d450ac412ca71567dcb5618824d9d0269b528fa93b48afaac1450"} Nov 25 09:30:01 crc kubenswrapper[5043]: I1125 09:30:01.583383 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-2plxw" event={"ID":"32d84709-cdf9-4f62-868f-b352ef83033a","Type":"ContainerStarted","Data":"bba23fece8b04074ff89bad00558af011e3a2e93cbf42264782abbc46d189694"} Nov 25 09:30:02 crc kubenswrapper[5043]: I1125 09:30:02.616278 5043 generic.go:334] "Generic (PLEG): container finished" podID="32d84709-cdf9-4f62-868f-b352ef83033a" 
containerID="09c5b0d5494d450ac412ca71567dcb5618824d9d0269b528fa93b48afaac1450" exitCode=0 Nov 25 09:30:02 crc kubenswrapper[5043]: I1125 09:30:02.616323 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-2plxw" event={"ID":"32d84709-cdf9-4f62-868f-b352ef83033a","Type":"ContainerDied","Data":"09c5b0d5494d450ac412ca71567dcb5618824d9d0269b528fa93b48afaac1450"} Nov 25 09:30:03 crc kubenswrapper[5043]: I1125 09:30:03.952306 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-2plxw" Nov 25 09:30:04 crc kubenswrapper[5043]: I1125 09:30:04.085525 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32d84709-cdf9-4f62-868f-b352ef83033a-config-volume\") pod \"32d84709-cdf9-4f62-868f-b352ef83033a\" (UID: \"32d84709-cdf9-4f62-868f-b352ef83033a\") " Nov 25 09:30:04 crc kubenswrapper[5043]: I1125 09:30:04.085652 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32d84709-cdf9-4f62-868f-b352ef83033a-secret-volume\") pod \"32d84709-cdf9-4f62-868f-b352ef83033a\" (UID: \"32d84709-cdf9-4f62-868f-b352ef83033a\") " Nov 25 09:30:04 crc kubenswrapper[5043]: I1125 09:30:04.085702 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89gj5\" (UniqueName: \"kubernetes.io/projected/32d84709-cdf9-4f62-868f-b352ef83033a-kube-api-access-89gj5\") pod \"32d84709-cdf9-4f62-868f-b352ef83033a\" (UID: \"32d84709-cdf9-4f62-868f-b352ef83033a\") " Nov 25 09:30:04 crc kubenswrapper[5043]: I1125 09:30:04.086391 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d84709-cdf9-4f62-868f-b352ef83033a-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"32d84709-cdf9-4f62-868f-b352ef83033a" (UID: "32d84709-cdf9-4f62-868f-b352ef83033a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:30:04 crc kubenswrapper[5043]: I1125 09:30:04.087954 5043 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32d84709-cdf9-4f62-868f-b352ef83033a-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 09:30:04 crc kubenswrapper[5043]: I1125 09:30:04.092979 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32d84709-cdf9-4f62-868f-b352ef83033a-kube-api-access-89gj5" (OuterVolumeSpecName: "kube-api-access-89gj5") pod "32d84709-cdf9-4f62-868f-b352ef83033a" (UID: "32d84709-cdf9-4f62-868f-b352ef83033a"). InnerVolumeSpecName "kube-api-access-89gj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:30:04 crc kubenswrapper[5043]: I1125 09:30:04.095547 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32d84709-cdf9-4f62-868f-b352ef83033a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "32d84709-cdf9-4f62-868f-b352ef83033a" (UID: "32d84709-cdf9-4f62-868f-b352ef83033a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:30:04 crc kubenswrapper[5043]: I1125 09:30:04.190398 5043 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32d84709-cdf9-4f62-868f-b352ef83033a-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 09:30:04 crc kubenswrapper[5043]: I1125 09:30:04.190442 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89gj5\" (UniqueName: \"kubernetes.io/projected/32d84709-cdf9-4f62-868f-b352ef83033a-kube-api-access-89gj5\") on node \"crc\" DevicePath \"\"" Nov 25 09:30:04 crc kubenswrapper[5043]: I1125 09:30:04.636595 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-2plxw" event={"ID":"32d84709-cdf9-4f62-868f-b352ef83033a","Type":"ContainerDied","Data":"bba23fece8b04074ff89bad00558af011e3a2e93cbf42264782abbc46d189694"} Nov 25 09:30:04 crc kubenswrapper[5043]: I1125 09:30:04.636667 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bba23fece8b04074ff89bad00558af011e3a2e93cbf42264782abbc46d189694" Nov 25 09:30:04 crc kubenswrapper[5043]: I1125 09:30:04.637047 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401050-2plxw" Nov 25 09:30:05 crc kubenswrapper[5043]: I1125 09:30:05.058821 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401005-dq7xj"] Nov 25 09:30:05 crc kubenswrapper[5043]: I1125 09:30:05.068645 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401005-dq7xj"] Nov 25 09:30:06 crc kubenswrapper[5043]: I1125 09:30:06.977331 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3101bcb6-71b1-411f-bbc1-59562080339c" path="/var/lib/kubelet/pods/3101bcb6-71b1-411f-bbc1-59562080339c/volumes" Nov 25 09:30:09 crc kubenswrapper[5043]: I1125 09:30:09.962984 5043 scope.go:117] "RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd" Nov 25 09:30:09 crc kubenswrapper[5043]: E1125 09:30:09.964069 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:30:20 crc kubenswrapper[5043]: I1125 09:30:20.964027 5043 scope.go:117] "RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd" Nov 25 09:30:21 crc kubenswrapper[5043]: I1125 09:30:21.807252 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"c03b8cdf1887b6f6e3c18a315ebbcdbe865cf251968052a9886eadd6550871b9"} Nov 25 09:30:55 crc kubenswrapper[5043]: I1125 09:30:55.749080 5043 
scope.go:117] "RemoveContainer" containerID="4accfb17f4231570bacc5e7b8893336131c68bdc349dec8722a00f6ec8b5f41d" Nov 25 09:30:57 crc kubenswrapper[5043]: I1125 09:30:57.442873 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rgb2f"] Nov 25 09:30:57 crc kubenswrapper[5043]: E1125 09:30:57.443921 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32d84709-cdf9-4f62-868f-b352ef83033a" containerName="collect-profiles" Nov 25 09:30:57 crc kubenswrapper[5043]: I1125 09:30:57.443938 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="32d84709-cdf9-4f62-868f-b352ef83033a" containerName="collect-profiles" Nov 25 09:30:57 crc kubenswrapper[5043]: I1125 09:30:57.444203 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="32d84709-cdf9-4f62-868f-b352ef83033a" containerName="collect-profiles" Nov 25 09:30:57 crc kubenswrapper[5043]: I1125 09:30:57.446012 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgb2f" Nov 25 09:30:57 crc kubenswrapper[5043]: I1125 09:30:57.460920 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgb2f"] Nov 25 09:30:57 crc kubenswrapper[5043]: I1125 09:30:57.566080 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ea5857f-db78-4aee-85ae-be8663f0d9f1-utilities\") pod \"redhat-marketplace-rgb2f\" (UID: \"5ea5857f-db78-4aee-85ae-be8663f0d9f1\") " pod="openshift-marketplace/redhat-marketplace-rgb2f" Nov 25 09:30:57 crc kubenswrapper[5043]: I1125 09:30:57.566138 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ea5857f-db78-4aee-85ae-be8663f0d9f1-catalog-content\") pod \"redhat-marketplace-rgb2f\" (UID: 
\"5ea5857f-db78-4aee-85ae-be8663f0d9f1\") " pod="openshift-marketplace/redhat-marketplace-rgb2f" Nov 25 09:30:57 crc kubenswrapper[5043]: I1125 09:30:57.566250 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb2tq\" (UniqueName: \"kubernetes.io/projected/5ea5857f-db78-4aee-85ae-be8663f0d9f1-kube-api-access-wb2tq\") pod \"redhat-marketplace-rgb2f\" (UID: \"5ea5857f-db78-4aee-85ae-be8663f0d9f1\") " pod="openshift-marketplace/redhat-marketplace-rgb2f" Nov 25 09:30:57 crc kubenswrapper[5043]: I1125 09:30:57.668325 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ea5857f-db78-4aee-85ae-be8663f0d9f1-utilities\") pod \"redhat-marketplace-rgb2f\" (UID: \"5ea5857f-db78-4aee-85ae-be8663f0d9f1\") " pod="openshift-marketplace/redhat-marketplace-rgb2f" Nov 25 09:30:57 crc kubenswrapper[5043]: I1125 09:30:57.668588 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ea5857f-db78-4aee-85ae-be8663f0d9f1-catalog-content\") pod \"redhat-marketplace-rgb2f\" (UID: \"5ea5857f-db78-4aee-85ae-be8663f0d9f1\") " pod="openshift-marketplace/redhat-marketplace-rgb2f" Nov 25 09:30:57 crc kubenswrapper[5043]: I1125 09:30:57.668757 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb2tq\" (UniqueName: \"kubernetes.io/projected/5ea5857f-db78-4aee-85ae-be8663f0d9f1-kube-api-access-wb2tq\") pod \"redhat-marketplace-rgb2f\" (UID: \"5ea5857f-db78-4aee-85ae-be8663f0d9f1\") " pod="openshift-marketplace/redhat-marketplace-rgb2f" Nov 25 09:30:57 crc kubenswrapper[5043]: I1125 09:30:57.668888 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ea5857f-db78-4aee-85ae-be8663f0d9f1-utilities\") pod \"redhat-marketplace-rgb2f\" (UID: 
\"5ea5857f-db78-4aee-85ae-be8663f0d9f1\") " pod="openshift-marketplace/redhat-marketplace-rgb2f" Nov 25 09:30:57 crc kubenswrapper[5043]: I1125 09:30:57.669047 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ea5857f-db78-4aee-85ae-be8663f0d9f1-catalog-content\") pod \"redhat-marketplace-rgb2f\" (UID: \"5ea5857f-db78-4aee-85ae-be8663f0d9f1\") " pod="openshift-marketplace/redhat-marketplace-rgb2f" Nov 25 09:30:57 crc kubenswrapper[5043]: I1125 09:30:57.696696 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb2tq\" (UniqueName: \"kubernetes.io/projected/5ea5857f-db78-4aee-85ae-be8663f0d9f1-kube-api-access-wb2tq\") pod \"redhat-marketplace-rgb2f\" (UID: \"5ea5857f-db78-4aee-85ae-be8663f0d9f1\") " pod="openshift-marketplace/redhat-marketplace-rgb2f" Nov 25 09:30:57 crc kubenswrapper[5043]: I1125 09:30:57.772805 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgb2f" Nov 25 09:30:58 crc kubenswrapper[5043]: I1125 09:30:58.249697 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgb2f"] Nov 25 09:30:59 crc kubenswrapper[5043]: I1125 09:30:59.140480 5043 generic.go:334] "Generic (PLEG): container finished" podID="5ea5857f-db78-4aee-85ae-be8663f0d9f1" containerID="6ec4ff3894f89ac677517bd8b0ddbae2bf32fe9ddf280a59e5ea79ac10b14264" exitCode=0 Nov 25 09:30:59 crc kubenswrapper[5043]: I1125 09:30:59.140531 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgb2f" event={"ID":"5ea5857f-db78-4aee-85ae-be8663f0d9f1","Type":"ContainerDied","Data":"6ec4ff3894f89ac677517bd8b0ddbae2bf32fe9ddf280a59e5ea79ac10b14264"} Nov 25 09:30:59 crc kubenswrapper[5043]: I1125 09:30:59.141231 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgb2f" 
event={"ID":"5ea5857f-db78-4aee-85ae-be8663f0d9f1","Type":"ContainerStarted","Data":"8fda45c95ec615c8a6256b332de3ee83dd2df7d93895b0ccc892f3c8da41e438"} Nov 25 09:31:01 crc kubenswrapper[5043]: I1125 09:31:01.159423 5043 generic.go:334] "Generic (PLEG): container finished" podID="5ea5857f-db78-4aee-85ae-be8663f0d9f1" containerID="2bc3b103bcb2d7155a73d22ec2512fda1fab176abbbcb48659ddbc47a6d9c269" exitCode=0 Nov 25 09:31:01 crc kubenswrapper[5043]: I1125 09:31:01.159545 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgb2f" event={"ID":"5ea5857f-db78-4aee-85ae-be8663f0d9f1","Type":"ContainerDied","Data":"2bc3b103bcb2d7155a73d22ec2512fda1fab176abbbcb48659ddbc47a6d9c269"} Nov 25 09:31:02 crc kubenswrapper[5043]: I1125 09:31:02.171339 5043 generic.go:334] "Generic (PLEG): container finished" podID="d53359ac-2451-479a-bd73-bec83fc39a47" containerID="62b97edcff02a309bda37976990198a86ef51b77df6ecc2243abbc0c1d5d3324" exitCode=0 Nov 25 09:31:02 crc kubenswrapper[5043]: I1125 09:31:02.171580 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"d53359ac-2451-479a-bd73-bec83fc39a47","Type":"ContainerDied","Data":"62b97edcff02a309bda37976990198a86ef51b77df6ecc2243abbc0c1d5d3324"} Nov 25 09:31:02 crc kubenswrapper[5043]: I1125 09:31:02.174932 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgb2f" event={"ID":"5ea5857f-db78-4aee-85ae-be8663f0d9f1","Type":"ContainerStarted","Data":"799a4f721a6fba0ca568515242a7ed6161a50a0c1ccfdc5eacd06a58d52d23b8"} Nov 25 09:31:02 crc kubenswrapper[5043]: I1125 09:31:02.217927 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rgb2f" podStartSLOduration=2.439034017 podStartE2EDuration="5.217902245s" podCreationTimestamp="2025-11-25 09:30:57 +0000 UTC" firstStartedPulling="2025-11-25 09:30:59.144592634 +0000 UTC 
m=+8123.312788375" lastFinishedPulling="2025-11-25 09:31:01.923460882 +0000 UTC m=+8126.091656603" observedRunningTime="2025-11-25 09:31:02.216888237 +0000 UTC m=+8126.385083958" watchObservedRunningTime="2025-11-25 09:31:02.217902245 +0000 UTC m=+8126.386097966" Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.677058 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.805874 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/d53359ac-2451-479a-bd73-bec83fc39a47-test-operator-clouds-config\") pod \"d53359ac-2451-479a-bd73-bec83fc39a47\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.805925 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/d53359ac-2451-479a-bd73-bec83fc39a47-tobiko-config\") pod \"d53359ac-2451-479a-bd73-bec83fc39a47\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.806087 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d53359ac-2451-479a-bd73-bec83fc39a47-ceph\") pod \"d53359ac-2451-479a-bd73-bec83fc39a47\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.806124 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d53359ac-2451-479a-bd73-bec83fc39a47-ca-certs\") pod \"d53359ac-2451-479a-bd73-bec83fc39a47\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.806152 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-8wv8x\" (UniqueName: \"kubernetes.io/projected/d53359ac-2451-479a-bd73-bec83fc39a47-kube-api-access-8wv8x\") pod \"d53359ac-2451-479a-bd73-bec83fc39a47\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.806191 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/d53359ac-2451-479a-bd73-bec83fc39a47-tobiko-public-key\") pod \"d53359ac-2451-479a-bd73-bec83fc39a47\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.806219 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d53359ac-2451-479a-bd73-bec83fc39a47-test-operator-ephemeral-temporary\") pod \"d53359ac-2451-479a-bd73-bec83fc39a47\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.806275 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/d53359ac-2451-479a-bd73-bec83fc39a47-kubeconfig\") pod \"d53359ac-2451-479a-bd73-bec83fc39a47\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.806364 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"d53359ac-2451-479a-bd73-bec83fc39a47\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.806391 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d53359ac-2451-479a-bd73-bec83fc39a47-test-operator-ephemeral-workdir\") pod \"d53359ac-2451-479a-bd73-bec83fc39a47\" (UID: 
\"d53359ac-2451-479a-bd73-bec83fc39a47\") " Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.806425 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d53359ac-2451-479a-bd73-bec83fc39a47-openstack-config-secret\") pod \"d53359ac-2451-479a-bd73-bec83fc39a47\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.806470 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/d53359ac-2451-479a-bd73-bec83fc39a47-tobiko-private-key\") pod \"d53359ac-2451-479a-bd73-bec83fc39a47\" (UID: \"d53359ac-2451-479a-bd73-bec83fc39a47\") " Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.807387 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d53359ac-2451-479a-bd73-bec83fc39a47-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "d53359ac-2451-479a-bd73-bec83fc39a47" (UID: "d53359ac-2451-479a-bd73-bec83fc39a47"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.817557 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d53359ac-2451-479a-bd73-bec83fc39a47-ceph" (OuterVolumeSpecName: "ceph") pod "d53359ac-2451-479a-bd73-bec83fc39a47" (UID: "d53359ac-2451-479a-bd73-bec83fc39a47"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.821481 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "d53359ac-2451-479a-bd73-bec83fc39a47" (UID: "d53359ac-2451-479a-bd73-bec83fc39a47"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.821589 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d53359ac-2451-479a-bd73-bec83fc39a47-kube-api-access-8wv8x" (OuterVolumeSpecName: "kube-api-access-8wv8x") pod "d53359ac-2451-479a-bd73-bec83fc39a47" (UID: "d53359ac-2451-479a-bd73-bec83fc39a47"). InnerVolumeSpecName "kube-api-access-8wv8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.863622 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d53359ac-2451-479a-bd73-bec83fc39a47-tobiko-private-key" (OuterVolumeSpecName: "tobiko-private-key") pod "d53359ac-2451-479a-bd73-bec83fc39a47" (UID: "d53359ac-2451-479a-bd73-bec83fc39a47"). InnerVolumeSpecName "tobiko-private-key". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.866432 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d53359ac-2451-479a-bd73-bec83fc39a47-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d53359ac-2451-479a-bd73-bec83fc39a47" (UID: "d53359ac-2451-479a-bd73-bec83fc39a47"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.867514 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d53359ac-2451-479a-bd73-bec83fc39a47-kubeconfig" (OuterVolumeSpecName: "kubeconfig") pod "d53359ac-2451-479a-bd73-bec83fc39a47" (UID: "d53359ac-2451-479a-bd73-bec83fc39a47"). InnerVolumeSpecName "kubeconfig". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.894285 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d53359ac-2451-479a-bd73-bec83fc39a47-tobiko-config" (OuterVolumeSpecName: "tobiko-config") pod "d53359ac-2451-479a-bd73-bec83fc39a47" (UID: "d53359ac-2451-479a-bd73-bec83fc39a47"). InnerVolumeSpecName "tobiko-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.900063 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d53359ac-2451-479a-bd73-bec83fc39a47-tobiko-public-key" (OuterVolumeSpecName: "tobiko-public-key") pod "d53359ac-2451-479a-bd73-bec83fc39a47" (UID: "d53359ac-2451-479a-bd73-bec83fc39a47"). InnerVolumeSpecName "tobiko-public-key". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.906323 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d53359ac-2451-479a-bd73-bec83fc39a47-test-operator-clouds-config" (OuterVolumeSpecName: "test-operator-clouds-config") pod "d53359ac-2451-479a-bd73-bec83fc39a47" (UID: "d53359ac-2451-479a-bd73-bec83fc39a47"). InnerVolumeSpecName "test-operator-clouds-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.908817 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d53359ac-2451-479a-bd73-bec83fc39a47-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "d53359ac-2451-479a-bd73-bec83fc39a47" (UID: "d53359ac-2451-479a-bd73-bec83fc39a47"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.909200 5043 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.909222 5043 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d53359ac-2451-479a-bd73-bec83fc39a47-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.909233 5043 reconciler_common.go:293] "Volume detached for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/d53359ac-2451-479a-bd73-bec83fc39a47-tobiko-private-key\") on node \"crc\" DevicePath \"\"" Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.909243 5043 reconciler_common.go:293] "Volume detached for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/d53359ac-2451-479a-bd73-bec83fc39a47-test-operator-clouds-config\") on node \"crc\" DevicePath \"\"" Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.909250 5043 reconciler_common.go:293] "Volume detached for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/d53359ac-2451-479a-bd73-bec83fc39a47-tobiko-config\") on node \"crc\" DevicePath \"\"" Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.909259 5043 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/d53359ac-2451-479a-bd73-bec83fc39a47-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.909265 5043 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d53359ac-2451-479a-bd73-bec83fc39a47-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.909273 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wv8x\" (UniqueName: \"kubernetes.io/projected/d53359ac-2451-479a-bd73-bec83fc39a47-kube-api-access-8wv8x\") on node \"crc\" DevicePath \"\"" Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.909283 5043 reconciler_common.go:293] "Volume detached for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/d53359ac-2451-479a-bd73-bec83fc39a47-tobiko-public-key\") on node \"crc\" DevicePath \"\"" Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.909293 5043 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d53359ac-2451-479a-bd73-bec83fc39a47-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.909302 5043 reconciler_common.go:293] "Volume detached for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/d53359ac-2451-479a-bd73-bec83fc39a47-kubeconfig\") on node \"crc\" DevicePath \"\"" Nov 25 09:31:03 crc kubenswrapper[5043]: I1125 09:31:03.945368 5043 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 25 09:31:04 crc kubenswrapper[5043]: I1125 09:31:04.011360 5043 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 25 09:31:04 crc kubenswrapper[5043]: I1125 
09:31:04.204922 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Nov 25 09:31:04 crc kubenswrapper[5043]: I1125 09:31:04.204936 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"d53359ac-2451-479a-bd73-bec83fc39a47","Type":"ContainerDied","Data":"fcfd39b9094a651407d78eb7881676a4cfba4e8b6e5268e9a6f73a2d9563d20c"} Nov 25 09:31:04 crc kubenswrapper[5043]: I1125 09:31:04.204970 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcfd39b9094a651407d78eb7881676a4cfba4e8b6e5268e9a6f73a2d9563d20c" Nov 25 09:31:05 crc kubenswrapper[5043]: I1125 09:31:05.072054 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d53359ac-2451-479a-bd73-bec83fc39a47-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "d53359ac-2451-479a-bd73-bec83fc39a47" (UID: "d53359ac-2451-479a-bd73-bec83fc39a47"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:31:05 crc kubenswrapper[5043]: I1125 09:31:05.135818 5043 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d53359ac-2451-479a-bd73-bec83fc39a47-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 25 09:31:07 crc kubenswrapper[5043]: I1125 09:31:07.773570 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rgb2f" Nov 25 09:31:07 crc kubenswrapper[5043]: I1125 09:31:07.774188 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rgb2f" Nov 25 09:31:07 crc kubenswrapper[5043]: I1125 09:31:07.835904 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rgb2f" Nov 25 09:31:08 crc kubenswrapper[5043]: I1125 09:31:08.285418 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rgb2f" Nov 25 09:31:08 crc kubenswrapper[5043]: I1125 09:31:08.345786 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgb2f"] Nov 25 09:31:10 crc kubenswrapper[5043]: I1125 09:31:10.257201 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rgb2f" podUID="5ea5857f-db78-4aee-85ae-be8663f0d9f1" containerName="registry-server" containerID="cri-o://799a4f721a6fba0ca568515242a7ed6161a50a0c1ccfdc5eacd06a58d52d23b8" gracePeriod=2 Nov 25 09:31:10 crc kubenswrapper[5043]: I1125 09:31:10.495517 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko"] Nov 25 09:31:10 crc kubenswrapper[5043]: E1125 09:31:10.496186 5043 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d53359ac-2451-479a-bd73-bec83fc39a47" containerName="tobiko-tests-tobiko" Nov 25 09:31:10 crc kubenswrapper[5043]: I1125 09:31:10.496211 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="d53359ac-2451-479a-bd73-bec83fc39a47" containerName="tobiko-tests-tobiko" Nov 25 09:31:10 crc kubenswrapper[5043]: I1125 09:31:10.496451 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="d53359ac-2451-479a-bd73-bec83fc39a47" containerName="tobiko-tests-tobiko" Nov 25 09:31:10 crc kubenswrapper[5043]: I1125 09:31:10.497181 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Nov 25 09:31:10 crc kubenswrapper[5043]: I1125 09:31:10.508148 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko"] Nov 25 09:31:10 crc kubenswrapper[5043]: I1125 09:31:10.543091 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnfns\" (UniqueName: \"kubernetes.io/projected/9a0d0afc-d78f-4156-8d03-e50b825c0cd0-kube-api-access-jnfns\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"9a0d0afc-d78f-4156-8d03-e50b825c0cd0\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Nov 25 09:31:10 crc kubenswrapper[5043]: I1125 09:31:10.543176 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"9a0d0afc-d78f-4156-8d03-e50b825c0cd0\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Nov 25 09:31:10 crc kubenswrapper[5043]: I1125 09:31:10.644782 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") 
pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"9a0d0afc-d78f-4156-8d03-e50b825c0cd0\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Nov 25 09:31:10 crc kubenswrapper[5043]: I1125 09:31:10.645143 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnfns\" (UniqueName: \"kubernetes.io/projected/9a0d0afc-d78f-4156-8d03-e50b825c0cd0-kube-api-access-jnfns\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"9a0d0afc-d78f-4156-8d03-e50b825c0cd0\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Nov 25 09:31:10 crc kubenswrapper[5043]: I1125 09:31:10.645419 5043 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"9a0d0afc-d78f-4156-8d03-e50b825c0cd0\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Nov 25 09:31:10 crc kubenswrapper[5043]: I1125 09:31:10.666353 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnfns\" (UniqueName: \"kubernetes.io/projected/9a0d0afc-d78f-4156-8d03-e50b825c0cd0-kube-api-access-jnfns\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"9a0d0afc-d78f-4156-8d03-e50b825c0cd0\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Nov 25 09:31:10 crc kubenswrapper[5043]: I1125 09:31:10.673351 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"9a0d0afc-d78f-4156-8d03-e50b825c0cd0\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Nov 25 09:31:10 crc kubenswrapper[5043]: I1125 09:31:10.915689 5043 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Nov 25 09:31:11 crc kubenswrapper[5043]: I1125 09:31:11.269951 5043 generic.go:334] "Generic (PLEG): container finished" podID="5ea5857f-db78-4aee-85ae-be8663f0d9f1" containerID="799a4f721a6fba0ca568515242a7ed6161a50a0c1ccfdc5eacd06a58d52d23b8" exitCode=0 Nov 25 09:31:11 crc kubenswrapper[5043]: I1125 09:31:11.270123 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgb2f" event={"ID":"5ea5857f-db78-4aee-85ae-be8663f0d9f1","Type":"ContainerDied","Data":"799a4f721a6fba0ca568515242a7ed6161a50a0c1ccfdc5eacd06a58d52d23b8"} Nov 25 09:31:11 crc kubenswrapper[5043]: I1125 09:31:11.372029 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko"] Nov 25 09:31:11 crc kubenswrapper[5043]: I1125 09:31:11.548016 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgb2f" Nov 25 09:31:11 crc kubenswrapper[5043]: I1125 09:31:11.666641 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ea5857f-db78-4aee-85ae-be8663f0d9f1-catalog-content\") pod \"5ea5857f-db78-4aee-85ae-be8663f0d9f1\" (UID: \"5ea5857f-db78-4aee-85ae-be8663f0d9f1\") " Nov 25 09:31:11 crc kubenswrapper[5043]: I1125 09:31:11.666956 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb2tq\" (UniqueName: \"kubernetes.io/projected/5ea5857f-db78-4aee-85ae-be8663f0d9f1-kube-api-access-wb2tq\") pod \"5ea5857f-db78-4aee-85ae-be8663f0d9f1\" (UID: \"5ea5857f-db78-4aee-85ae-be8663f0d9f1\") " Nov 25 09:31:11 crc kubenswrapper[5043]: I1125 09:31:11.667009 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5ea5857f-db78-4aee-85ae-be8663f0d9f1-utilities\") pod \"5ea5857f-db78-4aee-85ae-be8663f0d9f1\" (UID: \"5ea5857f-db78-4aee-85ae-be8663f0d9f1\") " Nov 25 09:31:11 crc kubenswrapper[5043]: I1125 09:31:11.668237 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ea5857f-db78-4aee-85ae-be8663f0d9f1-utilities" (OuterVolumeSpecName: "utilities") pod "5ea5857f-db78-4aee-85ae-be8663f0d9f1" (UID: "5ea5857f-db78-4aee-85ae-be8663f0d9f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:31:11 crc kubenswrapper[5043]: I1125 09:31:11.677904 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ea5857f-db78-4aee-85ae-be8663f0d9f1-kube-api-access-wb2tq" (OuterVolumeSpecName: "kube-api-access-wb2tq") pod "5ea5857f-db78-4aee-85ae-be8663f0d9f1" (UID: "5ea5857f-db78-4aee-85ae-be8663f0d9f1"). InnerVolumeSpecName "kube-api-access-wb2tq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:31:11 crc kubenswrapper[5043]: I1125 09:31:11.689315 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ea5857f-db78-4aee-85ae-be8663f0d9f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ea5857f-db78-4aee-85ae-be8663f0d9f1" (UID: "5ea5857f-db78-4aee-85ae-be8663f0d9f1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:31:11 crc kubenswrapper[5043]: I1125 09:31:11.768934 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ea5857f-db78-4aee-85ae-be8663f0d9f1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:31:11 crc kubenswrapper[5043]: I1125 09:31:11.769146 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb2tq\" (UniqueName: \"kubernetes.io/projected/5ea5857f-db78-4aee-85ae-be8663f0d9f1-kube-api-access-wb2tq\") on node \"crc\" DevicePath \"\"" Nov 25 09:31:11 crc kubenswrapper[5043]: I1125 09:31:11.769206 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ea5857f-db78-4aee-85ae-be8663f0d9f1-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:31:12 crc kubenswrapper[5043]: I1125 09:31:12.286439 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" event={"ID":"9a0d0afc-d78f-4156-8d03-e50b825c0cd0","Type":"ContainerStarted","Data":"012432cd7ff9555c288864b08e80b69924358d20a723265988ef28620051b8c5"} Nov 25 09:31:12 crc kubenswrapper[5043]: I1125 09:31:12.288904 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgb2f" event={"ID":"5ea5857f-db78-4aee-85ae-be8663f0d9f1","Type":"ContainerDied","Data":"8fda45c95ec615c8a6256b332de3ee83dd2df7d93895b0ccc892f3c8da41e438"} Nov 25 09:31:12 crc kubenswrapper[5043]: I1125 09:31:12.288946 5043 scope.go:117] "RemoveContainer" containerID="799a4f721a6fba0ca568515242a7ed6161a50a0c1ccfdc5eacd06a58d52d23b8" Nov 25 09:31:12 crc kubenswrapper[5043]: I1125 09:31:12.289112 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgb2f" Nov 25 09:31:12 crc kubenswrapper[5043]: I1125 09:31:12.309987 5043 scope.go:117] "RemoveContainer" containerID="2bc3b103bcb2d7155a73d22ec2512fda1fab176abbbcb48659ddbc47a6d9c269" Nov 25 09:31:12 crc kubenswrapper[5043]: I1125 09:31:12.333350 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgb2f"] Nov 25 09:31:12 crc kubenswrapper[5043]: I1125 09:31:12.347310 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgb2f"] Nov 25 09:31:12 crc kubenswrapper[5043]: I1125 09:31:12.355088 5043 scope.go:117] "RemoveContainer" containerID="6ec4ff3894f89ac677517bd8b0ddbae2bf32fe9ddf280a59e5ea79ac10b14264" Nov 25 09:31:12 crc kubenswrapper[5043]: I1125 09:31:12.974656 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ea5857f-db78-4aee-85ae-be8663f0d9f1" path="/var/lib/kubelet/pods/5ea5857f-db78-4aee-85ae-be8663f0d9f1/volumes" Nov 25 09:31:13 crc kubenswrapper[5043]: I1125 09:31:13.304672 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" event={"ID":"9a0d0afc-d78f-4156-8d03-e50b825c0cd0","Type":"ContainerStarted","Data":"705a000b07513bb047ed47d1a08c9b60d1366255d629e06b51daa48823832f4c"} Nov 25 09:31:13 crc kubenswrapper[5043]: I1125 09:31:13.334477 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" podStartSLOduration=2.086757443 podStartE2EDuration="3.334448399s" podCreationTimestamp="2025-11-25 09:31:10 +0000 UTC" firstStartedPulling="2025-11-25 09:31:11.380430421 +0000 UTC m=+8135.548626142" lastFinishedPulling="2025-11-25 09:31:12.628121367 +0000 UTC m=+8136.796317098" observedRunningTime="2025-11-25 09:31:13.316795714 +0000 UTC m=+8137.484991445" watchObservedRunningTime="2025-11-25 09:31:13.334448399 +0000 UTC 
m=+8137.502644140" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.450739 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ansibletest-ansibletest"] Nov 25 09:31:25 crc kubenswrapper[5043]: E1125 09:31:25.451651 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea5857f-db78-4aee-85ae-be8663f0d9f1" containerName="extract-content" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.451666 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea5857f-db78-4aee-85ae-be8663f0d9f1" containerName="extract-content" Nov 25 09:31:25 crc kubenswrapper[5043]: E1125 09:31:25.452542 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea5857f-db78-4aee-85ae-be8663f0d9f1" containerName="registry-server" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.452560 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea5857f-db78-4aee-85ae-be8663f0d9f1" containerName="registry-server" Nov 25 09:31:25 crc kubenswrapper[5043]: E1125 09:31:25.452616 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea5857f-db78-4aee-85ae-be8663f0d9f1" containerName="extract-utilities" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.452623 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea5857f-db78-4aee-85ae-be8663f0d9f1" containerName="extract-utilities" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.452845 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ea5857f-db78-4aee-85ae-be8663f0d9f1" containerName="registry-server" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.453523 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.456180 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.456416 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.473020 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ansibletest-ansibletest"] Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.560601 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3a0103c7-8a95-4675-921c-1b9b4f295df8-openstack-config\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.560691 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.560717 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-workload-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.560759 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/3a0103c7-8a95-4675-921c-1b9b4f295df8-test-operator-ephemeral-temporary\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.560869 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-openstack-config-secret\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.560898 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-compute-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.560919 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfgps\" (UniqueName: \"kubernetes.io/projected/3a0103c7-8a95-4675-921c-1b9b4f295df8-kube-api-access-mfgps\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.560951 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-ceph\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.560985 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3a0103c7-8a95-4675-921c-1b9b4f295df8-test-operator-ephemeral-workdir\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.561004 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-ca-certs\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.663252 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-openstack-config-secret\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.663305 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-compute-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.663328 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfgps\" (UniqueName: \"kubernetes.io/projected/3a0103c7-8a95-4675-921c-1b9b4f295df8-kube-api-access-mfgps\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.663367 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-ceph\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.663407 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3a0103c7-8a95-4675-921c-1b9b4f295df8-test-operator-ephemeral-workdir\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.663425 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-ca-certs\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.663489 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3a0103c7-8a95-4675-921c-1b9b4f295df8-openstack-config\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.663532 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.663554 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-workload-ssh-secret\") pod 
\"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.663595 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3a0103c7-8a95-4675-921c-1b9b4f295df8-test-operator-ephemeral-temporary\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.664126 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3a0103c7-8a95-4675-921c-1b9b4f295df8-test-operator-ephemeral-temporary\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.664381 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3a0103c7-8a95-4675-921c-1b9b4f295df8-test-operator-ephemeral-workdir\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.665150 5043 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.665689 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3a0103c7-8a95-4675-921c-1b9b4f295df8-openstack-config\") pod 
\"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.670892 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-compute-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.671100 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-openstack-config-secret\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.671317 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-ceph\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.680574 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-workload-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.684837 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfgps\" (UniqueName: \"kubernetes.io/projected/3a0103c7-8a95-4675-921c-1b9b4f295df8-kube-api-access-mfgps\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 
09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.690037 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-ca-certs\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.712094 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ansibletest-ansibletest\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " pod="openstack/ansibletest-ansibletest" Nov 25 09:31:25 crc kubenswrapper[5043]: I1125 09:31:25.782980 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ansibletest-ansibletest" Nov 25 09:31:26 crc kubenswrapper[5043]: I1125 09:31:26.269137 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ansibletest-ansibletest"] Nov 25 09:31:26 crc kubenswrapper[5043]: I1125 09:31:26.474559 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ansibletest-ansibletest" event={"ID":"3a0103c7-8a95-4675-921c-1b9b4f295df8","Type":"ContainerStarted","Data":"5d0d92dc36a0d06f71230e84cb9fb3301464164cf242a0336cae8d01ef750a65"} Nov 25 09:31:43 crc kubenswrapper[5043]: E1125 09:31:43.189027 5043 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ansible-tests:current-podified" Nov 25 09:31:43 crc kubenswrapper[5043]: E1125 09:31:43.190024 5043 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 25 09:31:43 crc kubenswrapper[5043]: container 
&Container{Name:ansibletest-ansibletest,Image:quay.io/podified-antelope-centos9/openstack-ansible-tests:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:POD_ANSIBLE_EXTRA_VARS,Value:-e manual_run=false,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_FILE_EXTRA_VARS,Value:--- Nov 25 09:31:43 crc kubenswrapper[5043]: foo: bar Nov 25 09:31:43 crc kubenswrapper[5043]: ,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_GIT_BRANCH,Value:,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_GIT_REPO,Value:https://github.com/ansible/test-playbooks,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_INVENTORY,Value:localhost ansible_connection=local ansible_python_interpreter=python3 Nov 25 09:31:43 crc kubenswrapper[5043]: ,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_PLAYBOOK,Value:./debug.yml,ValueFrom:nil,},EnvVar{Name:POD_DEBUG,Value:false,ValueFrom:nil,},EnvVar{Name:POD_INSTALL_COLLECTIONS,Value:,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{4 0} {} 4 DecimalSI},memory: {{4294967296 0} {} 4Gi BinarySI},},Requests:ResourceList{cpu: {{2 0} {} 2 DecimalSI},memory: {{2147483648 0} {} 2Gi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/ansible,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/AnsibleTests/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/ansible/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/var/lib/ansible/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ca-bundle.trust.crt,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:workload-ssh-secret,ReadOnly:true,MountPath:/var/lib/ansible/test_keypair.key,SubPath:ssh-privatekey,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:compute-ssh-secret,ReadOnly:true,MountPath:/var/lib/ansible/.ssh/compute_id,SubPath:ssh-privatekey,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceph,ReadOnly:true,MountPath:/etc/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfgps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/se
rviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*227,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*227,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ansibletest-ansibletest_openstack(3a0103c7-8a95-4675-921c-1b9b4f295df8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Nov 25 09:31:43 crc kubenswrapper[5043]: > logger="UnhandledError" Nov 25 09:31:43 crc kubenswrapper[5043]: E1125 09:31:43.191264 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ansibletest-ansibletest\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ansibletest-ansibletest" podUID="3a0103c7-8a95-4675-921c-1b9b4f295df8" Nov 25 09:31:43 crc kubenswrapper[5043]: E1125 09:31:43.671377 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ansibletest-ansibletest\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ansible-tests:current-podified\\\"\"" pod="openstack/ansibletest-ansibletest" podUID="3a0103c7-8a95-4675-921c-1b9b4f295df8" Nov 25 09:31:53 crc kubenswrapper[5043]: I1125 09:31:53.964538 5043 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 09:31:56 crc kubenswrapper[5043]: I1125 
09:31:56.788679 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ansibletest-ansibletest" event={"ID":"3a0103c7-8a95-4675-921c-1b9b4f295df8","Type":"ContainerStarted","Data":"4babed1c24958d86ef0801e50bd94834d3454b7cbe555a6df20847316c573a1c"} Nov 25 09:31:56 crc kubenswrapper[5043]: I1125 09:31:56.812792 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ansibletest-ansibletest" podStartSLOduration=3.7318862360000002 podStartE2EDuration="32.812773164s" podCreationTimestamp="2025-11-25 09:31:24 +0000 UTC" firstStartedPulling="2025-11-25 09:31:26.265638274 +0000 UTC m=+8150.433834015" lastFinishedPulling="2025-11-25 09:31:55.346525222 +0000 UTC m=+8179.514720943" observedRunningTime="2025-11-25 09:31:56.803829723 +0000 UTC m=+8180.972025454" watchObservedRunningTime="2025-11-25 09:31:56.812773164 +0000 UTC m=+8180.980968885" Nov 25 09:31:58 crc kubenswrapper[5043]: I1125 09:31:58.805971 5043 generic.go:334] "Generic (PLEG): container finished" podID="3a0103c7-8a95-4675-921c-1b9b4f295df8" containerID="4babed1c24958d86ef0801e50bd94834d3454b7cbe555a6df20847316c573a1c" exitCode=0 Nov 25 09:31:58 crc kubenswrapper[5043]: I1125 09:31:58.806058 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ansibletest-ansibletest" event={"ID":"3a0103c7-8a95-4675-921c-1b9b4f295df8","Type":"ContainerDied","Data":"4babed1c24958d86ef0801e50bd94834d3454b7cbe555a6df20847316c573a1c"} Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.179851 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ansibletest-ansibletest" Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.319146 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfgps\" (UniqueName: \"kubernetes.io/projected/3a0103c7-8a95-4675-921c-1b9b4f295df8-kube-api-access-mfgps\") pod \"3a0103c7-8a95-4675-921c-1b9b4f295df8\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.319749 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-ca-certs\") pod \"3a0103c7-8a95-4675-921c-1b9b4f295df8\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.319791 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3a0103c7-8a95-4675-921c-1b9b4f295df8-openstack-config\") pod \"3a0103c7-8a95-4675-921c-1b9b4f295df8\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.319811 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-compute-ssh-secret\") pod \"3a0103c7-8a95-4675-921c-1b9b4f295df8\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.319851 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"3a0103c7-8a95-4675-921c-1b9b4f295df8\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.319882 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" 
(UniqueName: \"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-openstack-config-secret\") pod \"3a0103c7-8a95-4675-921c-1b9b4f295df8\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.319909 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3a0103c7-8a95-4675-921c-1b9b4f295df8-test-operator-ephemeral-workdir\") pod \"3a0103c7-8a95-4675-921c-1b9b4f295df8\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.319940 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-ceph\") pod \"3a0103c7-8a95-4675-921c-1b9b4f295df8\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.319990 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-workload-ssh-secret\") pod \"3a0103c7-8a95-4675-921c-1b9b4f295df8\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.320031 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3a0103c7-8a95-4675-921c-1b9b4f295df8-test-operator-ephemeral-temporary\") pod \"3a0103c7-8a95-4675-921c-1b9b4f295df8\" (UID: \"3a0103c7-8a95-4675-921c-1b9b4f295df8\") " Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.320780 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a0103c7-8a95-4675-921c-1b9b4f295df8-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod 
"3a0103c7-8a95-4675-921c-1b9b4f295df8" (UID: "3a0103c7-8a95-4675-921c-1b9b4f295df8"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.321887 5043 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3a0103c7-8a95-4675-921c-1b9b4f295df8-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.326226 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a0103c7-8a95-4675-921c-1b9b4f295df8-kube-api-access-mfgps" (OuterVolumeSpecName: "kube-api-access-mfgps") pod "3a0103c7-8a95-4675-921c-1b9b4f295df8" (UID: "3a0103c7-8a95-4675-921c-1b9b4f295df8"). InnerVolumeSpecName "kube-api-access-mfgps". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.331898 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "3a0103c7-8a95-4675-921c-1b9b4f295df8" (UID: "3a0103c7-8a95-4675-921c-1b9b4f295df8"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.333817 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a0103c7-8a95-4675-921c-1b9b4f295df8-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "3a0103c7-8a95-4675-921c-1b9b4f295df8" (UID: "3a0103c7-8a95-4675-921c-1b9b4f295df8"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.337723 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-ceph" (OuterVolumeSpecName: "ceph") pod "3a0103c7-8a95-4675-921c-1b9b4f295df8" (UID: "3a0103c7-8a95-4675-921c-1b9b4f295df8"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.350497 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-workload-ssh-secret" (OuterVolumeSpecName: "workload-ssh-secret") pod "3a0103c7-8a95-4675-921c-1b9b4f295df8" (UID: "3a0103c7-8a95-4675-921c-1b9b4f295df8"). InnerVolumeSpecName "workload-ssh-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.352618 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-compute-ssh-secret" (OuterVolumeSpecName: "compute-ssh-secret") pod "3a0103c7-8a95-4675-921c-1b9b4f295df8" (UID: "3a0103c7-8a95-4675-921c-1b9b4f295df8"). InnerVolumeSpecName "compute-ssh-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.369816 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "3a0103c7-8a95-4675-921c-1b9b4f295df8" (UID: "3a0103c7-8a95-4675-921c-1b9b4f295df8"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.373996 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "3a0103c7-8a95-4675-921c-1b9b4f295df8" (UID: "3a0103c7-8a95-4675-921c-1b9b4f295df8"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.378872 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a0103c7-8a95-4675-921c-1b9b4f295df8-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "3a0103c7-8a95-4675-921c-1b9b4f295df8" (UID: "3a0103c7-8a95-4675-921c-1b9b4f295df8"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.424258 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfgps\" (UniqueName: \"kubernetes.io/projected/3a0103c7-8a95-4675-921c-1b9b4f295df8-kube-api-access-mfgps\") on node \"crc\" DevicePath \"\"" Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.424297 5043 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.424312 5043 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3a0103c7-8a95-4675-921c-1b9b4f295df8-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.424324 5043 reconciler_common.go:293] "Volume detached for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-compute-ssh-secret\") on node \"crc\" DevicePath \"\"" 
Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.424362 5043 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.424376 5043 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.424388 5043 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3a0103c7-8a95-4675-921c-1b9b4f295df8-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.424401 5043 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-ceph\") on node \"crc\" DevicePath \"\"" Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.424413 5043 reconciler_common.go:293] "Volume detached for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/3a0103c7-8a95-4675-921c-1b9b4f295df8-workload-ssh-secret\") on node \"crc\" DevicePath \"\"" Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.447436 5043 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.525754 5043 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.842833 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ansibletest-ansibletest" 
event={"ID":"3a0103c7-8a95-4675-921c-1b9b4f295df8","Type":"ContainerDied","Data":"5d0d92dc36a0d06f71230e84cb9fb3301464164cf242a0336cae8d01ef750a65"} Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.843289 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d0d92dc36a0d06f71230e84cb9fb3301464164cf242a0336cae8d01ef750a65" Nov 25 09:32:00 crc kubenswrapper[5043]: I1125 09:32:00.842942 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ansibletest-ansibletest" Nov 25 09:32:10 crc kubenswrapper[5043]: I1125 09:32:10.530535 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"] Nov 25 09:32:10 crc kubenswrapper[5043]: E1125 09:32:10.531827 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a0103c7-8a95-4675-921c-1b9b4f295df8" containerName="ansibletest-ansibletest" Nov 25 09:32:10 crc kubenswrapper[5043]: I1125 09:32:10.531845 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a0103c7-8a95-4675-921c-1b9b4f295df8" containerName="ansibletest-ansibletest" Nov 25 09:32:10 crc kubenswrapper[5043]: I1125 09:32:10.532106 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a0103c7-8a95-4675-921c-1b9b4f295df8" containerName="ansibletest-ansibletest" Nov 25 09:32:10 crc kubenswrapper[5043]: I1125 09:32:10.533018 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Nov 25 09:32:10 crc kubenswrapper[5043]: I1125 09:32:10.542182 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"] Nov 25 09:32:10 crc kubenswrapper[5043]: I1125 09:32:10.582632 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5l6q\" (UniqueName: \"kubernetes.io/projected/028859ac-e6df-4f39-bd2c-8b884c7c378e-kube-api-access-x5l6q\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"028859ac-e6df-4f39-bd2c-8b884c7c378e\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Nov 25 09:32:10 crc kubenswrapper[5043]: I1125 09:32:10.582704 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"028859ac-e6df-4f39-bd2c-8b884c7c378e\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Nov 25 09:32:10 crc kubenswrapper[5043]: I1125 09:32:10.684785 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5l6q\" (UniqueName: \"kubernetes.io/projected/028859ac-e6df-4f39-bd2c-8b884c7c378e-kube-api-access-x5l6q\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"028859ac-e6df-4f39-bd2c-8b884c7c378e\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Nov 25 09:32:10 crc kubenswrapper[5043]: I1125 09:32:10.684870 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: 
\"028859ac-e6df-4f39-bd2c-8b884c7c378e\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Nov 25 09:32:10 crc kubenswrapper[5043]: I1125 09:32:10.685346 5043 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"028859ac-e6df-4f39-bd2c-8b884c7c378e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Nov 25 09:32:10 crc kubenswrapper[5043]: I1125 09:32:10.722672 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5l6q\" (UniqueName: \"kubernetes.io/projected/028859ac-e6df-4f39-bd2c-8b884c7c378e-kube-api-access-x5l6q\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"028859ac-e6df-4f39-bd2c-8b884c7c378e\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Nov 25 09:32:10 crc kubenswrapper[5043]: I1125 09:32:10.737210 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"028859ac-e6df-4f39-bd2c-8b884c7c378e\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Nov 25 09:32:10 crc kubenswrapper[5043]: I1125 09:32:10.868584 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Nov 25 09:32:11 crc kubenswrapper[5043]: I1125 09:32:11.381315 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"] Nov 25 09:32:11 crc kubenswrapper[5043]: W1125 09:32:11.381518 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod028859ac_e6df_4f39_bd2c_8b884c7c378e.slice/crio-6c76314a7ad7012f43c2aa773fb4f327abf19acde9e90293fe5e3c65e6505490 WatchSource:0}: Error finding container 6c76314a7ad7012f43c2aa773fb4f327abf19acde9e90293fe5e3c65e6505490: Status 404 returned error can't find the container with id 6c76314a7ad7012f43c2aa773fb4f327abf19acde9e90293fe5e3c65e6505490 Nov 25 09:32:11 crc kubenswrapper[5043]: I1125 09:32:11.962396 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" event={"ID":"028859ac-e6df-4f39-bd2c-8b884c7c378e","Type":"ContainerStarted","Data":"6c76314a7ad7012f43c2aa773fb4f327abf19acde9e90293fe5e3c65e6505490"} Nov 25 09:32:14 crc kubenswrapper[5043]: I1125 09:32:14.991033 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" event={"ID":"028859ac-e6df-4f39-bd2c-8b884c7c378e","Type":"ContainerStarted","Data":"91e9b910f46092c7cd3c0806e53baafecdfdc768e9ab5147cab0e27d23c1d64e"} Nov 25 09:32:15 crc kubenswrapper[5043]: I1125 09:32:15.014587 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" podStartSLOduration=2.245034524 podStartE2EDuration="5.014566282s" podCreationTimestamp="2025-11-25 09:32:10 +0000 UTC" firstStartedPulling="2025-11-25 09:32:11.384541192 +0000 UTC m=+8195.552736913" lastFinishedPulling="2025-11-25 09:32:14.15407295 +0000 UTC 
m=+8198.322268671" observedRunningTime="2025-11-25 09:32:15.006043182 +0000 UTC m=+8199.174238923" watchObservedRunningTime="2025-11-25 09:32:15.014566282 +0000 UTC m=+8199.182762003" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.023955 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizontest-tests-horizontest"] Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.027130 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.029371 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizontest-tests-horizontesthorizontest-config" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.030770 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"test-operator-clouds-config" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.061906 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizontest-tests-horizontest"] Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.121843 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/452efbe7-7e6a-4e2a-8a22-1dfa69176628-test-operator-ephemeral-temporary\") pod \"horizontest-tests-horizontest\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") " pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.121977 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/452efbe7-7e6a-4e2a-8a22-1dfa69176628-test-operator-ephemeral-workdir\") pod \"horizontest-tests-horizontest\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") " pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: 
I1125 09:32:25.122224 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"horizontest-tests-horizontest\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") " pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.122425 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g4qb\" (UniqueName: \"kubernetes.io/projected/452efbe7-7e6a-4e2a-8a22-1dfa69176628-kube-api-access-7g4qb\") pod \"horizontest-tests-horizontest\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") " pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.122498 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/452efbe7-7e6a-4e2a-8a22-1dfa69176628-ca-certs\") pod \"horizontest-tests-horizontest\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") " pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.122564 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/452efbe7-7e6a-4e2a-8a22-1dfa69176628-test-operator-clouds-config\") pod \"horizontest-tests-horizontest\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") " pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.122731 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/452efbe7-7e6a-4e2a-8a22-1dfa69176628-openstack-config-secret\") pod \"horizontest-tests-horizontest\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") " 
pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.122767 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/452efbe7-7e6a-4e2a-8a22-1dfa69176628-ceph\") pod \"horizontest-tests-horizontest\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") " pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.225045 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g4qb\" (UniqueName: \"kubernetes.io/projected/452efbe7-7e6a-4e2a-8a22-1dfa69176628-kube-api-access-7g4qb\") pod \"horizontest-tests-horizontest\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") " pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.225158 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/452efbe7-7e6a-4e2a-8a22-1dfa69176628-ca-certs\") pod \"horizontest-tests-horizontest\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") " pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.225207 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/452efbe7-7e6a-4e2a-8a22-1dfa69176628-test-operator-clouds-config\") pod \"horizontest-tests-horizontest\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") " pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.225332 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/452efbe7-7e6a-4e2a-8a22-1dfa69176628-openstack-config-secret\") pod \"horizontest-tests-horizontest\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") " 
pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.225371 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/452efbe7-7e6a-4e2a-8a22-1dfa69176628-ceph\") pod \"horizontest-tests-horizontest\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") " pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.226439 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/452efbe7-7e6a-4e2a-8a22-1dfa69176628-test-operator-ephemeral-temporary\") pod \"horizontest-tests-horizontest\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") " pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.226556 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/452efbe7-7e6a-4e2a-8a22-1dfa69176628-test-operator-ephemeral-workdir\") pod \"horizontest-tests-horizontest\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") " pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.226696 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"horizontest-tests-horizontest\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") " pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.227090 5043 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"horizontest-tests-horizontest\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") device mount path \"/mnt/openstack/pv08\"" 
pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.228283 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/452efbe7-7e6a-4e2a-8a22-1dfa69176628-test-operator-ephemeral-workdir\") pod \"horizontest-tests-horizontest\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") " pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.228307 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/452efbe7-7e6a-4e2a-8a22-1dfa69176628-test-operator-ephemeral-temporary\") pod \"horizontest-tests-horizontest\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") " pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.231174 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/452efbe7-7e6a-4e2a-8a22-1dfa69176628-test-operator-clouds-config\") pod \"horizontest-tests-horizontest\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") " pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.232304 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/452efbe7-7e6a-4e2a-8a22-1dfa69176628-ceph\") pod \"horizontest-tests-horizontest\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") " pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.245375 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/452efbe7-7e6a-4e2a-8a22-1dfa69176628-openstack-config-secret\") pod \"horizontest-tests-horizontest\" (UID: 
\"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") " pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.249441 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/452efbe7-7e6a-4e2a-8a22-1dfa69176628-ca-certs\") pod \"horizontest-tests-horizontest\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") " pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.252225 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g4qb\" (UniqueName: \"kubernetes.io/projected/452efbe7-7e6a-4e2a-8a22-1dfa69176628-kube-api-access-7g4qb\") pod \"horizontest-tests-horizontest\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") " pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.281574 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"horizontest-tests-horizontest\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") " pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.364621 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizontest-tests-horizontest" Nov 25 09:32:25 crc kubenswrapper[5043]: I1125 09:32:25.798955 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizontest-tests-horizontest"] Nov 25 09:32:25 crc kubenswrapper[5043]: W1125 09:32:25.809867 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod452efbe7_7e6a_4e2a_8a22_1dfa69176628.slice/crio-1e7c75b2c25b23402895be94c4c009dc6f4ad4fd15c95012aeb9e2c676571173 WatchSource:0}: Error finding container 1e7c75b2c25b23402895be94c4c009dc6f4ad4fd15c95012aeb9e2c676571173: Status 404 returned error can't find the container with id 1e7c75b2c25b23402895be94c4c009dc6f4ad4fd15c95012aeb9e2c676571173 Nov 25 09:32:26 crc kubenswrapper[5043]: I1125 09:32:26.328876 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizontest-tests-horizontest" event={"ID":"452efbe7-7e6a-4e2a-8a22-1dfa69176628","Type":"ContainerStarted","Data":"1e7c75b2c25b23402895be94c4c009dc6f4ad4fd15c95012aeb9e2c676571173"} Nov 25 09:32:47 crc kubenswrapper[5043]: I1125 09:32:47.278029 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:32:47 crc kubenswrapper[5043]: I1125 09:32:47.278583 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:32:55 crc kubenswrapper[5043]: E1125 09:32:55.502501 5043 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = 
copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizontest:current-podified" Nov 25 09:32:55 crc kubenswrapper[5043]: E1125 09:32:55.503262 5043 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizontest-tests-horizontest,Image:quay.io/podified-antelope-centos9/openstack-horizontest:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADMIN_PASSWORD,Value:12345678,ValueFrom:nil,},EnvVar{Name:ADMIN_USERNAME,Value:admin,ValueFrom:nil,},EnvVar{Name:AUTH_URL,Value:https://keystone-public-openstack.apps-crc.testing,ValueFrom:nil,},EnvVar{Name:DASHBOARD_URL,Value:https://horizon-openstack.apps-crc.testing/,ValueFrom:nil,},EnvVar{Name:EXTRA_FLAG,Value:not pagination and test_users.py,ValueFrom:nil,},EnvVar{Name:FLAVOR_NAME,Value:m1.tiny,ValueFrom:nil,},EnvVar{Name:HORIZONTEST_DEBUG_MODE,Value:false,ValueFrom:nil,},EnvVar{Name:HORIZON_KEYS_FOLDER,Value:/etc/test_operator,ValueFrom:nil,},EnvVar{Name:HORIZON_LOGS_DIR_NAME,Value:horizon,ValueFrom:nil,},EnvVar{Name:HORIZON_REPO_BRANCH,Value:master,ValueFrom:nil,},EnvVar{Name:IMAGE_FILE,Value:/var/lib/horizontest/cirros-0.6.2-x86_64-disk.img,ValueFrom:nil,},EnvVar{Name:IMAGE_FILE_NAME,Value:cirros-0.6.2-x86_64-disk,ValueFrom:nil,},EnvVar{Name:IMAGE_URL,Value:http://download.cirros-cloud.net/0.6.2/cirros-0.6.2-x86_64-disk.img,ValueFrom:nil,},EnvVar{Name:PASSWORD,Value:horizontest,ValueFrom:nil,},EnvVar{Name:PROJECT_NAME,Value:horizontest,ValueFrom:nil,},EnvVar{Name:PROJECT_NAME_XPATH,Value://*[@class=\"context-project\"]//ancestor::ul,ValueFrom:nil,},EnvVar{Name:REPO_URL,Value:https://review.opendev.org/openstack/horizon,ValueFrom:nil,},EnvVar{Name:USER_NAME,Value:horizontest,ValueFrom:nil,},EnvVar{Name:USE_EXTERNAL_FILES,Value:True,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{2 0} {} 2 DecimalSI},memory: {{4294967296 0} {} 4Gi BinarySI},},Requests:ResourceList{cpu: {{1 0} {} 1 
DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/horizontest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/horizontest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-clouds-config,ReadOnly:true,MountPath:/var/lib/horizontest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-clouds-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ca-bundle.trust.crt,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceph,ReadOnly:true,MountPath:/etc/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7g4qb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN 
NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42455,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42455,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizontest-tests-horizontest_openstack(452efbe7-7e6a-4e2a-8a22-1dfa69176628): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 09:32:55 crc kubenswrapper[5043]: E1125 09:32:55.504812 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizontest-tests-horizontest\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/horizontest-tests-horizontest" podUID="452efbe7-7e6a-4e2a-8a22-1dfa69176628" Nov 25 09:32:55 crc kubenswrapper[5043]: E1125 09:32:55.639852 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizontest-tests-horizontest\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizontest:current-podified\\\"\"" pod="openstack/horizontest-tests-horizontest" podUID="452efbe7-7e6a-4e2a-8a22-1dfa69176628" Nov 25 09:33:07 crc kubenswrapper[5043]: I1125 09:33:07.774000 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizontest-tests-horizontest" event={"ID":"452efbe7-7e6a-4e2a-8a22-1dfa69176628","Type":"ContainerStarted","Data":"998f90b5c04877d627c43718ff60baad5a07af5fe8927912cd343a3614200536"} Nov 25 09:33:07 crc kubenswrapper[5043]: I1125 09:33:07.800495 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizontest-tests-horizontest" 
podStartSLOduration=4.095307275 podStartE2EDuration="44.800474444s" podCreationTimestamp="2025-11-25 09:32:23 +0000 UTC" firstStartedPulling="2025-11-25 09:32:25.813365756 +0000 UTC m=+8209.981561477" lastFinishedPulling="2025-11-25 09:33:06.518532925 +0000 UTC m=+8250.686728646" observedRunningTime="2025-11-25 09:33:07.795520341 +0000 UTC m=+8251.963716152" watchObservedRunningTime="2025-11-25 09:33:07.800474444 +0000 UTC m=+8251.968670165" Nov 25 09:33:17 crc kubenswrapper[5043]: I1125 09:33:17.276828 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:33:17 crc kubenswrapper[5043]: I1125 09:33:17.277412 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:33:47 crc kubenswrapper[5043]: I1125 09:33:47.280877 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:33:47 crc kubenswrapper[5043]: I1125 09:33:47.281577 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:33:47 crc kubenswrapper[5043]: I1125 
09:33:47.281724 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 09:33:47 crc kubenswrapper[5043]: I1125 09:33:47.282685 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c03b8cdf1887b6f6e3c18a315ebbcdbe865cf251968052a9886eadd6550871b9"} pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 09:33:47 crc kubenswrapper[5043]: I1125 09:33:47.282801 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://c03b8cdf1887b6f6e3c18a315ebbcdbe865cf251968052a9886eadd6550871b9" gracePeriod=600 Nov 25 09:33:48 crc kubenswrapper[5043]: I1125 09:33:48.206112 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="c03b8cdf1887b6f6e3c18a315ebbcdbe865cf251968052a9886eadd6550871b9" exitCode=0 Nov 25 09:33:48 crc kubenswrapper[5043]: I1125 09:33:48.206462 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"c03b8cdf1887b6f6e3c18a315ebbcdbe865cf251968052a9886eadd6550871b9"} Nov 25 09:33:48 crc kubenswrapper[5043]: I1125 09:33:48.206493 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880"} Nov 25 09:33:48 crc kubenswrapper[5043]: I1125 09:33:48.206508 5043 scope.go:117] 
"RemoveContainer" containerID="a83c0560ffa800ea3d08e29eabee4fc6511dcd0faadc0315e5e35c1c60137ffd" Nov 25 09:34:14 crc kubenswrapper[5043]: I1125 09:34:14.891834 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rbwh6"] Nov 25 09:34:14 crc kubenswrapper[5043]: I1125 09:34:14.894500 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rbwh6" Nov 25 09:34:14 crc kubenswrapper[5043]: I1125 09:34:14.909080 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rbwh6"] Nov 25 09:34:14 crc kubenswrapper[5043]: I1125 09:34:14.938106 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwc2t\" (UniqueName: \"kubernetes.io/projected/9667fe98-9391-43f6-9d26-2131532a0570-kube-api-access-cwc2t\") pod \"certified-operators-rbwh6\" (UID: \"9667fe98-9391-43f6-9d26-2131532a0570\") " pod="openshift-marketplace/certified-operators-rbwh6" Nov 25 09:34:14 crc kubenswrapper[5043]: I1125 09:34:14.938304 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9667fe98-9391-43f6-9d26-2131532a0570-utilities\") pod \"certified-operators-rbwh6\" (UID: \"9667fe98-9391-43f6-9d26-2131532a0570\") " pod="openshift-marketplace/certified-operators-rbwh6" Nov 25 09:34:14 crc kubenswrapper[5043]: I1125 09:34:14.938345 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9667fe98-9391-43f6-9d26-2131532a0570-catalog-content\") pod \"certified-operators-rbwh6\" (UID: \"9667fe98-9391-43f6-9d26-2131532a0570\") " pod="openshift-marketplace/certified-operators-rbwh6" Nov 25 09:34:15 crc kubenswrapper[5043]: I1125 09:34:15.039963 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9667fe98-9391-43f6-9d26-2131532a0570-utilities\") pod \"certified-operators-rbwh6\" (UID: \"9667fe98-9391-43f6-9d26-2131532a0570\") " pod="openshift-marketplace/certified-operators-rbwh6" Nov 25 09:34:15 crc kubenswrapper[5043]: I1125 09:34:15.040071 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9667fe98-9391-43f6-9d26-2131532a0570-catalog-content\") pod \"certified-operators-rbwh6\" (UID: \"9667fe98-9391-43f6-9d26-2131532a0570\") " pod="openshift-marketplace/certified-operators-rbwh6" Nov 25 09:34:15 crc kubenswrapper[5043]: I1125 09:34:15.040132 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwc2t\" (UniqueName: \"kubernetes.io/projected/9667fe98-9391-43f6-9d26-2131532a0570-kube-api-access-cwc2t\") pod \"certified-operators-rbwh6\" (UID: \"9667fe98-9391-43f6-9d26-2131532a0570\") " pod="openshift-marketplace/certified-operators-rbwh6" Nov 25 09:34:15 crc kubenswrapper[5043]: I1125 09:34:15.041671 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9667fe98-9391-43f6-9d26-2131532a0570-utilities\") pod \"certified-operators-rbwh6\" (UID: \"9667fe98-9391-43f6-9d26-2131532a0570\") " pod="openshift-marketplace/certified-operators-rbwh6" Nov 25 09:34:15 crc kubenswrapper[5043]: I1125 09:34:15.041826 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9667fe98-9391-43f6-9d26-2131532a0570-catalog-content\") pod \"certified-operators-rbwh6\" (UID: \"9667fe98-9391-43f6-9d26-2131532a0570\") " pod="openshift-marketplace/certified-operators-rbwh6" Nov 25 09:34:15 crc kubenswrapper[5043]: I1125 09:34:15.065053 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cwc2t\" (UniqueName: \"kubernetes.io/projected/9667fe98-9391-43f6-9d26-2131532a0570-kube-api-access-cwc2t\") pod \"certified-operators-rbwh6\" (UID: \"9667fe98-9391-43f6-9d26-2131532a0570\") " pod="openshift-marketplace/certified-operators-rbwh6" Nov 25 09:34:15 crc kubenswrapper[5043]: I1125 09:34:15.255123 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rbwh6" Nov 25 09:34:15 crc kubenswrapper[5043]: I1125 09:34:15.943871 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rbwh6"] Nov 25 09:34:16 crc kubenswrapper[5043]: I1125 09:34:16.270976 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-svvj4"] Nov 25 09:34:16 crc kubenswrapper[5043]: I1125 09:34:16.273290 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-svvj4" Nov 25 09:34:16 crc kubenswrapper[5043]: I1125 09:34:16.303794 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-svvj4"] Nov 25 09:34:16 crc kubenswrapper[5043]: I1125 09:34:16.368991 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztwqj\" (UniqueName: \"kubernetes.io/projected/cbf8ac91-46f0-4df9-9800-baa08ab7e14b-kube-api-access-ztwqj\") pod \"community-operators-svvj4\" (UID: \"cbf8ac91-46f0-4df9-9800-baa08ab7e14b\") " pod="openshift-marketplace/community-operators-svvj4" Nov 25 09:34:16 crc kubenswrapper[5043]: I1125 09:34:16.369059 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf8ac91-46f0-4df9-9800-baa08ab7e14b-utilities\") pod \"community-operators-svvj4\" (UID: \"cbf8ac91-46f0-4df9-9800-baa08ab7e14b\") " pod="openshift-marketplace/community-operators-svvj4" 
Nov 25 09:34:16 crc kubenswrapper[5043]: I1125 09:34:16.369260 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf8ac91-46f0-4df9-9800-baa08ab7e14b-catalog-content\") pod \"community-operators-svvj4\" (UID: \"cbf8ac91-46f0-4df9-9800-baa08ab7e14b\") " pod="openshift-marketplace/community-operators-svvj4" Nov 25 09:34:16 crc kubenswrapper[5043]: I1125 09:34:16.471871 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf8ac91-46f0-4df9-9800-baa08ab7e14b-catalog-content\") pod \"community-operators-svvj4\" (UID: \"cbf8ac91-46f0-4df9-9800-baa08ab7e14b\") " pod="openshift-marketplace/community-operators-svvj4" Nov 25 09:34:16 crc kubenswrapper[5043]: I1125 09:34:16.471978 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztwqj\" (UniqueName: \"kubernetes.io/projected/cbf8ac91-46f0-4df9-9800-baa08ab7e14b-kube-api-access-ztwqj\") pod \"community-operators-svvj4\" (UID: \"cbf8ac91-46f0-4df9-9800-baa08ab7e14b\") " pod="openshift-marketplace/community-operators-svvj4" Nov 25 09:34:16 crc kubenswrapper[5043]: I1125 09:34:16.472027 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf8ac91-46f0-4df9-9800-baa08ab7e14b-utilities\") pod \"community-operators-svvj4\" (UID: \"cbf8ac91-46f0-4df9-9800-baa08ab7e14b\") " pod="openshift-marketplace/community-operators-svvj4" Nov 25 09:34:16 crc kubenswrapper[5043]: I1125 09:34:16.472467 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf8ac91-46f0-4df9-9800-baa08ab7e14b-catalog-content\") pod \"community-operators-svvj4\" (UID: \"cbf8ac91-46f0-4df9-9800-baa08ab7e14b\") " pod="openshift-marketplace/community-operators-svvj4" Nov 
25 09:34:16 crc kubenswrapper[5043]: I1125 09:34:16.472502 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf8ac91-46f0-4df9-9800-baa08ab7e14b-utilities\") pod \"community-operators-svvj4\" (UID: \"cbf8ac91-46f0-4df9-9800-baa08ab7e14b\") " pod="openshift-marketplace/community-operators-svvj4" Nov 25 09:34:16 crc kubenswrapper[5043]: I1125 09:34:16.492674 5043 generic.go:334] "Generic (PLEG): container finished" podID="9667fe98-9391-43f6-9d26-2131532a0570" containerID="a7c0687774e454d937e6db09c05c1ccad0085c4e356075196501a5f045f4cf9a" exitCode=0 Nov 25 09:34:16 crc kubenswrapper[5043]: I1125 09:34:16.493217 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztwqj\" (UniqueName: \"kubernetes.io/projected/cbf8ac91-46f0-4df9-9800-baa08ab7e14b-kube-api-access-ztwqj\") pod \"community-operators-svvj4\" (UID: \"cbf8ac91-46f0-4df9-9800-baa08ab7e14b\") " pod="openshift-marketplace/community-operators-svvj4" Nov 25 09:34:16 crc kubenswrapper[5043]: I1125 09:34:16.493341 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rbwh6" event={"ID":"9667fe98-9391-43f6-9d26-2131532a0570","Type":"ContainerDied","Data":"a7c0687774e454d937e6db09c05c1ccad0085c4e356075196501a5f045f4cf9a"} Nov 25 09:34:16 crc kubenswrapper[5043]: I1125 09:34:16.493392 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rbwh6" event={"ID":"9667fe98-9391-43f6-9d26-2131532a0570","Type":"ContainerStarted","Data":"40443571e5c651ebf209574de5ed5039a93511631e64f3432b22ce2d36c15b13"} Nov 25 09:34:16 crc kubenswrapper[5043]: I1125 09:34:16.596055 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-svvj4" Nov 25 09:34:17 crc kubenswrapper[5043]: W1125 09:34:17.167817 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf8ac91_46f0_4df9_9800_baa08ab7e14b.slice/crio-9d95b77611a37b0a24cc0cca4e12482808dc3d70918de75528c96e8d2182e9b7 WatchSource:0}: Error finding container 9d95b77611a37b0a24cc0cca4e12482808dc3d70918de75528c96e8d2182e9b7: Status 404 returned error can't find the container with id 9d95b77611a37b0a24cc0cca4e12482808dc3d70918de75528c96e8d2182e9b7 Nov 25 09:34:17 crc kubenswrapper[5043]: I1125 09:34:17.174072 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-svvj4"] Nov 25 09:34:17 crc kubenswrapper[5043]: I1125 09:34:17.506540 5043 generic.go:334] "Generic (PLEG): container finished" podID="cbf8ac91-46f0-4df9-9800-baa08ab7e14b" containerID="818f7cf818a5ea926c1c78d049474646b7925bb64dd381081c65c3d107e6c402" exitCode=0 Nov 25 09:34:17 crc kubenswrapper[5043]: I1125 09:34:17.506641 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-svvj4" event={"ID":"cbf8ac91-46f0-4df9-9800-baa08ab7e14b","Type":"ContainerDied","Data":"818f7cf818a5ea926c1c78d049474646b7925bb64dd381081c65c3d107e6c402"} Nov 25 09:34:17 crc kubenswrapper[5043]: I1125 09:34:17.507014 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-svvj4" event={"ID":"cbf8ac91-46f0-4df9-9800-baa08ab7e14b","Type":"ContainerStarted","Data":"9d95b77611a37b0a24cc0cca4e12482808dc3d70918de75528c96e8d2182e9b7"} Nov 25 09:34:18 crc kubenswrapper[5043]: I1125 09:34:18.538780 5043 generic.go:334] "Generic (PLEG): container finished" podID="9667fe98-9391-43f6-9d26-2131532a0570" containerID="d2770849e298d46a1d8600f047b002d6db5618951cb9342dd55251bdbd2e0d13" exitCode=0 Nov 25 09:34:18 crc kubenswrapper[5043]: I1125 
09:34:18.538930 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rbwh6" event={"ID":"9667fe98-9391-43f6-9d26-2131532a0570","Type":"ContainerDied","Data":"d2770849e298d46a1d8600f047b002d6db5618951cb9342dd55251bdbd2e0d13"} Nov 25 09:34:19 crc kubenswrapper[5043]: I1125 09:34:19.553473 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rbwh6" event={"ID":"9667fe98-9391-43f6-9d26-2131532a0570","Type":"ContainerStarted","Data":"bfe0af472fc9f2ae95c5186a4958cbf1d383c429faeec8e0325d6244d4fe28bb"} Nov 25 09:34:19 crc kubenswrapper[5043]: I1125 09:34:19.556530 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-svvj4" event={"ID":"cbf8ac91-46f0-4df9-9800-baa08ab7e14b","Type":"ContainerStarted","Data":"ed2a54de8301b2fb1665e611988d3dde509f396102ee988bbcbd150b9c4bd276"} Nov 25 09:34:19 crc kubenswrapper[5043]: I1125 09:34:19.598556 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rbwh6" podStartSLOduration=2.969027725 podStartE2EDuration="5.598535426s" podCreationTimestamp="2025-11-25 09:34:14 +0000 UTC" firstStartedPulling="2025-11-25 09:34:16.495713638 +0000 UTC m=+8320.663909359" lastFinishedPulling="2025-11-25 09:34:19.125221319 +0000 UTC m=+8323.293417060" observedRunningTime="2025-11-25 09:34:19.577502319 +0000 UTC m=+8323.745698040" watchObservedRunningTime="2025-11-25 09:34:19.598535426 +0000 UTC m=+8323.766731157" Nov 25 09:34:23 crc kubenswrapper[5043]: I1125 09:34:23.590689 5043 generic.go:334] "Generic (PLEG): container finished" podID="cbf8ac91-46f0-4df9-9800-baa08ab7e14b" containerID="ed2a54de8301b2fb1665e611988d3dde509f396102ee988bbcbd150b9c4bd276" exitCode=0 Nov 25 09:34:23 crc kubenswrapper[5043]: I1125 09:34:23.590768 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-svvj4" 
event={"ID":"cbf8ac91-46f0-4df9-9800-baa08ab7e14b","Type":"ContainerDied","Data":"ed2a54de8301b2fb1665e611988d3dde509f396102ee988bbcbd150b9c4bd276"} Nov 25 09:34:24 crc kubenswrapper[5043]: I1125 09:34:24.607139 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-svvj4" event={"ID":"cbf8ac91-46f0-4df9-9800-baa08ab7e14b","Type":"ContainerStarted","Data":"edd3d536796fc40a21fb99956c99744dd840b02b081aeeb1509cda047b226d41"} Nov 25 09:34:24 crc kubenswrapper[5043]: I1125 09:34:24.633205 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-svvj4" podStartSLOduration=2.169236521 podStartE2EDuration="8.633185634s" podCreationTimestamp="2025-11-25 09:34:16 +0000 UTC" firstStartedPulling="2025-11-25 09:34:17.522130926 +0000 UTC m=+8321.690326677" lastFinishedPulling="2025-11-25 09:34:23.986080069 +0000 UTC m=+8328.154275790" observedRunningTime="2025-11-25 09:34:24.626014281 +0000 UTC m=+8328.794210002" watchObservedRunningTime="2025-11-25 09:34:24.633185634 +0000 UTC m=+8328.801381355" Nov 25 09:34:25 crc kubenswrapper[5043]: I1125 09:34:25.255553 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rbwh6" Nov 25 09:34:25 crc kubenswrapper[5043]: I1125 09:34:25.255635 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rbwh6" Nov 25 09:34:25 crc kubenswrapper[5043]: I1125 09:34:25.309687 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rbwh6" Nov 25 09:34:25 crc kubenswrapper[5043]: I1125 09:34:25.677107 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rbwh6" Nov 25 09:34:26 crc kubenswrapper[5043]: I1125 09:34:26.597038 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-svvj4" Nov 25 09:34:26 crc kubenswrapper[5043]: I1125 09:34:26.597114 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-svvj4" Nov 25 09:34:26 crc kubenswrapper[5043]: I1125 09:34:26.878442 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rbwh6"] Nov 25 09:34:27 crc kubenswrapper[5043]: I1125 09:34:27.636290 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rbwh6" podUID="9667fe98-9391-43f6-9d26-2131532a0570" containerName="registry-server" containerID="cri-o://bfe0af472fc9f2ae95c5186a4958cbf1d383c429faeec8e0325d6244d4fe28bb" gracePeriod=2 Nov 25 09:34:27 crc kubenswrapper[5043]: I1125 09:34:27.649082 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-svvj4" podUID="cbf8ac91-46f0-4df9-9800-baa08ab7e14b" containerName="registry-server" probeResult="failure" output=< Nov 25 09:34:27 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s Nov 25 09:34:27 crc kubenswrapper[5043]: > Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.153299 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rbwh6" Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.230797 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwc2t\" (UniqueName: \"kubernetes.io/projected/9667fe98-9391-43f6-9d26-2131532a0570-kube-api-access-cwc2t\") pod \"9667fe98-9391-43f6-9d26-2131532a0570\" (UID: \"9667fe98-9391-43f6-9d26-2131532a0570\") " Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.230873 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9667fe98-9391-43f6-9d26-2131532a0570-catalog-content\") pod \"9667fe98-9391-43f6-9d26-2131532a0570\" (UID: \"9667fe98-9391-43f6-9d26-2131532a0570\") " Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.230967 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9667fe98-9391-43f6-9d26-2131532a0570-utilities\") pod \"9667fe98-9391-43f6-9d26-2131532a0570\" (UID: \"9667fe98-9391-43f6-9d26-2131532a0570\") " Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.232057 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9667fe98-9391-43f6-9d26-2131532a0570-utilities" (OuterVolumeSpecName: "utilities") pod "9667fe98-9391-43f6-9d26-2131532a0570" (UID: "9667fe98-9391-43f6-9d26-2131532a0570"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.244857 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9667fe98-9391-43f6-9d26-2131532a0570-kube-api-access-cwc2t" (OuterVolumeSpecName: "kube-api-access-cwc2t") pod "9667fe98-9391-43f6-9d26-2131532a0570" (UID: "9667fe98-9391-43f6-9d26-2131532a0570"). InnerVolumeSpecName "kube-api-access-cwc2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.278776 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9667fe98-9391-43f6-9d26-2131532a0570-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9667fe98-9391-43f6-9d26-2131532a0570" (UID: "9667fe98-9391-43f6-9d26-2131532a0570"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.333490 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwc2t\" (UniqueName: \"kubernetes.io/projected/9667fe98-9391-43f6-9d26-2131532a0570-kube-api-access-cwc2t\") on node \"crc\" DevicePath \"\"" Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.333527 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9667fe98-9391-43f6-9d26-2131532a0570-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.333556 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9667fe98-9391-43f6-9d26-2131532a0570-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.646866 5043 generic.go:334] "Generic (PLEG): container finished" podID="9667fe98-9391-43f6-9d26-2131532a0570" containerID="bfe0af472fc9f2ae95c5186a4958cbf1d383c429faeec8e0325d6244d4fe28bb" exitCode=0 Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.646916 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rbwh6" event={"ID":"9667fe98-9391-43f6-9d26-2131532a0570","Type":"ContainerDied","Data":"bfe0af472fc9f2ae95c5186a4958cbf1d383c429faeec8e0325d6244d4fe28bb"} Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.647223 5043 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-rbwh6" event={"ID":"9667fe98-9391-43f6-9d26-2131532a0570","Type":"ContainerDied","Data":"40443571e5c651ebf209574de5ed5039a93511631e64f3432b22ce2d36c15b13"} Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.646951 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rbwh6" Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.647241 5043 scope.go:117] "RemoveContainer" containerID="bfe0af472fc9f2ae95c5186a4958cbf1d383c429faeec8e0325d6244d4fe28bb" Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.686898 5043 scope.go:117] "RemoveContainer" containerID="d2770849e298d46a1d8600f047b002d6db5618951cb9342dd55251bdbd2e0d13" Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.690645 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rbwh6"] Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.702533 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rbwh6"] Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.710969 5043 scope.go:117] "RemoveContainer" containerID="a7c0687774e454d937e6db09c05c1ccad0085c4e356075196501a5f045f4cf9a" Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.761219 5043 scope.go:117] "RemoveContainer" containerID="bfe0af472fc9f2ae95c5186a4958cbf1d383c429faeec8e0325d6244d4fe28bb" Nov 25 09:34:28 crc kubenswrapper[5043]: E1125 09:34:28.761566 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfe0af472fc9f2ae95c5186a4958cbf1d383c429faeec8e0325d6244d4fe28bb\": container with ID starting with bfe0af472fc9f2ae95c5186a4958cbf1d383c429faeec8e0325d6244d4fe28bb not found: ID does not exist" containerID="bfe0af472fc9f2ae95c5186a4958cbf1d383c429faeec8e0325d6244d4fe28bb" Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 
09:34:28.761595 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfe0af472fc9f2ae95c5186a4958cbf1d383c429faeec8e0325d6244d4fe28bb"} err="failed to get container status \"bfe0af472fc9f2ae95c5186a4958cbf1d383c429faeec8e0325d6244d4fe28bb\": rpc error: code = NotFound desc = could not find container \"bfe0af472fc9f2ae95c5186a4958cbf1d383c429faeec8e0325d6244d4fe28bb\": container with ID starting with bfe0af472fc9f2ae95c5186a4958cbf1d383c429faeec8e0325d6244d4fe28bb not found: ID does not exist" Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.761673 5043 scope.go:117] "RemoveContainer" containerID="d2770849e298d46a1d8600f047b002d6db5618951cb9342dd55251bdbd2e0d13" Nov 25 09:34:28 crc kubenswrapper[5043]: E1125 09:34:28.761937 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2770849e298d46a1d8600f047b002d6db5618951cb9342dd55251bdbd2e0d13\": container with ID starting with d2770849e298d46a1d8600f047b002d6db5618951cb9342dd55251bdbd2e0d13 not found: ID does not exist" containerID="d2770849e298d46a1d8600f047b002d6db5618951cb9342dd55251bdbd2e0d13" Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.761975 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2770849e298d46a1d8600f047b002d6db5618951cb9342dd55251bdbd2e0d13"} err="failed to get container status \"d2770849e298d46a1d8600f047b002d6db5618951cb9342dd55251bdbd2e0d13\": rpc error: code = NotFound desc = could not find container \"d2770849e298d46a1d8600f047b002d6db5618951cb9342dd55251bdbd2e0d13\": container with ID starting with d2770849e298d46a1d8600f047b002d6db5618951cb9342dd55251bdbd2e0d13 not found: ID does not exist" Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.761994 5043 scope.go:117] "RemoveContainer" containerID="a7c0687774e454d937e6db09c05c1ccad0085c4e356075196501a5f045f4cf9a" Nov 25 09:34:28 crc 
kubenswrapper[5043]: E1125 09:34:28.762453 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7c0687774e454d937e6db09c05c1ccad0085c4e356075196501a5f045f4cf9a\": container with ID starting with a7c0687774e454d937e6db09c05c1ccad0085c4e356075196501a5f045f4cf9a not found: ID does not exist" containerID="a7c0687774e454d937e6db09c05c1ccad0085c4e356075196501a5f045f4cf9a"
Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.762504 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7c0687774e454d937e6db09c05c1ccad0085c4e356075196501a5f045f4cf9a"} err="failed to get container status \"a7c0687774e454d937e6db09c05c1ccad0085c4e356075196501a5f045f4cf9a\": rpc error: code = NotFound desc = could not find container \"a7c0687774e454d937e6db09c05c1ccad0085c4e356075196501a5f045f4cf9a\": container with ID starting with a7c0687774e454d937e6db09c05c1ccad0085c4e356075196501a5f045f4cf9a not found: ID does not exist"
Nov 25 09:34:28 crc kubenswrapper[5043]: I1125 09:34:28.972942 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9667fe98-9391-43f6-9d26-2131532a0570" path="/var/lib/kubelet/pods/9667fe98-9391-43f6-9d26-2131532a0570/volumes"
Nov 25 09:34:36 crc kubenswrapper[5043]: I1125 09:34:36.642435 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-svvj4"
Nov 25 09:34:36 crc kubenswrapper[5043]: I1125 09:34:36.703869 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-svvj4"
Nov 25 09:34:37 crc kubenswrapper[5043]: I1125 09:34:37.893034 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-svvj4"]
Nov 25 09:34:37 crc kubenswrapper[5043]: I1125 09:34:37.898633 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-svvj4" podUID="cbf8ac91-46f0-4df9-9800-baa08ab7e14b" containerName="registry-server" containerID="cri-o://edd3d536796fc40a21fb99956c99744dd840b02b081aeeb1509cda047b226d41" gracePeriod=2
Nov 25 09:34:38 crc kubenswrapper[5043]: I1125 09:34:38.753743 5043 generic.go:334] "Generic (PLEG): container finished" podID="cbf8ac91-46f0-4df9-9800-baa08ab7e14b" containerID="edd3d536796fc40a21fb99956c99744dd840b02b081aeeb1509cda047b226d41" exitCode=0
Nov 25 09:34:38 crc kubenswrapper[5043]: I1125 09:34:38.754137 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-svvj4" event={"ID":"cbf8ac91-46f0-4df9-9800-baa08ab7e14b","Type":"ContainerDied","Data":"edd3d536796fc40a21fb99956c99744dd840b02b081aeeb1509cda047b226d41"}
Nov 25 09:34:38 crc kubenswrapper[5043]: I1125 09:34:38.950445 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-svvj4"
Nov 25 09:34:39 crc kubenswrapper[5043]: I1125 09:34:39.088401 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf8ac91-46f0-4df9-9800-baa08ab7e14b-catalog-content\") pod \"cbf8ac91-46f0-4df9-9800-baa08ab7e14b\" (UID: \"cbf8ac91-46f0-4df9-9800-baa08ab7e14b\") "
Nov 25 09:34:39 crc kubenswrapper[5043]: I1125 09:34:39.088564 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf8ac91-46f0-4df9-9800-baa08ab7e14b-utilities\") pod \"cbf8ac91-46f0-4df9-9800-baa08ab7e14b\" (UID: \"cbf8ac91-46f0-4df9-9800-baa08ab7e14b\") "
Nov 25 09:34:39 crc kubenswrapper[5043]: I1125 09:34:39.088670 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztwqj\" (UniqueName: \"kubernetes.io/projected/cbf8ac91-46f0-4df9-9800-baa08ab7e14b-kube-api-access-ztwqj\") pod \"cbf8ac91-46f0-4df9-9800-baa08ab7e14b\" (UID: \"cbf8ac91-46f0-4df9-9800-baa08ab7e14b\") "
Nov 25 09:34:39 crc kubenswrapper[5043]: I1125 09:34:39.089406 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf8ac91-46f0-4df9-9800-baa08ab7e14b-utilities" (OuterVolumeSpecName: "utilities") pod "cbf8ac91-46f0-4df9-9800-baa08ab7e14b" (UID: "cbf8ac91-46f0-4df9-9800-baa08ab7e14b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 09:34:39 crc kubenswrapper[5043]: I1125 09:34:39.106826 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbf8ac91-46f0-4df9-9800-baa08ab7e14b-kube-api-access-ztwqj" (OuterVolumeSpecName: "kube-api-access-ztwqj") pod "cbf8ac91-46f0-4df9-9800-baa08ab7e14b" (UID: "cbf8ac91-46f0-4df9-9800-baa08ab7e14b"). InnerVolumeSpecName "kube-api-access-ztwqj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 09:34:39 crc kubenswrapper[5043]: I1125 09:34:39.162762 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf8ac91-46f0-4df9-9800-baa08ab7e14b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbf8ac91-46f0-4df9-9800-baa08ab7e14b" (UID: "cbf8ac91-46f0-4df9-9800-baa08ab7e14b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 09:34:39 crc kubenswrapper[5043]: I1125 09:34:39.191951 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztwqj\" (UniqueName: \"kubernetes.io/projected/cbf8ac91-46f0-4df9-9800-baa08ab7e14b-kube-api-access-ztwqj\") on node \"crc\" DevicePath \"\""
Nov 25 09:34:39 crc kubenswrapper[5043]: I1125 09:34:39.192015 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf8ac91-46f0-4df9-9800-baa08ab7e14b-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 09:34:39 crc kubenswrapper[5043]: I1125 09:34:39.192030 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf8ac91-46f0-4df9-9800-baa08ab7e14b-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 09:34:39 crc kubenswrapper[5043]: I1125 09:34:39.766867 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-svvj4" event={"ID":"cbf8ac91-46f0-4df9-9800-baa08ab7e14b","Type":"ContainerDied","Data":"9d95b77611a37b0a24cc0cca4e12482808dc3d70918de75528c96e8d2182e9b7"}
Nov 25 09:34:39 crc kubenswrapper[5043]: I1125 09:34:39.766916 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-svvj4"
Nov 25 09:34:39 crc kubenswrapper[5043]: I1125 09:34:39.767156 5043 scope.go:117] "RemoveContainer" containerID="edd3d536796fc40a21fb99956c99744dd840b02b081aeeb1509cda047b226d41"
Nov 25 09:34:39 crc kubenswrapper[5043]: I1125 09:34:39.789058 5043 scope.go:117] "RemoveContainer" containerID="ed2a54de8301b2fb1665e611988d3dde509f396102ee988bbcbd150b9c4bd276"
Nov 25 09:34:39 crc kubenswrapper[5043]: I1125 09:34:39.810399 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-svvj4"]
Nov 25 09:34:39 crc kubenswrapper[5043]: I1125 09:34:39.823939 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-svvj4"]
Nov 25 09:34:39 crc kubenswrapper[5043]: I1125 09:34:39.848494 5043 scope.go:117] "RemoveContainer" containerID="818f7cf818a5ea926c1c78d049474646b7925bb64dd381081c65c3d107e6c402"
Nov 25 09:34:40 crc kubenswrapper[5043]: I1125 09:34:40.973523 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbf8ac91-46f0-4df9-9800-baa08ab7e14b" path="/var/lib/kubelet/pods/cbf8ac91-46f0-4df9-9800-baa08ab7e14b/volumes"
Nov 25 09:35:19 crc kubenswrapper[5043]: I1125 09:35:19.175128 5043 generic.go:334] "Generic (PLEG): container finished" podID="452efbe7-7e6a-4e2a-8a22-1dfa69176628" containerID="998f90b5c04877d627c43718ff60baad5a07af5fe8927912cd343a3614200536" exitCode=0
Nov 25 09:35:19 crc kubenswrapper[5043]: I1125 09:35:19.175210 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizontest-tests-horizontest" event={"ID":"452efbe7-7e6a-4e2a-8a22-1dfa69176628","Type":"ContainerDied","Data":"998f90b5c04877d627c43718ff60baad5a07af5fe8927912cd343a3614200536"}
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.597227 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizontest-tests-horizontest"
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.648735 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/452efbe7-7e6a-4e2a-8a22-1dfa69176628-test-operator-clouds-config\") pod \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") "
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.649030 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/452efbe7-7e6a-4e2a-8a22-1dfa69176628-openstack-config-secret\") pod \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") "
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.649106 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/452efbe7-7e6a-4e2a-8a22-1dfa69176628-ceph\") pod \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") "
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.649303 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/452efbe7-7e6a-4e2a-8a22-1dfa69176628-test-operator-ephemeral-workdir\") pod \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") "
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.649407 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g4qb\" (UniqueName: \"kubernetes.io/projected/452efbe7-7e6a-4e2a-8a22-1dfa69176628-kube-api-access-7g4qb\") pod \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") "
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.649495 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") "
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.649600 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/452efbe7-7e6a-4e2a-8a22-1dfa69176628-ca-certs\") pod \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") "
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.649724 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/452efbe7-7e6a-4e2a-8a22-1dfa69176628-test-operator-ephemeral-temporary\") pod \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\" (UID: \"452efbe7-7e6a-4e2a-8a22-1dfa69176628\") "
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.650519 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/452efbe7-7e6a-4e2a-8a22-1dfa69176628-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "452efbe7-7e6a-4e2a-8a22-1dfa69176628" (UID: "452efbe7-7e6a-4e2a-8a22-1dfa69176628"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.655174 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/452efbe7-7e6a-4e2a-8a22-1dfa69176628-kube-api-access-7g4qb" (OuterVolumeSpecName: "kube-api-access-7g4qb") pod "452efbe7-7e6a-4e2a-8a22-1dfa69176628" (UID: "452efbe7-7e6a-4e2a-8a22-1dfa69176628"). InnerVolumeSpecName "kube-api-access-7g4qb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.655769 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "452efbe7-7e6a-4e2a-8a22-1dfa69176628" (UID: "452efbe7-7e6a-4e2a-8a22-1dfa69176628"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.661686 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/452efbe7-7e6a-4e2a-8a22-1dfa69176628-ceph" (OuterVolumeSpecName: "ceph") pod "452efbe7-7e6a-4e2a-8a22-1dfa69176628" (UID: "452efbe7-7e6a-4e2a-8a22-1dfa69176628"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.696351 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/452efbe7-7e6a-4e2a-8a22-1dfa69176628-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "452efbe7-7e6a-4e2a-8a22-1dfa69176628" (UID: "452efbe7-7e6a-4e2a-8a22-1dfa69176628"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.708221 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/452efbe7-7e6a-4e2a-8a22-1dfa69176628-test-operator-clouds-config" (OuterVolumeSpecName: "test-operator-clouds-config") pod "452efbe7-7e6a-4e2a-8a22-1dfa69176628" (UID: "452efbe7-7e6a-4e2a-8a22-1dfa69176628"). InnerVolumeSpecName "test-operator-clouds-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.722197 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/452efbe7-7e6a-4e2a-8a22-1dfa69176628-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "452efbe7-7e6a-4e2a-8a22-1dfa69176628" (UID: "452efbe7-7e6a-4e2a-8a22-1dfa69176628"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.752954 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g4qb\" (UniqueName: \"kubernetes.io/projected/452efbe7-7e6a-4e2a-8a22-1dfa69176628-kube-api-access-7g4qb\") on node \"crc\" DevicePath \"\""
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.753070 5043 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.753092 5043 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/452efbe7-7e6a-4e2a-8a22-1dfa69176628-ca-certs\") on node \"crc\" DevicePath \"\""
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.753106 5043 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/452efbe7-7e6a-4e2a-8a22-1dfa69176628-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.753120 5043 reconciler_common.go:293] "Volume detached for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/452efbe7-7e6a-4e2a-8a22-1dfa69176628-test-operator-clouds-config\") on node \"crc\" DevicePath \"\""
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.753134 5043 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/452efbe7-7e6a-4e2a-8a22-1dfa69176628-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.753148 5043 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/452efbe7-7e6a-4e2a-8a22-1dfa69176628-ceph\") on node \"crc\" DevicePath \"\""
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.788209 5043 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.854943 5043 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.873329 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/452efbe7-7e6a-4e2a-8a22-1dfa69176628-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "452efbe7-7e6a-4e2a-8a22-1dfa69176628" (UID: "452efbe7-7e6a-4e2a-8a22-1dfa69176628"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 09:35:20 crc kubenswrapper[5043]: I1125 09:35:20.956900 5043 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/452efbe7-7e6a-4e2a-8a22-1dfa69176628-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Nov 25 09:35:21 crc kubenswrapper[5043]: I1125 09:35:21.194312 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizontest-tests-horizontest" event={"ID":"452efbe7-7e6a-4e2a-8a22-1dfa69176628","Type":"ContainerDied","Data":"1e7c75b2c25b23402895be94c4c009dc6f4ad4fd15c95012aeb9e2c676571173"}
Nov 25 09:35:21 crc kubenswrapper[5043]: I1125 09:35:21.194355 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e7c75b2c25b23402895be94c4c009dc6f4ad4fd15c95012aeb9e2c676571173"
Nov 25 09:35:21 crc kubenswrapper[5043]: I1125 09:35:21.194656 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizontest-tests-horizontest"
Nov 25 09:35:30 crc kubenswrapper[5043]: I1125 09:35:30.351805 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"]
Nov 25 09:35:30 crc kubenswrapper[5043]: E1125 09:35:30.353125 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf8ac91-46f0-4df9-9800-baa08ab7e14b" containerName="extract-utilities"
Nov 25 09:35:30 crc kubenswrapper[5043]: I1125 09:35:30.353150 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf8ac91-46f0-4df9-9800-baa08ab7e14b" containerName="extract-utilities"
Nov 25 09:35:30 crc kubenswrapper[5043]: E1125 09:35:30.353208 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9667fe98-9391-43f6-9d26-2131532a0570" containerName="registry-server"
Nov 25 09:35:30 crc kubenswrapper[5043]: I1125 09:35:30.353223 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="9667fe98-9391-43f6-9d26-2131532a0570" containerName="registry-server"
Nov 25 09:35:30 crc kubenswrapper[5043]: E1125 09:35:30.353252 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="452efbe7-7e6a-4e2a-8a22-1dfa69176628" containerName="horizontest-tests-horizontest"
Nov 25 09:35:30 crc kubenswrapper[5043]: I1125 09:35:30.353267 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="452efbe7-7e6a-4e2a-8a22-1dfa69176628" containerName="horizontest-tests-horizontest"
Nov 25 09:35:30 crc kubenswrapper[5043]: E1125 09:35:30.353311 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9667fe98-9391-43f6-9d26-2131532a0570" containerName="extract-utilities"
Nov 25 09:35:30 crc kubenswrapper[5043]: I1125 09:35:30.353329 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="9667fe98-9391-43f6-9d26-2131532a0570" containerName="extract-utilities"
Nov 25 09:35:30 crc kubenswrapper[5043]: E1125 09:35:30.353417 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf8ac91-46f0-4df9-9800-baa08ab7e14b" containerName="registry-server"
Nov 25 09:35:30 crc kubenswrapper[5043]: I1125 09:35:30.353434 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf8ac91-46f0-4df9-9800-baa08ab7e14b" containerName="registry-server"
Nov 25 09:35:30 crc kubenswrapper[5043]: E1125 09:35:30.353465 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf8ac91-46f0-4df9-9800-baa08ab7e14b" containerName="extract-content"
Nov 25 09:35:30 crc kubenswrapper[5043]: I1125 09:35:30.353478 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf8ac91-46f0-4df9-9800-baa08ab7e14b" containerName="extract-content"
Nov 25 09:35:30 crc kubenswrapper[5043]: E1125 09:35:30.353500 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9667fe98-9391-43f6-9d26-2131532a0570" containerName="extract-content"
Nov 25 09:35:30 crc kubenswrapper[5043]: I1125 09:35:30.353513 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="9667fe98-9391-43f6-9d26-2131532a0570" containerName="extract-content"
Nov 25 09:35:30 crc kubenswrapper[5043]: I1125 09:35:30.353941 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf8ac91-46f0-4df9-9800-baa08ab7e14b" containerName="registry-server"
Nov 25 09:35:30 crc kubenswrapper[5043]: I1125 09:35:30.353965 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="452efbe7-7e6a-4e2a-8a22-1dfa69176628" containerName="horizontest-tests-horizontest"
Nov 25 09:35:30 crc kubenswrapper[5043]: I1125 09:35:30.354003 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="9667fe98-9391-43f6-9d26-2131532a0570" containerName="registry-server"
Nov 25 09:35:30 crc kubenswrapper[5043]: I1125 09:35:30.355301 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"
Nov 25 09:35:30 crc kubenswrapper[5043]: I1125 09:35:30.363848 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"]
Nov 25 09:35:30 crc kubenswrapper[5043]: I1125 09:35:30.549814 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"04e243c7-381a-4239-b3db-881eed1db744\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"
Nov 25 09:35:30 crc kubenswrapper[5043]: I1125 09:35:30.550243 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sclmx\" (UniqueName: \"kubernetes.io/projected/04e243c7-381a-4239-b3db-881eed1db744-kube-api-access-sclmx\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"04e243c7-381a-4239-b3db-881eed1db744\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"
Nov 25 09:35:30 crc kubenswrapper[5043]: I1125 09:35:30.651615 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sclmx\" (UniqueName: \"kubernetes.io/projected/04e243c7-381a-4239-b3db-881eed1db744-kube-api-access-sclmx\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"04e243c7-381a-4239-b3db-881eed1db744\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"
Nov 25 09:35:30 crc kubenswrapper[5043]: I1125 09:35:30.651761 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"04e243c7-381a-4239-b3db-881eed1db744\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"
Nov 25 09:35:30 crc kubenswrapper[5043]: I1125 09:35:30.652210 5043 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"04e243c7-381a-4239-b3db-881eed1db744\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"
Nov 25 09:35:30 crc kubenswrapper[5043]: I1125 09:35:30.677582 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sclmx\" (UniqueName: \"kubernetes.io/projected/04e243c7-381a-4239-b3db-881eed1db744-kube-api-access-sclmx\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"04e243c7-381a-4239-b3db-881eed1db744\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"
Nov 25 09:35:30 crc kubenswrapper[5043]: I1125 09:35:30.680842 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"04e243c7-381a-4239-b3db-881eed1db744\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"
Nov 25 09:35:30 crc kubenswrapper[5043]: I1125 09:35:30.685686 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"
Nov 25 09:35:30 crc kubenswrapper[5043]: E1125 09:35:30.685769 5043 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes"
Nov 25 09:35:31 crc kubenswrapper[5043]: I1125 09:35:31.187757 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"]
Nov 25 09:35:31 crc kubenswrapper[5043]: E1125 09:35:31.191692 5043 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes"
Nov 25 09:35:31 crc kubenswrapper[5043]: I1125 09:35:31.297132 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" event={"ID":"04e243c7-381a-4239-b3db-881eed1db744","Type":"ContainerStarted","Data":"5a37bc98414a0549475764b40104cfded8c14a528273d45d7c2fe45145b15a6f"}
Nov 25 09:35:33 crc kubenswrapper[5043]: E1125 09:35:33.577295 5043 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes"
Nov 25 09:35:34 crc kubenswrapper[5043]: I1125 09:35:34.324940 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" event={"ID":"04e243c7-381a-4239-b3db-881eed1db744","Type":"ContainerStarted","Data":"2c8bd4c3b9efc7720614857bc858fab3d5b8e0be64ac16e94136d670e7ff3fce"}
Nov 25 09:35:34 crc kubenswrapper[5043]: E1125 09:35:34.325716 5043 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes"
Nov 25 09:35:34 crc kubenswrapper[5043]: I1125 09:35:34.341218 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" podStartSLOduration=1.9568671389999999 podStartE2EDuration="4.341196486s" podCreationTimestamp="2025-11-25 09:35:30 +0000 UTC" firstStartedPulling="2025-11-25 09:35:31.192888312 +0000 UTC m=+8395.361084033" lastFinishedPulling="2025-11-25 09:35:33.577217659 +0000 UTC m=+8397.745413380" observedRunningTime="2025-11-25 09:35:34.335730879 +0000 UTC m=+8398.503926600" watchObservedRunningTime="2025-11-25 09:35:34.341196486 +0000 UTC m=+8398.509392207"
Nov 25 09:35:35 crc kubenswrapper[5043]: E1125 09:35:35.343445 5043 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes"
Nov 25 09:35:47 crc kubenswrapper[5043]: I1125 09:35:47.276343 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 09:35:47 crc kubenswrapper[5043]: I1125 09:35:47.278525 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 09:36:01 crc kubenswrapper[5043]: I1125 09:36:01.509390 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vfwzj/must-gather-htgmv"]
Nov 25 09:36:01 crc kubenswrapper[5043]: I1125 09:36:01.514438 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vfwzj/must-gather-htgmv"
Nov 25 09:36:01 crc kubenswrapper[5043]: I1125 09:36:01.516755 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vfwzj"/"openshift-service-ca.crt"
Nov 25 09:36:01 crc kubenswrapper[5043]: I1125 09:36:01.517673 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vfwzj"/"kube-root-ca.crt"
Nov 25 09:36:01 crc kubenswrapper[5043]: I1125 09:36:01.519953 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vfwzj"/"default-dockercfg-mlzhd"
Nov 25 09:36:01 crc kubenswrapper[5043]: I1125 09:36:01.533662 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vfwzj/must-gather-htgmv"]
Nov 25 09:36:01 crc kubenswrapper[5043]: I1125 09:36:01.675640 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkrqb\" (UniqueName: \"kubernetes.io/projected/998025b2-ce6c-47f2-983d-a5f4215c1bd9-kube-api-access-xkrqb\") pod \"must-gather-htgmv\" (UID: \"998025b2-ce6c-47f2-983d-a5f4215c1bd9\") " pod="openshift-must-gather-vfwzj/must-gather-htgmv"
Nov 25 09:36:01 crc kubenswrapper[5043]: I1125 09:36:01.675789 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/998025b2-ce6c-47f2-983d-a5f4215c1bd9-must-gather-output\") pod \"must-gather-htgmv\" (UID: \"998025b2-ce6c-47f2-983d-a5f4215c1bd9\") " pod="openshift-must-gather-vfwzj/must-gather-htgmv"
Nov 25 09:36:01 crc kubenswrapper[5043]: I1125 09:36:01.777116 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkrqb\" (UniqueName: \"kubernetes.io/projected/998025b2-ce6c-47f2-983d-a5f4215c1bd9-kube-api-access-xkrqb\") pod \"must-gather-htgmv\" (UID: \"998025b2-ce6c-47f2-983d-a5f4215c1bd9\") " pod="openshift-must-gather-vfwzj/must-gather-htgmv"
Nov 25 09:36:01 crc kubenswrapper[5043]: I1125 09:36:01.777282 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/998025b2-ce6c-47f2-983d-a5f4215c1bd9-must-gather-output\") pod \"must-gather-htgmv\" (UID: \"998025b2-ce6c-47f2-983d-a5f4215c1bd9\") " pod="openshift-must-gather-vfwzj/must-gather-htgmv"
Nov 25 09:36:01 crc kubenswrapper[5043]: I1125 09:36:01.778062 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/998025b2-ce6c-47f2-983d-a5f4215c1bd9-must-gather-output\") pod \"must-gather-htgmv\" (UID: \"998025b2-ce6c-47f2-983d-a5f4215c1bd9\") " pod="openshift-must-gather-vfwzj/must-gather-htgmv"
Nov 25 09:36:01 crc kubenswrapper[5043]: I1125 09:36:01.800320 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkrqb\" (UniqueName: \"kubernetes.io/projected/998025b2-ce6c-47f2-983d-a5f4215c1bd9-kube-api-access-xkrqb\") pod \"must-gather-htgmv\" (UID: \"998025b2-ce6c-47f2-983d-a5f4215c1bd9\") " pod="openshift-must-gather-vfwzj/must-gather-htgmv"
Nov 25 09:36:01 crc kubenswrapper[5043]: I1125 09:36:01.848015 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vfwzj/must-gather-htgmv"
Nov 25 09:36:02 crc kubenswrapper[5043]: I1125 09:36:02.349921 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vfwzj/must-gather-htgmv"]
Nov 25 09:36:02 crc kubenswrapper[5043]: I1125 09:36:02.629902 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vfwzj/must-gather-htgmv" event={"ID":"998025b2-ce6c-47f2-983d-a5f4215c1bd9","Type":"ContainerStarted","Data":"f91a70e4f6673f41faf9985f75e86f6a54d3c83bba45aa5fae9d77dc995ce8d6"}
Nov 25 09:36:09 crc kubenswrapper[5043]: I1125 09:36:09.714093 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vfwzj/must-gather-htgmv" event={"ID":"998025b2-ce6c-47f2-983d-a5f4215c1bd9","Type":"ContainerStarted","Data":"901669183c4d36d2f56df153ae4a97f45017ed2ee783a0aa9cba1de0c8c4fb1d"}
Nov 25 09:36:09 crc kubenswrapper[5043]: I1125 09:36:09.714625 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vfwzj/must-gather-htgmv" event={"ID":"998025b2-ce6c-47f2-983d-a5f4215c1bd9","Type":"ContainerStarted","Data":"61fc30633e99fa438a72b954f1d0a3008f82f577f30e0edb9284ceeeb4b2f6d4"}
Nov 25 09:36:09 crc kubenswrapper[5043]: I1125 09:36:09.731578 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vfwzj/must-gather-htgmv" podStartSLOduration=2.164817986 podStartE2EDuration="8.731559002s" podCreationTimestamp="2025-11-25 09:36:01 +0000 UTC" firstStartedPulling="2025-11-25 09:36:02.345665599 +0000 UTC m=+8426.513861320" lastFinishedPulling="2025-11-25 09:36:08.912406615 +0000 UTC m=+8433.080602336" observedRunningTime="2025-11-25 09:36:09.728084158 +0000 UTC m=+8433.896279899" watchObservedRunningTime="2025-11-25 09:36:09.731559002 +0000 UTC m=+8433.899754723"
Nov 25 09:36:17 crc kubenswrapper[5043]: I1125 09:36:17.281090 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 09:36:17 crc kubenswrapper[5043]: I1125 09:36:17.281697 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 09:36:18 crc kubenswrapper[5043]: I1125 09:36:18.901064 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vfwzj/crc-debug-28d72"]
Nov 25 09:36:18 crc kubenswrapper[5043]: I1125 09:36:18.903186 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vfwzj/crc-debug-28d72"
Nov 25 09:36:19 crc kubenswrapper[5043]: I1125 09:36:19.068943 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5kwq\" (UniqueName: \"kubernetes.io/projected/0f06b2c3-b498-493e-b980-e10a43d64efd-kube-api-access-k5kwq\") pod \"crc-debug-28d72\" (UID: \"0f06b2c3-b498-493e-b980-e10a43d64efd\") " pod="openshift-must-gather-vfwzj/crc-debug-28d72"
Nov 25 09:36:19 crc kubenswrapper[5043]: I1125 09:36:19.069456 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f06b2c3-b498-493e-b980-e10a43d64efd-host\") pod \"crc-debug-28d72\" (UID: \"0f06b2c3-b498-493e-b980-e10a43d64efd\") " pod="openshift-must-gather-vfwzj/crc-debug-28d72"
Nov 25 09:36:19 crc kubenswrapper[5043]: I1125 09:36:19.170886 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5kwq\" (UniqueName: \"kubernetes.io/projected/0f06b2c3-b498-493e-b980-e10a43d64efd-kube-api-access-k5kwq\") pod \"crc-debug-28d72\" (UID: \"0f06b2c3-b498-493e-b980-e10a43d64efd\") " pod="openshift-must-gather-vfwzj/crc-debug-28d72"
Nov 25 09:36:19 crc kubenswrapper[5043]: I1125 09:36:19.171028 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f06b2c3-b498-493e-b980-e10a43d64efd-host\") pod \"crc-debug-28d72\" (UID: \"0f06b2c3-b498-493e-b980-e10a43d64efd\") " pod="openshift-must-gather-vfwzj/crc-debug-28d72"
Nov 25 09:36:19 crc kubenswrapper[5043]: I1125 09:36:19.171159 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f06b2c3-b498-493e-b980-e10a43d64efd-host\") pod \"crc-debug-28d72\" (UID: \"0f06b2c3-b498-493e-b980-e10a43d64efd\") " pod="openshift-must-gather-vfwzj/crc-debug-28d72"
Nov 25 09:36:19 crc kubenswrapper[5043]: I1125 09:36:19.204738 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5kwq\" (UniqueName: \"kubernetes.io/projected/0f06b2c3-b498-493e-b980-e10a43d64efd-kube-api-access-k5kwq\") pod \"crc-debug-28d72\" (UID: \"0f06b2c3-b498-493e-b980-e10a43d64efd\") " pod="openshift-must-gather-vfwzj/crc-debug-28d72"
Nov 25 09:36:19 crc kubenswrapper[5043]: I1125 09:36:19.225739 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vfwzj/crc-debug-28d72"
Nov 25 09:36:19 crc kubenswrapper[5043]: I1125 09:36:19.833794 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vfwzj/crc-debug-28d72" event={"ID":"0f06b2c3-b498-493e-b980-e10a43d64efd","Type":"ContainerStarted","Data":"7e646a60d53f05aa8c865692eacb4aa067424d282be0dccf6adb498727559a2a"}
Nov 25 09:36:31 crc kubenswrapper[5043]: I1125 09:36:31.962678 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vfwzj/crc-debug-28d72" event={"ID":"0f06b2c3-b498-493e-b980-e10a43d64efd","Type":"ContainerStarted","Data":"5c626ea825603a51266732c9c385323a81b2bd2ba5f2b3a43f8bed9993bb361d"}
Nov 25 09:36:47 crc kubenswrapper[5043]: I1125 09:36:47.275884 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 09:36:47 crc kubenswrapper[5043]: I1125 09:36:47.276526 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 09:36:47 crc kubenswrapper[5043]: I1125 09:36:47.276572 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx"
Nov 25 09:36:47 crc kubenswrapper[5043]: I1125 09:36:47.277353 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880"} 
pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 09:36:47 crc kubenswrapper[5043]: I1125 09:36:47.277424 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" gracePeriod=600 Nov 25 09:36:51 crc kubenswrapper[5043]: I1125 09:36:51.141730 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" exitCode=0 Nov 25 09:36:51 crc kubenswrapper[5043]: I1125 09:36:51.141802 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880"} Nov 25 09:36:51 crc kubenswrapper[5043]: I1125 09:36:51.143447 5043 scope.go:117] "RemoveContainer" containerID="c03b8cdf1887b6f6e3c18a315ebbcdbe865cf251968052a9886eadd6550871b9" Nov 25 09:36:53 crc kubenswrapper[5043]: E1125 09:36:53.140007 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:36:53 crc kubenswrapper[5043]: I1125 09:36:53.218545 5043 scope.go:117] "RemoveContainer" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 
25 09:36:53 crc kubenswrapper[5043]: E1125 09:36:53.218858 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:36:53 crc kubenswrapper[5043]: I1125 09:36:53.268529 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vfwzj/crc-debug-28d72" podStartSLOduration=23.007462819 podStartE2EDuration="35.268511556s" podCreationTimestamp="2025-11-25 09:36:18 +0000 UTC" firstStartedPulling="2025-11-25 09:36:19.299980726 +0000 UTC m=+8443.468176447" lastFinishedPulling="2025-11-25 09:36:31.561029463 +0000 UTC m=+8455.729225184" observedRunningTime="2025-11-25 09:36:32.998404217 +0000 UTC m=+8457.166599948" watchObservedRunningTime="2025-11-25 09:36:53.268511556 +0000 UTC m=+8477.436707277" Nov 25 09:36:57 crc kubenswrapper[5043]: E1125 09:36:57.963098 5043 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 09:37:04 crc kubenswrapper[5043]: I1125 09:37:04.962212 5043 scope.go:117] "RemoveContainer" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 25 09:37:04 crc kubenswrapper[5043]: E1125 09:37:04.962945 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:37:19 crc kubenswrapper[5043]: I1125 09:37:19.963408 5043 scope.go:117] "RemoveContainer" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 25 09:37:19 crc kubenswrapper[5043]: E1125 09:37:19.964329 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:37:33 crc kubenswrapper[5043]: I1125 09:37:33.962942 5043 scope.go:117] "RemoveContainer" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 25 09:37:33 crc kubenswrapper[5043]: E1125 09:37:33.965100 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:37:43 crc kubenswrapper[5043]: I1125 09:37:43.742787 5043 generic.go:334] "Generic (PLEG): container finished" podID="0f06b2c3-b498-493e-b980-e10a43d64efd" containerID="5c626ea825603a51266732c9c385323a81b2bd2ba5f2b3a43f8bed9993bb361d" exitCode=0 Nov 25 09:37:43 crc kubenswrapper[5043]: I1125 09:37:43.742880 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vfwzj/crc-debug-28d72" 
event={"ID":"0f06b2c3-b498-493e-b980-e10a43d64efd","Type":"ContainerDied","Data":"5c626ea825603a51266732c9c385323a81b2bd2ba5f2b3a43f8bed9993bb361d"} Nov 25 09:37:44 crc kubenswrapper[5043]: I1125 09:37:44.879694 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vfwzj/crc-debug-28d72" Nov 25 09:37:44 crc kubenswrapper[5043]: I1125 09:37:44.910536 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vfwzj/crc-debug-28d72"] Nov 25 09:37:44 crc kubenswrapper[5043]: I1125 09:37:44.919610 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vfwzj/crc-debug-28d72"] Nov 25 09:37:44 crc kubenswrapper[5043]: I1125 09:37:44.950666 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5kwq\" (UniqueName: \"kubernetes.io/projected/0f06b2c3-b498-493e-b980-e10a43d64efd-kube-api-access-k5kwq\") pod \"0f06b2c3-b498-493e-b980-e10a43d64efd\" (UID: \"0f06b2c3-b498-493e-b980-e10a43d64efd\") " Nov 25 09:37:44 crc kubenswrapper[5043]: I1125 09:37:44.950721 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f06b2c3-b498-493e-b980-e10a43d64efd-host\") pod \"0f06b2c3-b498-493e-b980-e10a43d64efd\" (UID: \"0f06b2c3-b498-493e-b980-e10a43d64efd\") " Nov 25 09:37:44 crc kubenswrapper[5043]: I1125 09:37:44.951079 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f06b2c3-b498-493e-b980-e10a43d64efd-host" (OuterVolumeSpecName: "host") pod "0f06b2c3-b498-493e-b980-e10a43d64efd" (UID: "0f06b2c3-b498-493e-b980-e10a43d64efd"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 09:37:44 crc kubenswrapper[5043]: I1125 09:37:44.951309 5043 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f06b2c3-b498-493e-b980-e10a43d64efd-host\") on node \"crc\" DevicePath \"\"" Nov 25 09:37:44 crc kubenswrapper[5043]: I1125 09:37:44.956584 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f06b2c3-b498-493e-b980-e10a43d64efd-kube-api-access-k5kwq" (OuterVolumeSpecName: "kube-api-access-k5kwq") pod "0f06b2c3-b498-493e-b980-e10a43d64efd" (UID: "0f06b2c3-b498-493e-b980-e10a43d64efd"). InnerVolumeSpecName "kube-api-access-k5kwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:37:44 crc kubenswrapper[5043]: I1125 09:37:44.972912 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f06b2c3-b498-493e-b980-e10a43d64efd" path="/var/lib/kubelet/pods/0f06b2c3-b498-493e-b980-e10a43d64efd/volumes" Nov 25 09:37:45 crc kubenswrapper[5043]: I1125 09:37:45.053919 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5kwq\" (UniqueName: \"kubernetes.io/projected/0f06b2c3-b498-493e-b980-e10a43d64efd-kube-api-access-k5kwq\") on node \"crc\" DevicePath \"\"" Nov 25 09:37:45 crc kubenswrapper[5043]: I1125 09:37:45.767373 5043 scope.go:117] "RemoveContainer" containerID="5c626ea825603a51266732c9c385323a81b2bd2ba5f2b3a43f8bed9993bb361d" Nov 25 09:37:45 crc kubenswrapper[5043]: I1125 09:37:45.767579 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vfwzj/crc-debug-28d72" Nov 25 09:37:46 crc kubenswrapper[5043]: I1125 09:37:46.095084 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vfwzj/crc-debug-p5m4m"] Nov 25 09:37:46 crc kubenswrapper[5043]: E1125 09:37:46.096038 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f06b2c3-b498-493e-b980-e10a43d64efd" containerName="container-00" Nov 25 09:37:46 crc kubenswrapper[5043]: I1125 09:37:46.096053 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f06b2c3-b498-493e-b980-e10a43d64efd" containerName="container-00" Nov 25 09:37:46 crc kubenswrapper[5043]: I1125 09:37:46.096297 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f06b2c3-b498-493e-b980-e10a43d64efd" containerName="container-00" Nov 25 09:37:46 crc kubenswrapper[5043]: I1125 09:37:46.097352 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vfwzj/crc-debug-p5m4m" Nov 25 09:37:46 crc kubenswrapper[5043]: I1125 09:37:46.195266 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdk5w\" (UniqueName: \"kubernetes.io/projected/bf4a9e89-7ba7-4253-9856-d2614fe92fb7-kube-api-access-bdk5w\") pod \"crc-debug-p5m4m\" (UID: \"bf4a9e89-7ba7-4253-9856-d2614fe92fb7\") " pod="openshift-must-gather-vfwzj/crc-debug-p5m4m" Nov 25 09:37:46 crc kubenswrapper[5043]: I1125 09:37:46.195321 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf4a9e89-7ba7-4253-9856-d2614fe92fb7-host\") pod \"crc-debug-p5m4m\" (UID: \"bf4a9e89-7ba7-4253-9856-d2614fe92fb7\") " pod="openshift-must-gather-vfwzj/crc-debug-p5m4m" Nov 25 09:37:46 crc kubenswrapper[5043]: I1125 09:37:46.297946 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdk5w\" (UniqueName: 
\"kubernetes.io/projected/bf4a9e89-7ba7-4253-9856-d2614fe92fb7-kube-api-access-bdk5w\") pod \"crc-debug-p5m4m\" (UID: \"bf4a9e89-7ba7-4253-9856-d2614fe92fb7\") " pod="openshift-must-gather-vfwzj/crc-debug-p5m4m" Nov 25 09:37:46 crc kubenswrapper[5043]: I1125 09:37:46.297985 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf4a9e89-7ba7-4253-9856-d2614fe92fb7-host\") pod \"crc-debug-p5m4m\" (UID: \"bf4a9e89-7ba7-4253-9856-d2614fe92fb7\") " pod="openshift-must-gather-vfwzj/crc-debug-p5m4m" Nov 25 09:37:46 crc kubenswrapper[5043]: I1125 09:37:46.298104 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf4a9e89-7ba7-4253-9856-d2614fe92fb7-host\") pod \"crc-debug-p5m4m\" (UID: \"bf4a9e89-7ba7-4253-9856-d2614fe92fb7\") " pod="openshift-must-gather-vfwzj/crc-debug-p5m4m" Nov 25 09:37:46 crc kubenswrapper[5043]: I1125 09:37:46.320556 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdk5w\" (UniqueName: \"kubernetes.io/projected/bf4a9e89-7ba7-4253-9856-d2614fe92fb7-kube-api-access-bdk5w\") pod \"crc-debug-p5m4m\" (UID: \"bf4a9e89-7ba7-4253-9856-d2614fe92fb7\") " pod="openshift-must-gather-vfwzj/crc-debug-p5m4m" Nov 25 09:37:46 crc kubenswrapper[5043]: I1125 09:37:46.417091 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vfwzj/crc-debug-p5m4m" Nov 25 09:37:46 crc kubenswrapper[5043]: I1125 09:37:46.781253 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vfwzj/crc-debug-p5m4m" event={"ID":"bf4a9e89-7ba7-4253-9856-d2614fe92fb7","Type":"ContainerStarted","Data":"2eae8380b688d2219bda1b06c099dec0809c947930dd32856501dfdaaf504e32"} Nov 25 09:37:46 crc kubenswrapper[5043]: I1125 09:37:46.781644 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vfwzj/crc-debug-p5m4m" event={"ID":"bf4a9e89-7ba7-4253-9856-d2614fe92fb7","Type":"ContainerStarted","Data":"83ff54020744151cd4d3f86623affcf28e287482eefbcfdbbca65b1d26460c46"} Nov 25 09:37:46 crc kubenswrapper[5043]: I1125 09:37:46.805432 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vfwzj/crc-debug-p5m4m" podStartSLOduration=0.805399467 podStartE2EDuration="805.399467ms" podCreationTimestamp="2025-11-25 09:37:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 09:37:46.797270377 +0000 UTC m=+8530.965466108" watchObservedRunningTime="2025-11-25 09:37:46.805399467 +0000 UTC m=+8530.973595188" Nov 25 09:37:46 crc kubenswrapper[5043]: I1125 09:37:46.968851 5043 scope.go:117] "RemoveContainer" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 25 09:37:46 crc kubenswrapper[5043]: E1125 09:37:46.969142 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:37:47 crc 
kubenswrapper[5043]: I1125 09:37:47.796760 5043 generic.go:334] "Generic (PLEG): container finished" podID="bf4a9e89-7ba7-4253-9856-d2614fe92fb7" containerID="2eae8380b688d2219bda1b06c099dec0809c947930dd32856501dfdaaf504e32" exitCode=0 Nov 25 09:37:47 crc kubenswrapper[5043]: I1125 09:37:47.796859 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vfwzj/crc-debug-p5m4m" event={"ID":"bf4a9e89-7ba7-4253-9856-d2614fe92fb7","Type":"ContainerDied","Data":"2eae8380b688d2219bda1b06c099dec0809c947930dd32856501dfdaaf504e32"} Nov 25 09:37:48 crc kubenswrapper[5043]: I1125 09:37:48.899345 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vfwzj/crc-debug-p5m4m" Nov 25 09:37:49 crc kubenswrapper[5043]: I1125 09:37:49.044992 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf4a9e89-7ba7-4253-9856-d2614fe92fb7-host\") pod \"bf4a9e89-7ba7-4253-9856-d2614fe92fb7\" (UID: \"bf4a9e89-7ba7-4253-9856-d2614fe92fb7\") " Nov 25 09:37:49 crc kubenswrapper[5043]: I1125 09:37:49.045145 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf4a9e89-7ba7-4253-9856-d2614fe92fb7-host" (OuterVolumeSpecName: "host") pod "bf4a9e89-7ba7-4253-9856-d2614fe92fb7" (UID: "bf4a9e89-7ba7-4253-9856-d2614fe92fb7"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 09:37:49 crc kubenswrapper[5043]: I1125 09:37:49.045163 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdk5w\" (UniqueName: \"kubernetes.io/projected/bf4a9e89-7ba7-4253-9856-d2614fe92fb7-kube-api-access-bdk5w\") pod \"bf4a9e89-7ba7-4253-9856-d2614fe92fb7\" (UID: \"bf4a9e89-7ba7-4253-9856-d2614fe92fb7\") " Nov 25 09:37:49 crc kubenswrapper[5043]: I1125 09:37:49.046709 5043 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf4a9e89-7ba7-4253-9856-d2614fe92fb7-host\") on node \"crc\" DevicePath \"\"" Nov 25 09:37:49 crc kubenswrapper[5043]: I1125 09:37:49.053642 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf4a9e89-7ba7-4253-9856-d2614fe92fb7-kube-api-access-bdk5w" (OuterVolumeSpecName: "kube-api-access-bdk5w") pod "bf4a9e89-7ba7-4253-9856-d2614fe92fb7" (UID: "bf4a9e89-7ba7-4253-9856-d2614fe92fb7"). InnerVolumeSpecName "kube-api-access-bdk5w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:37:49 crc kubenswrapper[5043]: I1125 09:37:49.148211 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdk5w\" (UniqueName: \"kubernetes.io/projected/bf4a9e89-7ba7-4253-9856-d2614fe92fb7-kube-api-access-bdk5w\") on node \"crc\" DevicePath \"\"" Nov 25 09:37:49 crc kubenswrapper[5043]: I1125 09:37:49.694240 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vfwzj/crc-debug-p5m4m"] Nov 25 09:37:49 crc kubenswrapper[5043]: I1125 09:37:49.708301 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vfwzj/crc-debug-p5m4m"] Nov 25 09:37:49 crc kubenswrapper[5043]: I1125 09:37:49.827943 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83ff54020744151cd4d3f86623affcf28e287482eefbcfdbbca65b1d26460c46" Nov 25 09:37:49 crc kubenswrapper[5043]: I1125 09:37:49.828071 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vfwzj/crc-debug-p5m4m" Nov 25 09:37:50 crc kubenswrapper[5043]: I1125 09:37:50.973225 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf4a9e89-7ba7-4253-9856-d2614fe92fb7" path="/var/lib/kubelet/pods/bf4a9e89-7ba7-4253-9856-d2614fe92fb7/volumes" Nov 25 09:37:50 crc kubenswrapper[5043]: I1125 09:37:50.973991 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vfwzj/crc-debug-pp7kz"] Nov 25 09:37:50 crc kubenswrapper[5043]: E1125 09:37:50.974270 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf4a9e89-7ba7-4253-9856-d2614fe92fb7" containerName="container-00" Nov 25 09:37:50 crc kubenswrapper[5043]: I1125 09:37:50.974280 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf4a9e89-7ba7-4253-9856-d2614fe92fb7" containerName="container-00" Nov 25 09:37:50 crc kubenswrapper[5043]: I1125 09:37:50.974481 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf4a9e89-7ba7-4253-9856-d2614fe92fb7" containerName="container-00" Nov 25 09:37:50 crc kubenswrapper[5043]: I1125 09:37:50.975117 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vfwzj/crc-debug-pp7kz" Nov 25 09:37:51 crc kubenswrapper[5043]: I1125 09:37:51.089526 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f884v\" (UniqueName: \"kubernetes.io/projected/1c2f85a5-03b7-40f7-aad8-dabb53ffaee3-kube-api-access-f884v\") pod \"crc-debug-pp7kz\" (UID: \"1c2f85a5-03b7-40f7-aad8-dabb53ffaee3\") " pod="openshift-must-gather-vfwzj/crc-debug-pp7kz" Nov 25 09:37:51 crc kubenswrapper[5043]: I1125 09:37:51.089936 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c2f85a5-03b7-40f7-aad8-dabb53ffaee3-host\") pod \"crc-debug-pp7kz\" (UID: \"1c2f85a5-03b7-40f7-aad8-dabb53ffaee3\") " pod="openshift-must-gather-vfwzj/crc-debug-pp7kz" Nov 25 09:37:51 crc kubenswrapper[5043]: I1125 09:37:51.191774 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f884v\" (UniqueName: \"kubernetes.io/projected/1c2f85a5-03b7-40f7-aad8-dabb53ffaee3-kube-api-access-f884v\") pod \"crc-debug-pp7kz\" (UID: \"1c2f85a5-03b7-40f7-aad8-dabb53ffaee3\") " pod="openshift-must-gather-vfwzj/crc-debug-pp7kz" Nov 25 09:37:51 crc kubenswrapper[5043]: I1125 09:37:51.191961 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c2f85a5-03b7-40f7-aad8-dabb53ffaee3-host\") pod \"crc-debug-pp7kz\" (UID: \"1c2f85a5-03b7-40f7-aad8-dabb53ffaee3\") " pod="openshift-must-gather-vfwzj/crc-debug-pp7kz" Nov 25 09:37:51 crc kubenswrapper[5043]: I1125 09:37:51.192060 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c2f85a5-03b7-40f7-aad8-dabb53ffaee3-host\") pod \"crc-debug-pp7kz\" (UID: \"1c2f85a5-03b7-40f7-aad8-dabb53ffaee3\") " pod="openshift-must-gather-vfwzj/crc-debug-pp7kz" Nov 25 09:37:51 crc 
kubenswrapper[5043]: I1125 09:37:51.214577 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f884v\" (UniqueName: \"kubernetes.io/projected/1c2f85a5-03b7-40f7-aad8-dabb53ffaee3-kube-api-access-f884v\") pod \"crc-debug-pp7kz\" (UID: \"1c2f85a5-03b7-40f7-aad8-dabb53ffaee3\") " pod="openshift-must-gather-vfwzj/crc-debug-pp7kz" Nov 25 09:37:51 crc kubenswrapper[5043]: I1125 09:37:51.292396 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vfwzj/crc-debug-pp7kz" Nov 25 09:37:51 crc kubenswrapper[5043]: W1125 09:37:51.329194 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c2f85a5_03b7_40f7_aad8_dabb53ffaee3.slice/crio-0aa82e3083b87e3324af3993d897db09e5858ba3bab5f25a3c40c5f8c62ee434 WatchSource:0}: Error finding container 0aa82e3083b87e3324af3993d897db09e5858ba3bab5f25a3c40c5f8c62ee434: Status 404 returned error can't find the container with id 0aa82e3083b87e3324af3993d897db09e5858ba3bab5f25a3c40c5f8c62ee434 Nov 25 09:37:51 crc kubenswrapper[5043]: I1125 09:37:51.868725 5043 generic.go:334] "Generic (PLEG): container finished" podID="1c2f85a5-03b7-40f7-aad8-dabb53ffaee3" containerID="d8e597ba1756935a0465368db3d557f9750656487cacd3d124c8bc664c6836e9" exitCode=0 Nov 25 09:37:51 crc kubenswrapper[5043]: I1125 09:37:51.868820 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vfwzj/crc-debug-pp7kz" event={"ID":"1c2f85a5-03b7-40f7-aad8-dabb53ffaee3","Type":"ContainerDied","Data":"d8e597ba1756935a0465368db3d557f9750656487cacd3d124c8bc664c6836e9"} Nov 25 09:37:51 crc kubenswrapper[5043]: I1125 09:37:51.869467 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vfwzj/crc-debug-pp7kz" event={"ID":"1c2f85a5-03b7-40f7-aad8-dabb53ffaee3","Type":"ContainerStarted","Data":"0aa82e3083b87e3324af3993d897db09e5858ba3bab5f25a3c40c5f8c62ee434"} Nov 25 
09:37:51 crc kubenswrapper[5043]: I1125 09:37:51.908007 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vfwzj/crc-debug-pp7kz"] Nov 25 09:37:51 crc kubenswrapper[5043]: I1125 09:37:51.917346 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vfwzj/crc-debug-pp7kz"] Nov 25 09:37:52 crc kubenswrapper[5043]: I1125 09:37:52.983485 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vfwzj/crc-debug-pp7kz" Nov 25 09:37:53 crc kubenswrapper[5043]: I1125 09:37:53.130810 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c2f85a5-03b7-40f7-aad8-dabb53ffaee3-host\") pod \"1c2f85a5-03b7-40f7-aad8-dabb53ffaee3\" (UID: \"1c2f85a5-03b7-40f7-aad8-dabb53ffaee3\") " Nov 25 09:37:53 crc kubenswrapper[5043]: I1125 09:37:53.131186 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c2f85a5-03b7-40f7-aad8-dabb53ffaee3-host" (OuterVolumeSpecName: "host") pod "1c2f85a5-03b7-40f7-aad8-dabb53ffaee3" (UID: "1c2f85a5-03b7-40f7-aad8-dabb53ffaee3"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 09:37:53 crc kubenswrapper[5043]: I1125 09:37:53.131311 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f884v\" (UniqueName: \"kubernetes.io/projected/1c2f85a5-03b7-40f7-aad8-dabb53ffaee3-kube-api-access-f884v\") pod \"1c2f85a5-03b7-40f7-aad8-dabb53ffaee3\" (UID: \"1c2f85a5-03b7-40f7-aad8-dabb53ffaee3\") " Nov 25 09:37:53 crc kubenswrapper[5043]: I1125 09:37:53.132057 5043 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1c2f85a5-03b7-40f7-aad8-dabb53ffaee3-host\") on node \"crc\" DevicePath \"\"" Nov 25 09:37:53 crc kubenswrapper[5043]: I1125 09:37:53.139038 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c2f85a5-03b7-40f7-aad8-dabb53ffaee3-kube-api-access-f884v" (OuterVolumeSpecName: "kube-api-access-f884v") pod "1c2f85a5-03b7-40f7-aad8-dabb53ffaee3" (UID: "1c2f85a5-03b7-40f7-aad8-dabb53ffaee3"). InnerVolumeSpecName "kube-api-access-f884v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:37:53 crc kubenswrapper[5043]: I1125 09:37:53.233325 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f884v\" (UniqueName: \"kubernetes.io/projected/1c2f85a5-03b7-40f7-aad8-dabb53ffaee3-kube-api-access-f884v\") on node \"crc\" DevicePath \"\"" Nov 25 09:37:53 crc kubenswrapper[5043]: I1125 09:37:53.895075 5043 scope.go:117] "RemoveContainer" containerID="d8e597ba1756935a0465368db3d557f9750656487cacd3d124c8bc664c6836e9" Nov 25 09:37:53 crc kubenswrapper[5043]: I1125 09:37:53.895126 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vfwzj/crc-debug-pp7kz" Nov 25 09:37:54 crc kubenswrapper[5043]: I1125 09:37:54.975200 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c2f85a5-03b7-40f7-aad8-dabb53ffaee3" path="/var/lib/kubelet/pods/1c2f85a5-03b7-40f7-aad8-dabb53ffaee3/volumes" Nov 25 09:38:00 crc kubenswrapper[5043]: I1125 09:38:00.963038 5043 scope.go:117] "RemoveContainer" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 25 09:38:00 crc kubenswrapper[5043]: E1125 09:38:00.963789 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:38:12 crc kubenswrapper[5043]: I1125 09:38:12.313462 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ansibletest-ansibletest_3a0103c7-8a95-4675-921c-1b9b4f295df8/ansibletest-ansibletest/0.log" Nov 25 09:38:12 crc kubenswrapper[5043]: I1125 09:38:12.515460 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7d74b989db-9zq82_6c6eae10-5480-4b54-8cf5-1fd717d00c0e/barbican-api/0.log" Nov 25 09:38:12 crc kubenswrapper[5043]: I1125 09:38:12.522236 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7d74b989db-9zq82_6c6eae10-5480-4b54-8cf5-1fd717d00c0e/barbican-api-log/0.log" Nov 25 09:38:12 crc kubenswrapper[5043]: I1125 09:38:12.661872 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-54895cb446-cqmz8_f63f14b8-9c07-4267-aaa0-ceac1d775c2c/barbican-keystone-listener/0.log" Nov 25 09:38:12 crc kubenswrapper[5043]: I1125 09:38:12.790648 5043 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6b74d9cbc5-zq8tk_e706e93d-1fc1-4969-b8ba-5ff803545131/barbican-worker/0.log" Nov 25 09:38:12 crc kubenswrapper[5043]: I1125 09:38:12.936843 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6b74d9cbc5-zq8tk_e706e93d-1fc1-4969-b8ba-5ff803545131/barbican-worker-log/0.log" Nov 25 09:38:12 crc kubenswrapper[5043]: I1125 09:38:12.964395 5043 scope.go:117] "RemoveContainer" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 25 09:38:12 crc kubenswrapper[5043]: E1125 09:38:12.964722 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:38:13 crc kubenswrapper[5043]: I1125 09:38:13.084112 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq_cd5db81c-0d4f-4c55-9539-203619adfac7/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:38:13 crc kubenswrapper[5043]: I1125 09:38:13.285439 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-54895cb446-cqmz8_f63f14b8-9c07-4267-aaa0-ceac1d775c2c/barbican-keystone-listener-log/0.log" Nov 25 09:38:13 crc kubenswrapper[5043]: I1125 09:38:13.317480 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_33767a71-28b3-4d66-9f8a-4723e69cf860/ceilometer-central-agent/0.log" Nov 25 09:38:13 crc kubenswrapper[5043]: I1125 09:38:13.336505 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_33767a71-28b3-4d66-9f8a-4723e69cf860/ceilometer-notification-agent/0.log" Nov 25 09:38:13 crc kubenswrapper[5043]: I1125 09:38:13.454725 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_33767a71-28b3-4d66-9f8a-4723e69cf860/sg-core/0.log" Nov 25 09:38:13 crc kubenswrapper[5043]: I1125 09:38:13.460547 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_33767a71-28b3-4d66-9f8a-4723e69cf860/proxy-httpd/0.log" Nov 25 09:38:13 crc kubenswrapper[5043]: I1125 09:38:13.566581 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8_99201979-af70-4d71-8e55-23a89ab8c5ab/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:38:13 crc kubenswrapper[5043]: I1125 09:38:13.697959 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm_c283f059-0d72-42e3-bce6-cfdab8692e63/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:38:13 crc kubenswrapper[5043]: I1125 09:38:13.896631 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9afb63aa-ce0f-4365-a4cb-4fd593537095/cinder-api-log/0.log" Nov 25 09:38:13 crc kubenswrapper[5043]: I1125 09:38:13.912702 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9afb63aa-ce0f-4365-a4cb-4fd593537095/cinder-api/0.log" Nov 25 09:38:14 crc kubenswrapper[5043]: I1125 09:38:14.181023 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_850ff79f-0c56-4cc9-be55-a76979fc1ac8/probe/0.log" Nov 25 09:38:14 crc kubenswrapper[5043]: I1125 09:38:14.250641 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_850ff79f-0c56-4cc9-be55-a76979fc1ac8/cinder-backup/0.log" Nov 25 09:38:14 crc kubenswrapper[5043]: I1125 09:38:14.265860 5043 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c99eb43d-cf17-44a4-beeb-f5222c978039/cinder-scheduler/0.log" Nov 25 09:38:14 crc kubenswrapper[5043]: I1125 09:38:14.540071 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_c15deb32-5994-4bf0-bd30-1a309d58f82c/cinder-volume/0.log" Nov 25 09:38:14 crc kubenswrapper[5043]: I1125 09:38:14.550265 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c99eb43d-cf17-44a4-beeb-f5222c978039/probe/0.log" Nov 25 09:38:14 crc kubenswrapper[5043]: I1125 09:38:14.594439 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_c15deb32-5994-4bf0-bd30-1a309d58f82c/probe/0.log" Nov 25 09:38:14 crc kubenswrapper[5043]: I1125 09:38:14.930710 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8_5c2779ae-e706-494e-9b9a-155774a61d31/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:38:15 crc kubenswrapper[5043]: I1125 09:38:15.018041 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-v49d9_067ff64a-f49c-4ca8-8c50-f49e2886a445/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:38:15 crc kubenswrapper[5043]: I1125 09:38:15.165445 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78f48d6b7c-v7lfn_91febcbe-4fc7-4b44-b7e9-d5258e9216b5/init/0.log" Nov 25 09:38:15 crc kubenswrapper[5043]: I1125 09:38:15.431499 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c/glance-httpd/0.log" Nov 25 09:38:15 crc kubenswrapper[5043]: I1125 09:38:15.449807 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-78f48d6b7c-v7lfn_91febcbe-4fc7-4b44-b7e9-d5258e9216b5/init/0.log" Nov 25 09:38:15 crc kubenswrapper[5043]: I1125 09:38:15.533478 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78f48d6b7c-v7lfn_91febcbe-4fc7-4b44-b7e9-d5258e9216b5/dnsmasq-dns/0.log" Nov 25 09:38:15 crc kubenswrapper[5043]: I1125 09:38:15.671555 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c/glance-log/0.log" Nov 25 09:38:15 crc kubenswrapper[5043]: I1125 09:38:15.738198 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_de0f9823-3037-49ba-8bbe-7384b6988f53/glance-httpd/0.log" Nov 25 09:38:15 crc kubenswrapper[5043]: I1125 09:38:15.758799 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_de0f9823-3037-49ba-8bbe-7384b6988f53/glance-log/0.log" Nov 25 09:38:16 crc kubenswrapper[5043]: I1125 09:38:16.129847 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5f67c4b5d4-f96jj_13e8a8ee-bfe8-415b-b76f-89d7d7296659/horizon/0.log" Nov 25 09:38:16 crc kubenswrapper[5043]: I1125 09:38:16.225513 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizontest-tests-horizontest_452efbe7-7e6a-4e2a-8a22-1dfa69176628/horizontest-tests-horizontest/0.log" Nov 25 09:38:16 crc kubenswrapper[5043]: I1125 09:38:16.436058 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t_fc8c648b-1e0d-4b4b-b2f2-96e64441de99/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:38:16 crc kubenswrapper[5043]: I1125 09:38:16.674053 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-zp24v_218f287e-331e-49cc-8099-2791fb43a2ac/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:38:16 crc kubenswrapper[5043]: I1125 09:38:16.854722 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29400961-6qx6s_e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4/keystone-cron/0.log" Nov 25 09:38:17 crc kubenswrapper[5043]: I1125 09:38:17.092262 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29401021-96f9v_ea3e940a-0a27-41f2-8629-e54ee65e138d/keystone-cron/0.log" Nov 25 09:38:17 crc kubenswrapper[5043]: I1125 09:38:17.130986 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_c937dff6-4203-455c-b07a-ec16e23c746f/kube-state-metrics/3.log" Nov 25 09:38:17 crc kubenswrapper[5043]: I1125 09:38:17.266049 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_c937dff6-4203-455c-b07a-ec16e23c746f/kube-state-metrics/2.log" Nov 25 09:38:17 crc kubenswrapper[5043]: I1125 09:38:17.411997 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl_e7416361-3a03-4892-9a17-36934133905d/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:38:17 crc kubenswrapper[5043]: I1125 09:38:17.636418 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5f67c4b5d4-f96jj_13e8a8ee-bfe8-415b-b76f-89d7d7296659/horizon-log/0.log" Nov 25 09:38:17 crc kubenswrapper[5043]: I1125 09:38:17.713282 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_40b1194f-e610-4ee5-a970-281ea03cde81/manila-api-log/0.log" Nov 25 09:38:17 crc kubenswrapper[5043]: I1125 09:38:17.802753 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_40b1194f-e610-4ee5-a970-281ea03cde81/manila-api/0.log" Nov 25 09:38:17 crc 
kubenswrapper[5043]: I1125 09:38:17.934871 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_adca59a7-f49f-443d-9201-bf7951585f6e/probe/0.log" Nov 25 09:38:17 crc kubenswrapper[5043]: I1125 09:38:17.986728 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_adca59a7-f49f-443d-9201-bf7951585f6e/manila-scheduler/0.log" Nov 25 09:38:18 crc kubenswrapper[5043]: I1125 09:38:18.108410 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_fd0c6a41-555c-4d25-8550-8cac7501125f/manila-share/0.log" Nov 25 09:38:18 crc kubenswrapper[5043]: I1125 09:38:18.159717 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_fd0c6a41-555c-4d25-8550-8cac7501125f/probe/0.log" Nov 25 09:38:19 crc kubenswrapper[5043]: I1125 09:38:19.083329 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz_3071ef74-1c72-4b4c-90e7-fee9dc8332e5/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:38:19 crc kubenswrapper[5043]: I1125 09:38:19.833884 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-586d64c99c-q5jk2_6e087724-2bb8-47c4-9687-cd1e82fb5a1f/neutron-httpd/0.log" Nov 25 09:38:20 crc kubenswrapper[5043]: I1125 09:38:20.012993 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-79d9bc7db7-xzxqf_e8934111-2c35-4f5a-8b87-182b3fe54fdb/keystone-api/0.log" Nov 25 09:38:20 crc kubenswrapper[5043]: I1125 09:38:20.837801 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-586d64c99c-q5jk2_6e087724-2bb8-47c4-9687-cd1e82fb5a1f/neutron-api/0.log" Nov 25 09:38:21 crc kubenswrapper[5043]: I1125 09:38:21.007340 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_edfa7421-823f-4292-a033-8227024b3a40/nova-cell0-conductor-conductor/0.log" Nov 25 09:38:21 crc kubenswrapper[5043]: I1125 09:38:21.357361 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_af0b6cee-dd8f-48ce-9b2b-bbc163d66f2a/nova-cell1-conductor-conductor/0.log" Nov 25 09:38:21 crc kubenswrapper[5043]: I1125 09:38:21.838415 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_13b17d1b-5e8d-4b80-a15c-be8d4458cf6f/nova-cell1-novncproxy-novncproxy/0.log" Nov 25 09:38:21 crc kubenswrapper[5043]: I1125 09:38:21.874040 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch_ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:38:22 crc kubenswrapper[5043]: I1125 09:38:22.230452 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_990680d0-bb9d-44b9-a67a-2af274498f7c/nova-metadata-log/0.log" Nov 25 09:38:23 crc kubenswrapper[5043]: I1125 09:38:23.413729 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_be59e894-a929-4498-bee2-cf852ca1ae67/nova-scheduler-scheduler/0.log" Nov 25 09:38:23 crc kubenswrapper[5043]: I1125 09:38:23.777237 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e0bd148f-caab-423f-88d5-45392e63775d/nova-api-log/0.log" Nov 25 09:38:23 crc kubenswrapper[5043]: I1125 09:38:23.858938 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a22a0679-f2ea-46b8-88f5-d010717699d1/mysql-bootstrap/0.log" Nov 25 09:38:24 crc kubenswrapper[5043]: I1125 09:38:24.055296 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a22a0679-f2ea-46b8-88f5-d010717699d1/mysql-bootstrap/0.log" Nov 25 09:38:24 
crc kubenswrapper[5043]: I1125 09:38:24.065628 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a22a0679-f2ea-46b8-88f5-d010717699d1/galera/0.log" Nov 25 09:38:24 crc kubenswrapper[5043]: I1125 09:38:24.335439 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_961b9ca6-9248-485e-9361-1e9bc78e9058/mysql-bootstrap/0.log" Nov 25 09:38:24 crc kubenswrapper[5043]: I1125 09:38:24.591061 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_961b9ca6-9248-485e-9361-1e9bc78e9058/mysql-bootstrap/0.log" Nov 25 09:38:24 crc kubenswrapper[5043]: I1125 09:38:24.623188 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_961b9ca6-9248-485e-9361-1e9bc78e9058/galera/0.log" Nov 25 09:38:24 crc kubenswrapper[5043]: I1125 09:38:24.850583 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_9bcf9848-5bdf-4760-829f-a92e4015ab70/openstackclient/0.log" Nov 25 09:38:24 crc kubenswrapper[5043]: I1125 09:38:24.905572 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e0bd148f-caab-423f-88d5-45392e63775d/nova-api-api/0.log" Nov 25 09:38:25 crc kubenswrapper[5043]: I1125 09:38:25.086803 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-p45r4_b2e5aec9-7403-47d2-ad0d-40765246ed38/openstack-network-exporter/0.log" Nov 25 09:38:25 crc kubenswrapper[5043]: I1125 09:38:25.259303 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-s57wr_8cb5a8c6-ad9b-4b36-8766-e67dd27797f7/ovsdb-server-init/0.log" Nov 25 09:38:25 crc kubenswrapper[5043]: I1125 09:38:25.502544 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-s57wr_8cb5a8c6-ad9b-4b36-8766-e67dd27797f7/ovsdb-server-init/0.log" Nov 25 09:38:25 crc kubenswrapper[5043]: I1125 
09:38:25.508743 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-s57wr_8cb5a8c6-ad9b-4b36-8766-e67dd27797f7/ovs-vswitchd/0.log" Nov 25 09:38:25 crc kubenswrapper[5043]: I1125 09:38:25.542948 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-s57wr_8cb5a8c6-ad9b-4b36-8766-e67dd27797f7/ovsdb-server/0.log" Nov 25 09:38:25 crc kubenswrapper[5043]: I1125 09:38:25.740787 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-pvbbc_4bd061b9-bc56-4e7f-b7eb-d12486d15712/ovn-controller/0.log" Nov 25 09:38:25 crc kubenswrapper[5043]: I1125 09:38:25.962395 5043 scope.go:117] "RemoveContainer" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 25 09:38:25 crc kubenswrapper[5043]: E1125 09:38:25.963007 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:38:25 crc kubenswrapper[5043]: E1125 09:38:25.963565 5043 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 09:38:26 crc kubenswrapper[5043]: I1125 09:38:26.191546 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ds447_cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:38:26 crc kubenswrapper[5043]: I1125 09:38:26.226198 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-northd-0_8a6dd4c5-d75f-4622-ba5e-1da7bfebca23/openstack-network-exporter/0.log" Nov 25 09:38:26 crc kubenswrapper[5043]: I1125 09:38:26.472307 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_568f6d22-7338-4a78-83ac-79125bd64fb9/openstack-network-exporter/0.log" Nov 25 09:38:26 crc kubenswrapper[5043]: I1125 09:38:26.475559 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8a6dd4c5-d75f-4622-ba5e-1da7bfebca23/ovn-northd/0.log" Nov 25 09:38:26 crc kubenswrapper[5043]: I1125 09:38:26.719137 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_610ebd16-bde0-4b4b-acf4-6d15e0324fd6/openstack-network-exporter/0.log" Nov 25 09:38:26 crc kubenswrapper[5043]: I1125 09:38:26.722435 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_568f6d22-7338-4a78-83ac-79125bd64fb9/ovsdbserver-nb/0.log" Nov 25 09:38:26 crc kubenswrapper[5043]: I1125 09:38:26.799063 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_990680d0-bb9d-44b9-a67a-2af274498f7c/nova-metadata-metadata/0.log" Nov 25 09:38:26 crc kubenswrapper[5043]: I1125 09:38:26.945196 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_610ebd16-bde0-4b4b-acf4-6d15e0324fd6/ovsdbserver-sb/0.log" Nov 25 09:38:27 crc kubenswrapper[5043]: I1125 09:38:27.296937 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_96b0381f-3d56-49b8-8a21-0b8c1bd593c2/setup-container/0.log" Nov 25 09:38:27 crc kubenswrapper[5043]: I1125 09:38:27.435796 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_96b0381f-3d56-49b8-8a21-0b8c1bd593c2/setup-container/0.log" Nov 25 09:38:27 crc kubenswrapper[5043]: I1125 09:38:27.516033 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_96b0381f-3d56-49b8-8a21-0b8c1bd593c2/rabbitmq/0.log" Nov 25 09:38:27 crc kubenswrapper[5043]: I1125 09:38:27.606613 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-78c76fd9c4-8nvkz_8ae8142b-9631-41b8-94ea-cad294cf0fbf/placement-api/0.log" Nov 25 09:38:27 crc kubenswrapper[5043]: I1125 09:38:27.750484 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-78c76fd9c4-8nvkz_8ae8142b-9631-41b8-94ea-cad294cf0fbf/placement-log/0.log" Nov 25 09:38:27 crc kubenswrapper[5043]: I1125 09:38:27.843541 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_edfb7fa8-5582-4faa-9cb2-fbdfffa12d18/setup-container/0.log" Nov 25 09:38:28 crc kubenswrapper[5043]: I1125 09:38:28.111541 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_edfb7fa8-5582-4faa-9cb2-fbdfffa12d18/setup-container/0.log" Nov 25 09:38:28 crc kubenswrapper[5043]: I1125 09:38:28.133492 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_edfb7fa8-5582-4faa-9cb2-fbdfffa12d18/rabbitmq/0.log" Nov 25 09:38:28 crc kubenswrapper[5043]: I1125 09:38:28.174062 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq_dd890a18-8d10-41bb-bd31-5e10dc9c3752/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:38:28 crc kubenswrapper[5043]: I1125 09:38:28.342561 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5_ff6ea425-8f08-4513-a444-ff524369c066/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:38:28 crc kubenswrapper[5043]: I1125 09:38:28.418859 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-kd7gq_5552c355-eb5a-4242-b79d-f9e1962c31f1/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:38:28 crc kubenswrapper[5043]: I1125 09:38:28.701845 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-khlgq_63717215-03b7-4e3c-9224-004f5e3b8cfe/ssh-known-hosts-edpm-deployment/0.log" Nov 25 09:38:28 crc kubenswrapper[5043]: I1125 09:38:28.817637 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s00-full_6515f5fe-fd1f-4786-8374-8af7b394831b/tempest-tests-tempest-tests-runner/0.log" Nov 25 09:38:28 crc kubenswrapper[5043]: I1125 09:38:28.996855 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s01-single-test_78329cf6-223a-4efb-9b86-1bc180f80cb1/tempest-tests-tempest-tests-runner/0.log" Nov 25 09:38:29 crc kubenswrapper[5043]: I1125 09:38:29.098158 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-ansibletest-ansibletest-ansibletest_028859ac-e6df-4f39-bd2c-8b884c7c378e/test-operator-logs-container/0.log" Nov 25 09:38:29 crc kubenswrapper[5043]: I1125 09:38:29.273726 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-horizontest-horizontest-tests-horizontest_04e243c7-381a-4239-b3db-881eed1db744/test-operator-logs-container/0.log" Nov 25 09:38:29 crc kubenswrapper[5043]: I1125 09:38:29.568895 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_64c84afc-13c0-4c4e-a82d-e3c9f7014388/test-operator-logs-container/0.log" Nov 25 09:38:29 crc kubenswrapper[5043]: I1125 09:38:29.592552 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tobiko-tobiko-tests-tobiko_9a0d0afc-d78f-4156-8d03-e50b825c0cd0/test-operator-logs-container/0.log" Nov 25 
09:38:29 crc kubenswrapper[5043]: I1125 09:38:29.735832 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tobiko-tests-tobiko-s00-podified-functional_13b66c56-5aa0-42ff-a574-26ec881f2e64/tobiko-tests-tobiko/0.log" Nov 25 09:38:29 crc kubenswrapper[5043]: I1125 09:38:29.842138 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tobiko-tests-tobiko-s01-sanity_d53359ac-2451-479a-bd73-bec83fc39a47/tobiko-tests-tobiko/0.log" Nov 25 09:38:30 crc kubenswrapper[5043]: I1125 09:38:30.154660 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5_f22dd652-56fe-432d-a66f-806586c1c352/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:38:32 crc kubenswrapper[5043]: I1125 09:38:32.845123 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_3077d275-063c-4a4d-97bf-b1b006e32f6f/memcached/0.log" Nov 25 09:38:37 crc kubenswrapper[5043]: I1125 09:38:37.963121 5043 scope.go:117] "RemoveContainer" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 25 09:38:37 crc kubenswrapper[5043]: E1125 09:38:37.963962 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:38:49 crc kubenswrapper[5043]: I1125 09:38:49.963059 5043 scope.go:117] "RemoveContainer" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 25 09:38:49 crc kubenswrapper[5043]: E1125 09:38:49.964058 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:38:52 crc kubenswrapper[5043]: I1125 09:38:52.061270 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-dtcj4_d9a368e6-f4bb-4896-9a2d-f7ceed65e933/kube-rbac-proxy/0.log" Nov 25 09:38:52 crc kubenswrapper[5043]: I1125 09:38:52.114741 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-dtcj4_d9a368e6-f4bb-4896-9a2d-f7ceed65e933/manager/3.log" Nov 25 09:38:52 crc kubenswrapper[5043]: I1125 09:38:52.255950 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-dtcj4_d9a368e6-f4bb-4896-9a2d-f7ceed65e933/manager/2.log" Nov 25 09:38:52 crc kubenswrapper[5043]: I1125 09:38:52.276715 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb_ee5be134-f74b-42f1-b99e-7ec2690c99c4/util/0.log" Nov 25 09:38:52 crc kubenswrapper[5043]: I1125 09:38:52.457225 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb_ee5be134-f74b-42f1-b99e-7ec2690c99c4/util/0.log" Nov 25 09:38:52 crc kubenswrapper[5043]: I1125 09:38:52.470592 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb_ee5be134-f74b-42f1-b99e-7ec2690c99c4/pull/0.log" Nov 25 09:38:52 crc kubenswrapper[5043]: I1125 09:38:52.485698 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb_ee5be134-f74b-42f1-b99e-7ec2690c99c4/pull/0.log" Nov 25 09:38:52 crc kubenswrapper[5043]: I1125 09:38:52.706890 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb_ee5be134-f74b-42f1-b99e-7ec2690c99c4/pull/0.log" Nov 25 09:38:52 crc kubenswrapper[5043]: I1125 09:38:52.712986 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb_ee5be134-f74b-42f1-b99e-7ec2690c99c4/util/0.log" Nov 25 09:38:52 crc kubenswrapper[5043]: I1125 09:38:52.720067 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb_ee5be134-f74b-42f1-b99e-7ec2690c99c4/extract/0.log" Nov 25 09:38:52 crc kubenswrapper[5043]: I1125 09:38:52.877722 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-pnq4k_cdc9a1bf-b6d9-4a36-bcf8-55f87525da45/manager/3.log" Nov 25 09:38:52 crc kubenswrapper[5043]: I1125 09:38:52.886110 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-pnq4k_cdc9a1bf-b6d9-4a36-bcf8-55f87525da45/kube-rbac-proxy/0.log" Nov 25 09:38:52 crc kubenswrapper[5043]: I1125 09:38:52.909574 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-pnq4k_cdc9a1bf-b6d9-4a36-bcf8-55f87525da45/manager/2.log" Nov 25 09:38:53 crc kubenswrapper[5043]: I1125 09:38:53.076250 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-5mp5h_e020a857-3730-44f5-8e98-3e59868fbde6/manager/3.log" Nov 25 09:38:53 crc kubenswrapper[5043]: I1125 
09:38:53.101682 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-5mp5h_e020a857-3730-44f5-8e98-3e59868fbde6/kube-rbac-proxy/0.log" Nov 25 09:38:53 crc kubenswrapper[5043]: I1125 09:38:53.115642 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-5mp5h_e020a857-3730-44f5-8e98-3e59868fbde6/manager/2.log" Nov 25 09:38:53 crc kubenswrapper[5043]: I1125 09:38:53.280517 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-nnpzz_e5c62587-28b4-4a1e-8b73-ee9624ca7163/kube-rbac-proxy/0.log" Nov 25 09:38:53 crc kubenswrapper[5043]: I1125 09:38:53.296648 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-nnpzz_e5c62587-28b4-4a1e-8b73-ee9624ca7163/manager/3.log" Nov 25 09:38:53 crc kubenswrapper[5043]: I1125 09:38:53.328833 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-nnpzz_e5c62587-28b4-4a1e-8b73-ee9624ca7163/manager/2.log" Nov 25 09:38:53 crc kubenswrapper[5043]: I1125 09:38:53.471746 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-l77gb_8a93d5b1-742c-4a37-94ef-a60ffb008520/kube-rbac-proxy/0.log" Nov 25 09:38:53 crc kubenswrapper[5043]: I1125 09:38:53.508342 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-l77gb_8a93d5b1-742c-4a37-94ef-a60ffb008520/manager/3.log" Nov 25 09:38:53 crc kubenswrapper[5043]: I1125 09:38:53.530208 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-l77gb_8a93d5b1-742c-4a37-94ef-a60ffb008520/manager/2.log" Nov 25 
09:38:53 crc kubenswrapper[5043]: I1125 09:38:53.705114 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-wmkmw_c20803a7-e9a9-441a-9e61-84673f3c02e8/kube-rbac-proxy/0.log" Nov 25 09:38:53 crc kubenswrapper[5043]: I1125 09:38:53.714084 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-wmkmw_c20803a7-e9a9-441a-9e61-84673f3c02e8/manager/2.log" Nov 25 09:38:53 crc kubenswrapper[5043]: I1125 09:38:53.742025 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-wmkmw_c20803a7-e9a9-441a-9e61-84673f3c02e8/manager/3.log" Nov 25 09:38:53 crc kubenswrapper[5043]: I1125 09:38:53.883245 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-x8q8x_92e57762-522f-4a9d-8b03-732ba4dad5c1/kube-rbac-proxy/0.log" Nov 25 09:38:53 crc kubenswrapper[5043]: I1125 09:38:53.918562 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-x8q8x_92e57762-522f-4a9d-8b03-732ba4dad5c1/manager/2.log" Nov 25 09:38:53 crc kubenswrapper[5043]: I1125 09:38:53.942713 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-x8q8x_92e57762-522f-4a9d-8b03-732ba4dad5c1/manager/1.log" Nov 25 09:38:54 crc kubenswrapper[5043]: I1125 09:38:54.067432 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-sgz96_b7005e58-64d2-470b-a3e7-22b67b7fbfb3/kube-rbac-proxy/0.log" Nov 25 09:38:54 crc kubenswrapper[5043]: I1125 09:38:54.138046 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-sgz96_b7005e58-64d2-470b-a3e7-22b67b7fbfb3/manager/3.log" Nov 25 09:38:54 crc kubenswrapper[5043]: I1125 09:38:54.139767 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-sgz96_b7005e58-64d2-470b-a3e7-22b67b7fbfb3/manager/2.log" Nov 25 09:38:54 crc kubenswrapper[5043]: I1125 09:38:54.267873 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-gvwj8_ff874d31-8e5a-4c0b-8f9c-e63513a00483/kube-rbac-proxy/0.log" Nov 25 09:38:54 crc kubenswrapper[5043]: I1125 09:38:54.346949 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-gvwj8_ff874d31-8e5a-4c0b-8f9c-e63513a00483/manager/3.log" Nov 25 09:38:54 crc kubenswrapper[5043]: I1125 09:38:54.422617 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-gvwj8_ff874d31-8e5a-4c0b-8f9c-e63513a00483/manager/2.log" Nov 25 09:38:54 crc kubenswrapper[5043]: I1125 09:38:54.537339 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-xx8rb_c924fa47-53fb-4edc-8214-667ba1858ca2/kube-rbac-proxy/0.log" Nov 25 09:38:54 crc kubenswrapper[5043]: I1125 09:38:54.559488 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-xx8rb_c924fa47-53fb-4edc-8214-667ba1858ca2/manager/3.log" Nov 25 09:38:54 crc kubenswrapper[5043]: I1125 09:38:54.607876 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-xx8rb_c924fa47-53fb-4edc-8214-667ba1858ca2/manager/2.log" Nov 25 09:38:54 crc kubenswrapper[5043]: I1125 09:38:54.741099 5043 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-tdzr2_9c9e4471-0205-478a-8717-be36a19d2a02/kube-rbac-proxy/0.log" Nov 25 09:38:54 crc kubenswrapper[5043]: I1125 09:38:54.795253 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-tdzr2_9c9e4471-0205-478a-8717-be36a19d2a02/manager/2.log" Nov 25 09:38:54 crc kubenswrapper[5043]: I1125 09:38:54.885087 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-tdzr2_9c9e4471-0205-478a-8717-be36a19d2a02/manager/1.log" Nov 25 09:38:55 crc kubenswrapper[5043]: I1125 09:38:55.056711 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-l5vz2_bb800a2f-1864-47be-931b-7b99f7c7354f/kube-rbac-proxy/0.log" Nov 25 09:38:55 crc kubenswrapper[5043]: I1125 09:38:55.088286 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-l5vz2_bb800a2f-1864-47be-931b-7b99f7c7354f/manager/2.log" Nov 25 09:38:55 crc kubenswrapper[5043]: I1125 09:38:55.190079 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-l5vz2_bb800a2f-1864-47be-931b-7b99f7c7354f/manager/1.log" Nov 25 09:38:55 crc kubenswrapper[5043]: I1125 09:38:55.240144 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-m9bmz_a3d7b5dc-2ced-4ac6-bdad-cd86342616a8/kube-rbac-proxy/0.log" Nov 25 09:38:55 crc kubenswrapper[5043]: I1125 09:38:55.316422 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-m9bmz_a3d7b5dc-2ced-4ac6-bdad-cd86342616a8/manager/2.log" Nov 25 09:38:55 crc 
kubenswrapper[5043]: I1125 09:38:55.405230 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-m9bmz_a3d7b5dc-2ced-4ac6-bdad-cd86342616a8/manager/1.log" Nov 25 09:38:55 crc kubenswrapper[5043]: I1125 09:38:55.461762 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-dxd2x_020c7247-0b68-419b-b97f-f7b0ea800142/kube-rbac-proxy/0.log" Nov 25 09:38:55 crc kubenswrapper[5043]: I1125 09:38:55.542984 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-dxd2x_020c7247-0b68-419b-b97f-f7b0ea800142/manager/2.log" Nov 25 09:38:55 crc kubenswrapper[5043]: I1125 09:38:55.611419 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-dxd2x_020c7247-0b68-419b-b97f-f7b0ea800142/manager/1.log" Nov 25 09:38:55 crc kubenswrapper[5043]: I1125 09:38:55.681218 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-b58f89467-h9jgk_c0627b3a-26de-453c-ab7f-de79dae6c2fc/kube-rbac-proxy/0.log" Nov 25 09:38:55 crc kubenswrapper[5043]: I1125 09:38:55.727735 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-b58f89467-h9jgk_c0627b3a-26de-453c-ab7f-de79dae6c2fc/manager/1.log" Nov 25 09:38:55 crc kubenswrapper[5043]: I1125 09:38:55.876302 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-b58f89467-h9jgk_c0627b3a-26de-453c-ab7f-de79dae6c2fc/manager/0.log" Nov 25 09:38:55 crc kubenswrapper[5043]: I1125 09:38:55.941357 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7cd5954d9-5zklz_f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4/manager/2.log" Nov 25 09:38:56 crc kubenswrapper[5043]: I1125 09:38:56.013210 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7cd5954d9-5zklz_f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4/manager/1.log" Nov 25 09:38:56 crc kubenswrapper[5043]: I1125 09:38:56.154554 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-54c548f75b-mk6ml_d845a43d-ee06-454f-b68d-cdb949cecffe/operator/0.log" Nov 25 09:38:56 crc kubenswrapper[5043]: I1125 09:38:56.202511 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-54c548f75b-mk6ml_d845a43d-ee06-454f-b68d-cdb949cecffe/operator/1.log" Nov 25 09:38:56 crc kubenswrapper[5043]: I1125 09:38:56.358300 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-vl25g_d14cb4f9-dc65-4999-833a-475d3f735715/registry-server/0.log" Nov 25 09:38:56 crc kubenswrapper[5043]: I1125 09:38:56.428054 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-d5ffq_d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2/kube-rbac-proxy/0.log" Nov 25 09:38:56 crc kubenswrapper[5043]: I1125 09:38:56.499785 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-d5ffq_d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2/manager/1.log" Nov 25 09:38:56 crc kubenswrapper[5043]: I1125 09:38:56.504842 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-d5ffq_d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2/manager/2.log" Nov 25 09:38:56 crc kubenswrapper[5043]: I1125 09:38:56.649872 5043 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-6w2db_869f93a1-d6e7-46ff-a60f-0e997412a2fa/kube-rbac-proxy/0.log" Nov 25 09:38:56 crc kubenswrapper[5043]: I1125 09:38:56.748097 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-6w2db_869f93a1-d6e7-46ff-a60f-0e997412a2fa/manager/1.log" Nov 25 09:38:56 crc kubenswrapper[5043]: I1125 09:38:56.768390 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-6w2db_869f93a1-d6e7-46ff-a60f-0e997412a2fa/manager/2.log" Nov 25 09:38:56 crc kubenswrapper[5043]: I1125 09:38:56.851034 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-fmplr_6411a018-19de-4fba-bf72-6dfd5bd2ce29/operator/2.log" Nov 25 09:38:56 crc kubenswrapper[5043]: I1125 09:38:56.886290 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-fmplr_6411a018-19de-4fba-bf72-6dfd5bd2ce29/operator/3.log" Nov 25 09:38:57 crc kubenswrapper[5043]: I1125 09:38:57.004061 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-gmcsx_8ea2c827-762f-437d-ad30-a3568d7a4af1/kube-rbac-proxy/0.log" Nov 25 09:38:57 crc kubenswrapper[5043]: I1125 09:38:57.006175 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-gmcsx_8ea2c827-762f-437d-ad30-a3568d7a4af1/manager/2.log" Nov 25 09:38:57 crc kubenswrapper[5043]: I1125 09:38:57.036421 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-gmcsx_8ea2c827-762f-437d-ad30-a3568d7a4af1/manager/1.log" Nov 25 09:38:57 crc kubenswrapper[5043]: I1125 09:38:57.189181 
5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-mk7wm_d643e47d-246d-4551-a63c-9b9374e684b2/kube-rbac-proxy/0.log" Nov 25 09:38:57 crc kubenswrapper[5043]: I1125 09:38:57.301192 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-mk7wm_d643e47d-246d-4551-a63c-9b9374e684b2/manager/1.log" Nov 25 09:38:57 crc kubenswrapper[5043]: I1125 09:38:57.306422 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-mk7wm_d643e47d-246d-4551-a63c-9b9374e684b2/manager/2.log" Nov 25 09:38:57 crc kubenswrapper[5043]: I1125 09:38:57.468849 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-556c5c9c9c-82qgw_8cfc66d8-27da-4bce-9a5f-62a019bfd836/manager/0.log" Nov 25 09:38:57 crc kubenswrapper[5043]: I1125 09:38:57.472146 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-556c5c9c9c-82qgw_8cfc66d8-27da-4bce-9a5f-62a019bfd836/kube-rbac-proxy/0.log" Nov 25 09:38:57 crc kubenswrapper[5043]: I1125 09:38:57.501103 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-556c5c9c9c-82qgw_8cfc66d8-27da-4bce-9a5f-62a019bfd836/manager/1.log" Nov 25 09:38:57 crc kubenswrapper[5043]: I1125 09:38:57.572347 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-54g5x_17e00d26-c8ad-4dfd-90df-8705b2cb2bde/kube-rbac-proxy/0.log" Nov 25 09:38:57 crc kubenswrapper[5043]: I1125 09:38:57.680446 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-54g5x_17e00d26-c8ad-4dfd-90df-8705b2cb2bde/manager/1.log" Nov 25 09:38:57 crc 
kubenswrapper[5043]: I1125 09:38:57.751428 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-54g5x_17e00d26-c8ad-4dfd-90df-8705b2cb2bde/manager/2.log" Nov 25 09:39:00 crc kubenswrapper[5043]: I1125 09:39:00.963494 5043 scope.go:117] "RemoveContainer" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 25 09:39:00 crc kubenswrapper[5043]: E1125 09:39:00.964440 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:39:14 crc kubenswrapper[5043]: I1125 09:39:14.963593 5043 scope.go:117] "RemoveContainer" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 25 09:39:14 crc kubenswrapper[5043]: E1125 09:39:14.964368 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:39:15 crc kubenswrapper[5043]: I1125 09:39:15.704486 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-gnr9c_f110819e-9e33-4cf3-85b0-b92eaaaa223b/control-plane-machine-set-operator/0.log" Nov 25 09:39:15 crc kubenswrapper[5043]: I1125 09:39:15.910400 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-l89zw_d00aa552-e700-4bec-9818-3084ac601a92/machine-api-operator/0.log" Nov 25 09:39:15 crc kubenswrapper[5043]: I1125 09:39:15.920682 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-l89zw_d00aa552-e700-4bec-9818-3084ac601a92/kube-rbac-proxy/0.log" Nov 25 09:39:27 crc kubenswrapper[5043]: I1125 09:39:27.963130 5043 scope.go:117] "RemoveContainer" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 25 09:39:27 crc kubenswrapper[5043]: E1125 09:39:27.964211 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:39:28 crc kubenswrapper[5043]: I1125 09:39:28.560149 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-42sd8_00a5ef16-fb0d-4b68-b3aa-92411430aebd/cert-manager-controller/0.log" Nov 25 09:39:28 crc kubenswrapper[5043]: I1125 09:39:28.716030 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-9lxcm_94518a18-995b-490b-8099-917d5e510ad0/cert-manager-cainjector/0.log" Nov 25 09:39:28 crc kubenswrapper[5043]: I1125 09:39:28.764247 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-9lxcm_94518a18-995b-490b-8099-917d5e510ad0/cert-manager-cainjector/1.log" Nov 25 09:39:28 crc kubenswrapper[5043]: I1125 09:39:28.767895 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-qn5hq_6c845b4b-10ec-41bc-8482-14da0da21a03/cert-manager-webhook/0.log" Nov 25 09:39:34 crc kubenswrapper[5043]: E1125 09:39:34.963331 5043 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 09:39:41 crc kubenswrapper[5043]: I1125 09:39:41.266357 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-8rr2k_484bc3f6-cd90-415a-99d9-0496929f73f7/nmstate-console-plugin/0.log" Nov 25 09:39:41 crc kubenswrapper[5043]: I1125 09:39:41.439904 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-ddx45_eca9619f-360b-466d-9413-cce43ac0e5de/nmstate-handler/0.log" Nov 25 09:39:41 crc kubenswrapper[5043]: I1125 09:39:41.475426 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-jcbrg_9cece50c-ecd0-4349-8c5f-26d814c988c0/kube-rbac-proxy/0.log" Nov 25 09:39:41 crc kubenswrapper[5043]: I1125 09:39:41.516386 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-jcbrg_9cece50c-ecd0-4349-8c5f-26d814c988c0/nmstate-metrics/0.log" Nov 25 09:39:41 crc kubenswrapper[5043]: I1125 09:39:41.709380 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-q7kcp_c86d9095-02e1-450f-9d00-b448049035b1/nmstate-operator/0.log" Nov 25 09:39:41 crc kubenswrapper[5043]: I1125 09:39:41.772070 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-rk77g_cfab8bc8-7fd5-4a73-a58a-e92b3ad46845/nmstate-webhook/0.log" Nov 25 09:39:41 crc kubenswrapper[5043]: I1125 09:39:41.964057 5043 scope.go:117] "RemoveContainer" 
containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 25 09:39:41 crc kubenswrapper[5043]: E1125 09:39:41.964349 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:39:56 crc kubenswrapper[5043]: I1125 09:39:56.970390 5043 scope.go:117] "RemoveContainer" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 25 09:39:56 crc kubenswrapper[5043]: E1125 09:39:56.971340 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:39:57 crc kubenswrapper[5043]: I1125 09:39:57.475398 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-fkqdz_e214977c-6456-4990-b061-b88f5a127836/kube-rbac-proxy/0.log" Nov 25 09:39:57 crc kubenswrapper[5043]: I1125 09:39:57.681026 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/cp-frr-files/0.log" Nov 25 09:39:57 crc kubenswrapper[5043]: I1125 09:39:57.705903 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-fkqdz_e214977c-6456-4990-b061-b88f5a127836/controller/0.log" Nov 25 09:39:58 crc kubenswrapper[5043]: I1125 09:39:58.082406 5043 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/cp-frr-files/0.log" Nov 25 09:39:58 crc kubenswrapper[5043]: I1125 09:39:58.105973 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/cp-reloader/0.log" Nov 25 09:39:58 crc kubenswrapper[5043]: I1125 09:39:58.120531 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/cp-reloader/0.log" Nov 25 09:39:58 crc kubenswrapper[5043]: I1125 09:39:58.133740 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/cp-metrics/0.log" Nov 25 09:39:58 crc kubenswrapper[5043]: I1125 09:39:58.268918 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/cp-frr-files/0.log" Nov 25 09:39:58 crc kubenswrapper[5043]: I1125 09:39:58.310488 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/cp-reloader/0.log" Nov 25 09:39:58 crc kubenswrapper[5043]: I1125 09:39:58.312662 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/cp-metrics/0.log" Nov 25 09:39:58 crc kubenswrapper[5043]: I1125 09:39:58.330699 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/cp-metrics/0.log" Nov 25 09:39:58 crc kubenswrapper[5043]: I1125 09:39:58.563799 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/cp-reloader/0.log" Nov 25 09:39:58 crc kubenswrapper[5043]: I1125 09:39:58.588890 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/cp-frr-files/0.log" Nov 25 09:39:58 crc kubenswrapper[5043]: I1125 09:39:58.601371 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/cp-metrics/0.log" Nov 25 09:39:58 crc kubenswrapper[5043]: I1125 09:39:58.604182 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/controller/0.log" Nov 25 09:39:58 crc kubenswrapper[5043]: I1125 09:39:58.798366 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/frr-metrics/0.log" Nov 25 09:39:58 crc kubenswrapper[5043]: I1125 09:39:58.838670 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/kube-rbac-proxy/0.log" Nov 25 09:39:58 crc kubenswrapper[5043]: I1125 09:39:58.895791 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/kube-rbac-proxy-frr/0.log" Nov 25 09:39:59 crc kubenswrapper[5043]: I1125 09:39:59.051285 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/reloader/0.log" Nov 25 09:39:59 crc kubenswrapper[5043]: I1125 09:39:59.152398 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-4mwjl_4d08af94-ced7-41f2-a5da-4a5ab09436bb/frr-k8s-webhook-server/0.log" Nov 25 09:39:59 crc kubenswrapper[5043]: I1125 09:39:59.342477 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-85bdd6cc97-lrkkr_cdbab2e0-494c-4845-a500-88b26934f1c7/manager/3.log" Nov 25 09:39:59 crc kubenswrapper[5043]: I1125 09:39:59.387871 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-85bdd6cc97-lrkkr_cdbab2e0-494c-4845-a500-88b26934f1c7/manager/2.log" Nov 25 09:39:59 crc kubenswrapper[5043]: I1125 09:39:59.619817 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-687d746769-dbszt_d592d149-d73b-4db0-a83f-81fdb776420a/webhook-server/0.log" Nov 25 09:39:59 crc kubenswrapper[5043]: I1125 09:39:59.863087 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8sqcm_f6fde8c1-7722-4081-ae09-6f0cf5af35c4/kube-rbac-proxy/0.log" Nov 25 09:40:00 crc kubenswrapper[5043]: I1125 09:40:00.564531 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8sqcm_f6fde8c1-7722-4081-ae09-6f0cf5af35c4/speaker/0.log" Nov 25 09:40:00 crc kubenswrapper[5043]: I1125 09:40:00.902205 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/frr/0.log" Nov 25 09:40:09 crc kubenswrapper[5043]: I1125 09:40:09.964468 5043 scope.go:117] "RemoveContainer" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 25 09:40:09 crc kubenswrapper[5043]: E1125 09:40:09.966297 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:40:12 crc kubenswrapper[5043]: I1125 09:40:12.585068 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v_51d2d2e9-ab00-458f-b284-965e99abbdb3/util/0.log" Nov 25 09:40:12 crc kubenswrapper[5043]: I1125 
09:40:12.750658 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v_51d2d2e9-ab00-458f-b284-965e99abbdb3/pull/0.log" Nov 25 09:40:12 crc kubenswrapper[5043]: I1125 09:40:12.796176 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v_51d2d2e9-ab00-458f-b284-965e99abbdb3/util/0.log" Nov 25 09:40:12 crc kubenswrapper[5043]: I1125 09:40:12.862074 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v_51d2d2e9-ab00-458f-b284-965e99abbdb3/pull/0.log" Nov 25 09:40:12 crc kubenswrapper[5043]: I1125 09:40:12.992401 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v_51d2d2e9-ab00-458f-b284-965e99abbdb3/util/0.log" Nov 25 09:40:13 crc kubenswrapper[5043]: I1125 09:40:13.005867 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v_51d2d2e9-ab00-458f-b284-965e99abbdb3/pull/0.log" Nov 25 09:40:13 crc kubenswrapper[5043]: I1125 09:40:13.022064 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v_51d2d2e9-ab00-458f-b284-965e99abbdb3/extract/0.log" Nov 25 09:40:13 crc kubenswrapper[5043]: I1125 09:40:13.158117 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bn6tl_e0b8135e-a7a5-462c-9ba8-0d36b6778807/extract-utilities/0.log" Nov 25 09:40:13 crc kubenswrapper[5043]: I1125 09:40:13.344622 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bn6tl_e0b8135e-a7a5-462c-9ba8-0d36b6778807/extract-content/0.log" 
Nov 25 09:40:13 crc kubenswrapper[5043]: I1125 09:40:13.350699 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bn6tl_e0b8135e-a7a5-462c-9ba8-0d36b6778807/extract-content/0.log" Nov 25 09:40:13 crc kubenswrapper[5043]: I1125 09:40:13.355793 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bn6tl_e0b8135e-a7a5-462c-9ba8-0d36b6778807/extract-utilities/0.log" Nov 25 09:40:13 crc kubenswrapper[5043]: I1125 09:40:13.506720 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bn6tl_e0b8135e-a7a5-462c-9ba8-0d36b6778807/extract-utilities/0.log" Nov 25 09:40:13 crc kubenswrapper[5043]: I1125 09:40:13.564272 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bn6tl_e0b8135e-a7a5-462c-9ba8-0d36b6778807/extract-content/0.log" Nov 25 09:40:13 crc kubenswrapper[5043]: I1125 09:40:13.775398 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6sqfs_c5441050-a90f-49f4-89a4-c40076857f5e/extract-utilities/0.log" Nov 25 09:40:13 crc kubenswrapper[5043]: I1125 09:40:13.911829 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bn6tl_e0b8135e-a7a5-462c-9ba8-0d36b6778807/registry-server/0.log" Nov 25 09:40:13 crc kubenswrapper[5043]: I1125 09:40:13.951669 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6sqfs_c5441050-a90f-49f4-89a4-c40076857f5e/extract-utilities/0.log" Nov 25 09:40:13 crc kubenswrapper[5043]: I1125 09:40:13.975656 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6sqfs_c5441050-a90f-49f4-89a4-c40076857f5e/extract-content/0.log" Nov 25 09:40:14 crc kubenswrapper[5043]: I1125 09:40:14.039940 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-6sqfs_c5441050-a90f-49f4-89a4-c40076857f5e/extract-content/0.log" Nov 25 09:40:14 crc kubenswrapper[5043]: I1125 09:40:14.207201 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6sqfs_c5441050-a90f-49f4-89a4-c40076857f5e/extract-utilities/0.log" Nov 25 09:40:14 crc kubenswrapper[5043]: I1125 09:40:14.218532 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6sqfs_c5441050-a90f-49f4-89a4-c40076857f5e/extract-content/0.log" Nov 25 09:40:14 crc kubenswrapper[5043]: I1125 09:40:14.447843 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6_75d56d2d-27c2-4a6d-9f9f-3975af3a6bed/util/0.log" Nov 25 09:40:14 crc kubenswrapper[5043]: I1125 09:40:14.676678 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6_75d56d2d-27c2-4a6d-9f9f-3975af3a6bed/pull/0.log" Nov 25 09:40:14 crc kubenswrapper[5043]: I1125 09:40:14.703337 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6_75d56d2d-27c2-4a6d-9f9f-3975af3a6bed/pull/0.log" Nov 25 09:40:14 crc kubenswrapper[5043]: I1125 09:40:14.757810 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6_75d56d2d-27c2-4a6d-9f9f-3975af3a6bed/util/0.log" Nov 25 09:40:14 crc kubenswrapper[5043]: I1125 09:40:14.964666 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6_75d56d2d-27c2-4a6d-9f9f-3975af3a6bed/pull/0.log" Nov 25 09:40:14 crc kubenswrapper[5043]: I1125 09:40:14.974998 5043 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6sqfs_c5441050-a90f-49f4-89a4-c40076857f5e/registry-server/0.log" Nov 25 09:40:15 crc kubenswrapper[5043]: I1125 09:40:15.012984 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6_75d56d2d-27c2-4a6d-9f9f-3975af3a6bed/util/0.log" Nov 25 09:40:15 crc kubenswrapper[5043]: I1125 09:40:15.032437 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6_75d56d2d-27c2-4a6d-9f9f-3975af3a6bed/extract/0.log" Nov 25 09:40:15 crc kubenswrapper[5043]: I1125 09:40:15.161286 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-86pjb_b3387cfc-ac92-4b17-b153-e30513638741/marketplace-operator/0.log" Nov 25 09:40:15 crc kubenswrapper[5043]: I1125 09:40:15.293266 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2t8s_72da6310-6558-476b-8cb4-32e7b6983b67/extract-utilities/0.log" Nov 25 09:40:15 crc kubenswrapper[5043]: I1125 09:40:15.440680 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2t8s_72da6310-6558-476b-8cb4-32e7b6983b67/extract-utilities/0.log" Nov 25 09:40:15 crc kubenswrapper[5043]: I1125 09:40:15.466229 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2t8s_72da6310-6558-476b-8cb4-32e7b6983b67/extract-content/0.log" Nov 25 09:40:15 crc kubenswrapper[5043]: I1125 09:40:15.481393 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2t8s_72da6310-6558-476b-8cb4-32e7b6983b67/extract-content/0.log" Nov 25 09:40:15 crc kubenswrapper[5043]: I1125 09:40:15.708256 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2t8s_72da6310-6558-476b-8cb4-32e7b6983b67/extract-utilities/0.log" Nov 25 09:40:15 crc kubenswrapper[5043]: I1125 09:40:15.724874 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2t8s_72da6310-6558-476b-8cb4-32e7b6983b67/extract-content/0.log" Nov 25 09:40:15 crc kubenswrapper[5043]: I1125 09:40:15.990882 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tm5dw_ff7e436c-9335-4145-8c68-31dd3da7d4ed/extract-utilities/0.log" Nov 25 09:40:16 crc kubenswrapper[5043]: I1125 09:40:16.171452 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tm5dw_ff7e436c-9335-4145-8c68-31dd3da7d4ed/extract-content/0.log" Nov 25 09:40:16 crc kubenswrapper[5043]: I1125 09:40:16.187019 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tm5dw_ff7e436c-9335-4145-8c68-31dd3da7d4ed/extract-utilities/0.log" Nov 25 09:40:16 crc kubenswrapper[5043]: I1125 09:40:16.239745 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tm5dw_ff7e436c-9335-4145-8c68-31dd3da7d4ed/extract-content/0.log" Nov 25 09:40:16 crc kubenswrapper[5043]: I1125 09:40:16.383242 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2t8s_72da6310-6558-476b-8cb4-32e7b6983b67/registry-server/0.log" Nov 25 09:40:16 crc kubenswrapper[5043]: I1125 09:40:16.429473 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tm5dw_ff7e436c-9335-4145-8c68-31dd3da7d4ed/extract-utilities/0.log" Nov 25 09:40:16 crc kubenswrapper[5043]: I1125 09:40:16.532559 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tm5dw_ff7e436c-9335-4145-8c68-31dd3da7d4ed/extract-content/0.log" Nov 
25 09:40:17 crc kubenswrapper[5043]: I1125 09:40:17.635017 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tm5dw_ff7e436c-9335-4145-8c68-31dd3da7d4ed/registry-server/0.log" Nov 25 09:40:20 crc kubenswrapper[5043]: I1125 09:40:20.963276 5043 scope.go:117] "RemoveContainer" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 25 09:40:20 crc kubenswrapper[5043]: E1125 09:40:20.964865 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:40:27 crc kubenswrapper[5043]: I1125 09:40:27.684780 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v858q"] Nov 25 09:40:27 crc kubenswrapper[5043]: E1125 09:40:27.685793 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c2f85a5-03b7-40f7-aad8-dabb53ffaee3" containerName="container-00" Nov 25 09:40:27 crc kubenswrapper[5043]: I1125 09:40:27.685808 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c2f85a5-03b7-40f7-aad8-dabb53ffaee3" containerName="container-00" Nov 25 09:40:27 crc kubenswrapper[5043]: I1125 09:40:27.686008 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c2f85a5-03b7-40f7-aad8-dabb53ffaee3" containerName="container-00" Nov 25 09:40:27 crc kubenswrapper[5043]: I1125 09:40:27.687555 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v858q" Nov 25 09:40:27 crc kubenswrapper[5043]: I1125 09:40:27.707708 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v858q"] Nov 25 09:40:27 crc kubenswrapper[5043]: I1125 09:40:27.760474 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb20bec6-271d-44a9-9657-3d8cef23c902-utilities\") pod \"redhat-operators-v858q\" (UID: \"bb20bec6-271d-44a9-9657-3d8cef23c902\") " pod="openshift-marketplace/redhat-operators-v858q" Nov 25 09:40:27 crc kubenswrapper[5043]: I1125 09:40:27.760782 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb20bec6-271d-44a9-9657-3d8cef23c902-catalog-content\") pod \"redhat-operators-v858q\" (UID: \"bb20bec6-271d-44a9-9657-3d8cef23c902\") " pod="openshift-marketplace/redhat-operators-v858q" Nov 25 09:40:27 crc kubenswrapper[5043]: I1125 09:40:27.760890 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb7qj\" (UniqueName: \"kubernetes.io/projected/bb20bec6-271d-44a9-9657-3d8cef23c902-kube-api-access-bb7qj\") pod \"redhat-operators-v858q\" (UID: \"bb20bec6-271d-44a9-9657-3d8cef23c902\") " pod="openshift-marketplace/redhat-operators-v858q" Nov 25 09:40:27 crc kubenswrapper[5043]: I1125 09:40:27.863295 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb20bec6-271d-44a9-9657-3d8cef23c902-catalog-content\") pod \"redhat-operators-v858q\" (UID: \"bb20bec6-271d-44a9-9657-3d8cef23c902\") " pod="openshift-marketplace/redhat-operators-v858q" Nov 25 09:40:27 crc kubenswrapper[5043]: I1125 09:40:27.863364 5043 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-bb7qj\" (UniqueName: \"kubernetes.io/projected/bb20bec6-271d-44a9-9657-3d8cef23c902-kube-api-access-bb7qj\") pod \"redhat-operators-v858q\" (UID: \"bb20bec6-271d-44a9-9657-3d8cef23c902\") " pod="openshift-marketplace/redhat-operators-v858q" Nov 25 09:40:27 crc kubenswrapper[5043]: I1125 09:40:27.863490 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb20bec6-271d-44a9-9657-3d8cef23c902-utilities\") pod \"redhat-operators-v858q\" (UID: \"bb20bec6-271d-44a9-9657-3d8cef23c902\") " pod="openshift-marketplace/redhat-operators-v858q" Nov 25 09:40:27 crc kubenswrapper[5043]: I1125 09:40:27.863936 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb20bec6-271d-44a9-9657-3d8cef23c902-catalog-content\") pod \"redhat-operators-v858q\" (UID: \"bb20bec6-271d-44a9-9657-3d8cef23c902\") " pod="openshift-marketplace/redhat-operators-v858q" Nov 25 09:40:27 crc kubenswrapper[5043]: I1125 09:40:27.863999 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb20bec6-271d-44a9-9657-3d8cef23c902-utilities\") pod \"redhat-operators-v858q\" (UID: \"bb20bec6-271d-44a9-9657-3d8cef23c902\") " pod="openshift-marketplace/redhat-operators-v858q" Nov 25 09:40:27 crc kubenswrapper[5043]: I1125 09:40:27.883786 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb7qj\" (UniqueName: \"kubernetes.io/projected/bb20bec6-271d-44a9-9657-3d8cef23c902-kube-api-access-bb7qj\") pod \"redhat-operators-v858q\" (UID: \"bb20bec6-271d-44a9-9657-3d8cef23c902\") " pod="openshift-marketplace/redhat-operators-v858q" Nov 25 09:40:28 crc kubenswrapper[5043]: I1125 09:40:28.010902 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v858q" Nov 25 09:40:28 crc kubenswrapper[5043]: W1125 09:40:28.552946 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb20bec6_271d_44a9_9657_3d8cef23c902.slice/crio-a462f75cfef16ef417b7fd9a30f017ce1355800e7df40607bbc2386b3e08480d WatchSource:0}: Error finding container a462f75cfef16ef417b7fd9a30f017ce1355800e7df40607bbc2386b3e08480d: Status 404 returned error can't find the container with id a462f75cfef16ef417b7fd9a30f017ce1355800e7df40607bbc2386b3e08480d Nov 25 09:40:28 crc kubenswrapper[5043]: I1125 09:40:28.557018 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v858q"] Nov 25 09:40:29 crc kubenswrapper[5043]: I1125 09:40:29.368785 5043 generic.go:334] "Generic (PLEG): container finished" podID="bb20bec6-271d-44a9-9657-3d8cef23c902" containerID="d1518158bd89491815c17fbf98d6e3da815c2841c79622c9bf816ea68ff0feb4" exitCode=0 Nov 25 09:40:29 crc kubenswrapper[5043]: I1125 09:40:29.368870 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v858q" event={"ID":"bb20bec6-271d-44a9-9657-3d8cef23c902","Type":"ContainerDied","Data":"d1518158bd89491815c17fbf98d6e3da815c2841c79622c9bf816ea68ff0feb4"} Nov 25 09:40:29 crc kubenswrapper[5043]: I1125 09:40:29.369184 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v858q" event={"ID":"bb20bec6-271d-44a9-9657-3d8cef23c902","Type":"ContainerStarted","Data":"a462f75cfef16ef417b7fd9a30f017ce1355800e7df40607bbc2386b3e08480d"} Nov 25 09:40:29 crc kubenswrapper[5043]: I1125 09:40:29.372064 5043 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 09:40:32 crc kubenswrapper[5043]: I1125 09:40:32.395789 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-v858q" event={"ID":"bb20bec6-271d-44a9-9657-3d8cef23c902","Type":"ContainerStarted","Data":"fb3fcbda6dd6e73b384a59483af82b464aaca03c516e33c9a9b5868881f080ac"} Nov 25 09:40:35 crc kubenswrapper[5043]: I1125 09:40:35.963158 5043 scope.go:117] "RemoveContainer" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 25 09:40:37 crc kubenswrapper[5043]: E1125 09:40:35.964018 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:40:45 crc kubenswrapper[5043]: E1125 09:40:45.963333 5043 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 09:40:49 crc kubenswrapper[5043]: I1125 09:40:49.554231 5043 generic.go:334] "Generic (PLEG): container finished" podID="bb20bec6-271d-44a9-9657-3d8cef23c902" containerID="fb3fcbda6dd6e73b384a59483af82b464aaca03c516e33c9a9b5868881f080ac" exitCode=0 Nov 25 09:40:49 crc kubenswrapper[5043]: I1125 09:40:49.554303 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v858q" event={"ID":"bb20bec6-271d-44a9-9657-3d8cef23c902","Type":"ContainerDied","Data":"fb3fcbda6dd6e73b384a59483af82b464aaca03c516e33c9a9b5868881f080ac"} Nov 25 09:40:50 crc kubenswrapper[5043]: I1125 09:40:50.962790 5043 scope.go:117] "RemoveContainer" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 25 09:40:50 crc kubenswrapper[5043]: E1125 
09:40:50.963350 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:40:51 crc kubenswrapper[5043]: I1125 09:40:51.574579 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v858q" event={"ID":"bb20bec6-271d-44a9-9657-3d8cef23c902","Type":"ContainerStarted","Data":"72d4b2a3a9958873945fc583db26dac0dc72b132ba2a4478da37ab9510a9eb12"} Nov 25 09:40:51 crc kubenswrapper[5043]: I1125 09:40:51.592305 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v858q" podStartSLOduration=3.511166267 podStartE2EDuration="24.592290323s" podCreationTimestamp="2025-11-25 09:40:27 +0000 UTC" firstStartedPulling="2025-11-25 09:40:29.371765032 +0000 UTC m=+8693.539960753" lastFinishedPulling="2025-11-25 09:40:50.452889088 +0000 UTC m=+8714.621084809" observedRunningTime="2025-11-25 09:40:51.58957704 +0000 UTC m=+8715.757772761" watchObservedRunningTime="2025-11-25 09:40:51.592290323 +0000 UTC m=+8715.760486044" Nov 25 09:40:58 crc kubenswrapper[5043]: I1125 09:40:58.011162 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v858q" Nov 25 09:40:58 crc kubenswrapper[5043]: I1125 09:40:58.011547 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v858q" Nov 25 09:40:59 crc kubenswrapper[5043]: I1125 09:40:59.054514 5043 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v858q" podUID="bb20bec6-271d-44a9-9657-3d8cef23c902" 
containerName="registry-server" probeResult="failure" output=< Nov 25 09:40:59 crc kubenswrapper[5043]: timeout: failed to connect service ":50051" within 1s Nov 25 09:40:59 crc kubenswrapper[5043]: > Nov 25 09:41:05 crc kubenswrapper[5043]: I1125 09:41:05.963199 5043 scope.go:117] "RemoveContainer" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 25 09:41:05 crc kubenswrapper[5043]: E1125 09:41:05.965333 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:41:08 crc kubenswrapper[5043]: I1125 09:41:08.064754 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v858q" Nov 25 09:41:08 crc kubenswrapper[5043]: I1125 09:41:08.116342 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v858q" Nov 25 09:41:08 crc kubenswrapper[5043]: I1125 09:41:08.305198 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v858q"] Nov 25 09:41:09 crc kubenswrapper[5043]: I1125 09:41:09.728789 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v858q" podUID="bb20bec6-271d-44a9-9657-3d8cef23c902" containerName="registry-server" containerID="cri-o://72d4b2a3a9958873945fc583db26dac0dc72b132ba2a4478da37ab9510a9eb12" gracePeriod=2 Nov 25 09:41:10 crc kubenswrapper[5043]: I1125 09:41:10.740107 5043 generic.go:334] "Generic (PLEG): container finished" podID="bb20bec6-271d-44a9-9657-3d8cef23c902" 
containerID="72d4b2a3a9958873945fc583db26dac0dc72b132ba2a4478da37ab9510a9eb12" exitCode=0 Nov 25 09:41:10 crc kubenswrapper[5043]: I1125 09:41:10.740176 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v858q" event={"ID":"bb20bec6-271d-44a9-9657-3d8cef23c902","Type":"ContainerDied","Data":"72d4b2a3a9958873945fc583db26dac0dc72b132ba2a4478da37ab9510a9eb12"} Nov 25 09:41:10 crc kubenswrapper[5043]: I1125 09:41:10.904413 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v858q" Nov 25 09:41:10 crc kubenswrapper[5043]: I1125 09:41:10.965411 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb20bec6-271d-44a9-9657-3d8cef23c902-utilities\") pod \"bb20bec6-271d-44a9-9657-3d8cef23c902\" (UID: \"bb20bec6-271d-44a9-9657-3d8cef23c902\") " Nov 25 09:41:10 crc kubenswrapper[5043]: I1125 09:41:10.966043 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb20bec6-271d-44a9-9657-3d8cef23c902-catalog-content\") pod \"bb20bec6-271d-44a9-9657-3d8cef23c902\" (UID: \"bb20bec6-271d-44a9-9657-3d8cef23c902\") " Nov 25 09:41:10 crc kubenswrapper[5043]: I1125 09:41:10.966073 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb7qj\" (UniqueName: \"kubernetes.io/projected/bb20bec6-271d-44a9-9657-3d8cef23c902-kube-api-access-bb7qj\") pod \"bb20bec6-271d-44a9-9657-3d8cef23c902\" (UID: \"bb20bec6-271d-44a9-9657-3d8cef23c902\") " Nov 25 09:41:10 crc kubenswrapper[5043]: I1125 09:41:10.967433 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb20bec6-271d-44a9-9657-3d8cef23c902-utilities" (OuterVolumeSpecName: "utilities") pod "bb20bec6-271d-44a9-9657-3d8cef23c902" (UID: 
"bb20bec6-271d-44a9-9657-3d8cef23c902"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:41:10 crc kubenswrapper[5043]: I1125 09:41:10.983998 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb20bec6-271d-44a9-9657-3d8cef23c902-kube-api-access-bb7qj" (OuterVolumeSpecName: "kube-api-access-bb7qj") pod "bb20bec6-271d-44a9-9657-3d8cef23c902" (UID: "bb20bec6-271d-44a9-9657-3d8cef23c902"). InnerVolumeSpecName "kube-api-access-bb7qj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:41:11 crc kubenswrapper[5043]: I1125 09:41:11.067681 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb20bec6-271d-44a9-9657-3d8cef23c902-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb20bec6-271d-44a9-9657-3d8cef23c902" (UID: "bb20bec6-271d-44a9-9657-3d8cef23c902"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:41:11 crc kubenswrapper[5043]: I1125 09:41:11.069164 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb20bec6-271d-44a9-9657-3d8cef23c902-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:41:11 crc kubenswrapper[5043]: I1125 09:41:11.069200 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb7qj\" (UniqueName: \"kubernetes.io/projected/bb20bec6-271d-44a9-9657-3d8cef23c902-kube-api-access-bb7qj\") on node \"crc\" DevicePath \"\"" Nov 25 09:41:11 crc kubenswrapper[5043]: I1125 09:41:11.069217 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb20bec6-271d-44a9-9657-3d8cef23c902-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:41:11 crc kubenswrapper[5043]: I1125 09:41:11.749861 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-v858q" event={"ID":"bb20bec6-271d-44a9-9657-3d8cef23c902","Type":"ContainerDied","Data":"a462f75cfef16ef417b7fd9a30f017ce1355800e7df40607bbc2386b3e08480d"} Nov 25 09:41:11 crc kubenswrapper[5043]: I1125 09:41:11.750259 5043 scope.go:117] "RemoveContainer" containerID="72d4b2a3a9958873945fc583db26dac0dc72b132ba2a4478da37ab9510a9eb12" Nov 25 09:41:11 crc kubenswrapper[5043]: I1125 09:41:11.749899 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v858q" Nov 25 09:41:11 crc kubenswrapper[5043]: I1125 09:41:11.779840 5043 scope.go:117] "RemoveContainer" containerID="fb3fcbda6dd6e73b384a59483af82b464aaca03c516e33c9a9b5868881f080ac" Nov 25 09:41:11 crc kubenswrapper[5043]: I1125 09:41:11.792846 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v858q"] Nov 25 09:41:11 crc kubenswrapper[5043]: I1125 09:41:11.798729 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v858q"] Nov 25 09:41:11 crc kubenswrapper[5043]: I1125 09:41:11.836885 5043 scope.go:117] "RemoveContainer" containerID="d1518158bd89491815c17fbf98d6e3da815c2841c79622c9bf816ea68ff0feb4" Nov 25 09:41:12 crc kubenswrapper[5043]: I1125 09:41:12.986278 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb20bec6-271d-44a9-9657-3d8cef23c902" path="/var/lib/kubelet/pods/bb20bec6-271d-44a9-9657-3d8cef23c902/volumes" Nov 25 09:41:16 crc kubenswrapper[5043]: I1125 09:41:16.973190 5043 scope.go:117] "RemoveContainer" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 25 09:41:16 crc kubenswrapper[5043]: E1125 09:41:16.973926 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:41:28 crc kubenswrapper[5043]: I1125 09:41:28.967375 5043 scope.go:117] "RemoveContainer" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 25 09:41:28 crc kubenswrapper[5043]: E1125 09:41:28.968136 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:41:42 crc kubenswrapper[5043]: I1125 09:41:42.964858 5043 scope.go:117] "RemoveContainer" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 25 09:41:42 crc kubenswrapper[5043]: E1125 09:41:42.965589 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:41:53 crc kubenswrapper[5043]: I1125 09:41:53.962775 5043 scope.go:117] "RemoveContainer" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 25 09:41:54 crc kubenswrapper[5043]: I1125 09:41:54.426093 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" 
event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"a130716f04cf2be78c4090558741c401be0edb499b64015e99218420408c0fd5"} Nov 25 09:42:14 crc kubenswrapper[5043]: E1125 09:42:14.968261 5043 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 09:42:22 crc kubenswrapper[5043]: I1125 09:42:22.945363 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x25fq"] Nov 25 09:42:22 crc kubenswrapper[5043]: E1125 09:42:22.946246 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb20bec6-271d-44a9-9657-3d8cef23c902" containerName="extract-content" Nov 25 09:42:22 crc kubenswrapper[5043]: I1125 09:42:22.946258 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb20bec6-271d-44a9-9657-3d8cef23c902" containerName="extract-content" Nov 25 09:42:22 crc kubenswrapper[5043]: E1125 09:42:22.946278 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb20bec6-271d-44a9-9657-3d8cef23c902" containerName="registry-server" Nov 25 09:42:22 crc kubenswrapper[5043]: I1125 09:42:22.946284 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb20bec6-271d-44a9-9657-3d8cef23c902" containerName="registry-server" Nov 25 09:42:22 crc kubenswrapper[5043]: E1125 09:42:22.946314 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb20bec6-271d-44a9-9657-3d8cef23c902" containerName="extract-utilities" Nov 25 09:42:22 crc kubenswrapper[5043]: I1125 09:42:22.946321 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb20bec6-271d-44a9-9657-3d8cef23c902" containerName="extract-utilities" Nov 25 09:42:22 crc kubenswrapper[5043]: I1125 09:42:22.946509 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb20bec6-271d-44a9-9657-3d8cef23c902" 
containerName="registry-server" Nov 25 09:42:22 crc kubenswrapper[5043]: I1125 09:42:22.947901 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x25fq" Nov 25 09:42:22 crc kubenswrapper[5043]: I1125 09:42:22.960113 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x25fq"] Nov 25 09:42:23 crc kubenswrapper[5043]: I1125 09:42:23.068715 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwfsw\" (UniqueName: \"kubernetes.io/projected/deb4774b-fb50-41c8-ac9d-ab3ba87376e6-kube-api-access-bwfsw\") pod \"redhat-marketplace-x25fq\" (UID: \"deb4774b-fb50-41c8-ac9d-ab3ba87376e6\") " pod="openshift-marketplace/redhat-marketplace-x25fq" Nov 25 09:42:23 crc kubenswrapper[5043]: I1125 09:42:23.069618 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb4774b-fb50-41c8-ac9d-ab3ba87376e6-catalog-content\") pod \"redhat-marketplace-x25fq\" (UID: \"deb4774b-fb50-41c8-ac9d-ab3ba87376e6\") " pod="openshift-marketplace/redhat-marketplace-x25fq" Nov 25 09:42:23 crc kubenswrapper[5043]: I1125 09:42:23.070166 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deb4774b-fb50-41c8-ac9d-ab3ba87376e6-utilities\") pod \"redhat-marketplace-x25fq\" (UID: \"deb4774b-fb50-41c8-ac9d-ab3ba87376e6\") " pod="openshift-marketplace/redhat-marketplace-x25fq" Nov 25 09:42:23 crc kubenswrapper[5043]: I1125 09:42:23.172173 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwfsw\" (UniqueName: \"kubernetes.io/projected/deb4774b-fb50-41c8-ac9d-ab3ba87376e6-kube-api-access-bwfsw\") pod \"redhat-marketplace-x25fq\" (UID: \"deb4774b-fb50-41c8-ac9d-ab3ba87376e6\") " 
pod="openshift-marketplace/redhat-marketplace-x25fq" Nov 25 09:42:23 crc kubenswrapper[5043]: I1125 09:42:23.172697 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb4774b-fb50-41c8-ac9d-ab3ba87376e6-catalog-content\") pod \"redhat-marketplace-x25fq\" (UID: \"deb4774b-fb50-41c8-ac9d-ab3ba87376e6\") " pod="openshift-marketplace/redhat-marketplace-x25fq" Nov 25 09:42:23 crc kubenswrapper[5043]: I1125 09:42:23.172253 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb4774b-fb50-41c8-ac9d-ab3ba87376e6-catalog-content\") pod \"redhat-marketplace-x25fq\" (UID: \"deb4774b-fb50-41c8-ac9d-ab3ba87376e6\") " pod="openshift-marketplace/redhat-marketplace-x25fq" Nov 25 09:42:23 crc kubenswrapper[5043]: I1125 09:42:23.172793 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deb4774b-fb50-41c8-ac9d-ab3ba87376e6-utilities\") pod \"redhat-marketplace-x25fq\" (UID: \"deb4774b-fb50-41c8-ac9d-ab3ba87376e6\") " pod="openshift-marketplace/redhat-marketplace-x25fq" Nov 25 09:42:23 crc kubenswrapper[5043]: I1125 09:42:23.173039 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deb4774b-fb50-41c8-ac9d-ab3ba87376e6-utilities\") pod \"redhat-marketplace-x25fq\" (UID: \"deb4774b-fb50-41c8-ac9d-ab3ba87376e6\") " pod="openshift-marketplace/redhat-marketplace-x25fq" Nov 25 09:42:23 crc kubenswrapper[5043]: I1125 09:42:23.194951 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwfsw\" (UniqueName: \"kubernetes.io/projected/deb4774b-fb50-41c8-ac9d-ab3ba87376e6-kube-api-access-bwfsw\") pod \"redhat-marketplace-x25fq\" (UID: \"deb4774b-fb50-41c8-ac9d-ab3ba87376e6\") " pod="openshift-marketplace/redhat-marketplace-x25fq" Nov 25 
09:42:23 crc kubenswrapper[5043]: I1125 09:42:23.292936 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x25fq" Nov 25 09:42:23 crc kubenswrapper[5043]: I1125 09:42:23.751585 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x25fq"] Nov 25 09:42:24 crc kubenswrapper[5043]: I1125 09:42:24.752057 5043 generic.go:334] "Generic (PLEG): container finished" podID="deb4774b-fb50-41c8-ac9d-ab3ba87376e6" containerID="18baa6ab9eaaab564746fc3da07765bd180eec2f440089b952ce51bb3dc93dbe" exitCode=0 Nov 25 09:42:24 crc kubenswrapper[5043]: I1125 09:42:24.753438 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x25fq" event={"ID":"deb4774b-fb50-41c8-ac9d-ab3ba87376e6","Type":"ContainerDied","Data":"18baa6ab9eaaab564746fc3da07765bd180eec2f440089b952ce51bb3dc93dbe"} Nov 25 09:42:24 crc kubenswrapper[5043]: I1125 09:42:24.753521 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x25fq" event={"ID":"deb4774b-fb50-41c8-ac9d-ab3ba87376e6","Type":"ContainerStarted","Data":"3416880efde89e9cb9e11930266d8331f65521ce64506b9b5cd83838feca4558"} Nov 25 09:42:25 crc kubenswrapper[5043]: I1125 09:42:25.766260 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x25fq" event={"ID":"deb4774b-fb50-41c8-ac9d-ab3ba87376e6","Type":"ContainerStarted","Data":"817b0ee48762add08b118f6d534c4f96605d2d88de9d826b5ba04b4a53deed33"} Nov 25 09:42:26 crc kubenswrapper[5043]: I1125 09:42:26.786373 5043 generic.go:334] "Generic (PLEG): container finished" podID="deb4774b-fb50-41c8-ac9d-ab3ba87376e6" containerID="817b0ee48762add08b118f6d534c4f96605d2d88de9d826b5ba04b4a53deed33" exitCode=0 Nov 25 09:42:26 crc kubenswrapper[5043]: I1125 09:42:26.786521 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-x25fq" event={"ID":"deb4774b-fb50-41c8-ac9d-ab3ba87376e6","Type":"ContainerDied","Data":"817b0ee48762add08b118f6d534c4f96605d2d88de9d826b5ba04b4a53deed33"} Nov 25 09:42:27 crc kubenswrapper[5043]: I1125 09:42:27.798781 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x25fq" event={"ID":"deb4774b-fb50-41c8-ac9d-ab3ba87376e6","Type":"ContainerStarted","Data":"70ee59307498bf487423449d997dd32cdbeb49af378b49d06b412fc3bc40b682"} Nov 25 09:42:27 crc kubenswrapper[5043]: I1125 09:42:27.820700 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x25fq" podStartSLOduration=3.281169969 podStartE2EDuration="5.820680171s" podCreationTimestamp="2025-11-25 09:42:22 +0000 UTC" firstStartedPulling="2025-11-25 09:42:24.756424075 +0000 UTC m=+8808.924619796" lastFinishedPulling="2025-11-25 09:42:27.295934277 +0000 UTC m=+8811.464129998" observedRunningTime="2025-11-25 09:42:27.816302153 +0000 UTC m=+8811.984497874" watchObservedRunningTime="2025-11-25 09:42:27.820680171 +0000 UTC m=+8811.988875892" Nov 25 09:42:33 crc kubenswrapper[5043]: I1125 09:42:33.294032 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x25fq" Nov 25 09:42:33 crc kubenswrapper[5043]: I1125 09:42:33.294683 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x25fq" Nov 25 09:42:33 crc kubenswrapper[5043]: I1125 09:42:33.354433 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x25fq" Nov 25 09:42:33 crc kubenswrapper[5043]: I1125 09:42:33.916339 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x25fq" Nov 25 09:42:33 crc kubenswrapper[5043]: I1125 09:42:33.967758 5043 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x25fq"] Nov 25 09:42:35 crc kubenswrapper[5043]: I1125 09:42:35.885949 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x25fq" podUID="deb4774b-fb50-41c8-ac9d-ab3ba87376e6" containerName="registry-server" containerID="cri-o://70ee59307498bf487423449d997dd32cdbeb49af378b49d06b412fc3bc40b682" gracePeriod=2 Nov 25 09:42:36 crc kubenswrapper[5043]: I1125 09:42:36.354578 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x25fq" Nov 25 09:42:36 crc kubenswrapper[5043]: I1125 09:42:36.436368 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwfsw\" (UniqueName: \"kubernetes.io/projected/deb4774b-fb50-41c8-ac9d-ab3ba87376e6-kube-api-access-bwfsw\") pod \"deb4774b-fb50-41c8-ac9d-ab3ba87376e6\" (UID: \"deb4774b-fb50-41c8-ac9d-ab3ba87376e6\") " Nov 25 09:42:36 crc kubenswrapper[5043]: I1125 09:42:36.436591 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb4774b-fb50-41c8-ac9d-ab3ba87376e6-catalog-content\") pod \"deb4774b-fb50-41c8-ac9d-ab3ba87376e6\" (UID: \"deb4774b-fb50-41c8-ac9d-ab3ba87376e6\") " Nov 25 09:42:36 crc kubenswrapper[5043]: I1125 09:42:36.436649 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deb4774b-fb50-41c8-ac9d-ab3ba87376e6-utilities\") pod \"deb4774b-fb50-41c8-ac9d-ab3ba87376e6\" (UID: \"deb4774b-fb50-41c8-ac9d-ab3ba87376e6\") " Nov 25 09:42:36 crc kubenswrapper[5043]: I1125 09:42:36.437515 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deb4774b-fb50-41c8-ac9d-ab3ba87376e6-utilities" (OuterVolumeSpecName: "utilities") pod 
"deb4774b-fb50-41c8-ac9d-ab3ba87376e6" (UID: "deb4774b-fb50-41c8-ac9d-ab3ba87376e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:42:36 crc kubenswrapper[5043]: I1125 09:42:36.443511 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deb4774b-fb50-41c8-ac9d-ab3ba87376e6-kube-api-access-bwfsw" (OuterVolumeSpecName: "kube-api-access-bwfsw") pod "deb4774b-fb50-41c8-ac9d-ab3ba87376e6" (UID: "deb4774b-fb50-41c8-ac9d-ab3ba87376e6"). InnerVolumeSpecName "kube-api-access-bwfsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:42:36 crc kubenswrapper[5043]: I1125 09:42:36.457237 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deb4774b-fb50-41c8-ac9d-ab3ba87376e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "deb4774b-fb50-41c8-ac9d-ab3ba87376e6" (UID: "deb4774b-fb50-41c8-ac9d-ab3ba87376e6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:42:36 crc kubenswrapper[5043]: I1125 09:42:36.538576 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwfsw\" (UniqueName: \"kubernetes.io/projected/deb4774b-fb50-41c8-ac9d-ab3ba87376e6-kube-api-access-bwfsw\") on node \"crc\" DevicePath \"\"" Nov 25 09:42:36 crc kubenswrapper[5043]: I1125 09:42:36.538624 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb4774b-fb50-41c8-ac9d-ab3ba87376e6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:42:36 crc kubenswrapper[5043]: I1125 09:42:36.538634 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deb4774b-fb50-41c8-ac9d-ab3ba87376e6-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:42:36 crc kubenswrapper[5043]: I1125 09:42:36.899252 5043 generic.go:334] "Generic (PLEG): container finished" podID="deb4774b-fb50-41c8-ac9d-ab3ba87376e6" containerID="70ee59307498bf487423449d997dd32cdbeb49af378b49d06b412fc3bc40b682" exitCode=0 Nov 25 09:42:36 crc kubenswrapper[5043]: I1125 09:42:36.899319 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x25fq" event={"ID":"deb4774b-fb50-41c8-ac9d-ab3ba87376e6","Type":"ContainerDied","Data":"70ee59307498bf487423449d997dd32cdbeb49af378b49d06b412fc3bc40b682"} Nov 25 09:42:36 crc kubenswrapper[5043]: I1125 09:42:36.899370 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x25fq" Nov 25 09:42:36 crc kubenswrapper[5043]: I1125 09:42:36.899630 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x25fq" event={"ID":"deb4774b-fb50-41c8-ac9d-ab3ba87376e6","Type":"ContainerDied","Data":"3416880efde89e9cb9e11930266d8331f65521ce64506b9b5cd83838feca4558"} Nov 25 09:42:36 crc kubenswrapper[5043]: I1125 09:42:36.899657 5043 scope.go:117] "RemoveContainer" containerID="70ee59307498bf487423449d997dd32cdbeb49af378b49d06b412fc3bc40b682" Nov 25 09:42:36 crc kubenswrapper[5043]: I1125 09:42:36.928951 5043 scope.go:117] "RemoveContainer" containerID="817b0ee48762add08b118f6d534c4f96605d2d88de9d826b5ba04b4a53deed33" Nov 25 09:42:36 crc kubenswrapper[5043]: I1125 09:42:36.956047 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x25fq"] Nov 25 09:42:36 crc kubenswrapper[5043]: I1125 09:42:36.965029 5043 scope.go:117] "RemoveContainer" containerID="18baa6ab9eaaab564746fc3da07765bd180eec2f440089b952ce51bb3dc93dbe" Nov 25 09:42:36 crc kubenswrapper[5043]: I1125 09:42:36.976877 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x25fq"] Nov 25 09:42:37 crc kubenswrapper[5043]: I1125 09:42:37.022549 5043 scope.go:117] "RemoveContainer" containerID="70ee59307498bf487423449d997dd32cdbeb49af378b49d06b412fc3bc40b682" Nov 25 09:42:37 crc kubenswrapper[5043]: E1125 09:42:37.023130 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70ee59307498bf487423449d997dd32cdbeb49af378b49d06b412fc3bc40b682\": container with ID starting with 70ee59307498bf487423449d997dd32cdbeb49af378b49d06b412fc3bc40b682 not found: ID does not exist" containerID="70ee59307498bf487423449d997dd32cdbeb49af378b49d06b412fc3bc40b682" Nov 25 09:42:37 crc kubenswrapper[5043]: I1125 09:42:37.023171 5043 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70ee59307498bf487423449d997dd32cdbeb49af378b49d06b412fc3bc40b682"} err="failed to get container status \"70ee59307498bf487423449d997dd32cdbeb49af378b49d06b412fc3bc40b682\": rpc error: code = NotFound desc = could not find container \"70ee59307498bf487423449d997dd32cdbeb49af378b49d06b412fc3bc40b682\": container with ID starting with 70ee59307498bf487423449d997dd32cdbeb49af378b49d06b412fc3bc40b682 not found: ID does not exist" Nov 25 09:42:37 crc kubenswrapper[5043]: I1125 09:42:37.023199 5043 scope.go:117] "RemoveContainer" containerID="817b0ee48762add08b118f6d534c4f96605d2d88de9d826b5ba04b4a53deed33" Nov 25 09:42:37 crc kubenswrapper[5043]: E1125 09:42:37.023591 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"817b0ee48762add08b118f6d534c4f96605d2d88de9d826b5ba04b4a53deed33\": container with ID starting with 817b0ee48762add08b118f6d534c4f96605d2d88de9d826b5ba04b4a53deed33 not found: ID does not exist" containerID="817b0ee48762add08b118f6d534c4f96605d2d88de9d826b5ba04b4a53deed33" Nov 25 09:42:37 crc kubenswrapper[5043]: I1125 09:42:37.023659 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"817b0ee48762add08b118f6d534c4f96605d2d88de9d826b5ba04b4a53deed33"} err="failed to get container status \"817b0ee48762add08b118f6d534c4f96605d2d88de9d826b5ba04b4a53deed33\": rpc error: code = NotFound desc = could not find container \"817b0ee48762add08b118f6d534c4f96605d2d88de9d826b5ba04b4a53deed33\": container with ID starting with 817b0ee48762add08b118f6d534c4f96605d2d88de9d826b5ba04b4a53deed33 not found: ID does not exist" Nov 25 09:42:37 crc kubenswrapper[5043]: I1125 09:42:37.023678 5043 scope.go:117] "RemoveContainer" containerID="18baa6ab9eaaab564746fc3da07765bd180eec2f440089b952ce51bb3dc93dbe" Nov 25 09:42:37 crc kubenswrapper[5043]: E1125 
09:42:37.024097 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18baa6ab9eaaab564746fc3da07765bd180eec2f440089b952ce51bb3dc93dbe\": container with ID starting with 18baa6ab9eaaab564746fc3da07765bd180eec2f440089b952ce51bb3dc93dbe not found: ID does not exist" containerID="18baa6ab9eaaab564746fc3da07765bd180eec2f440089b952ce51bb3dc93dbe" Nov 25 09:42:37 crc kubenswrapper[5043]: I1125 09:42:37.024171 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18baa6ab9eaaab564746fc3da07765bd180eec2f440089b952ce51bb3dc93dbe"} err="failed to get container status \"18baa6ab9eaaab564746fc3da07765bd180eec2f440089b952ce51bb3dc93dbe\": rpc error: code = NotFound desc = could not find container \"18baa6ab9eaaab564746fc3da07765bd180eec2f440089b952ce51bb3dc93dbe\": container with ID starting with 18baa6ab9eaaab564746fc3da07765bd180eec2f440089b952ce51bb3dc93dbe not found: ID does not exist" Nov 25 09:42:38 crc kubenswrapper[5043]: I1125 09:42:38.975488 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deb4774b-fb50-41c8-ac9d-ab3ba87376e6" path="/var/lib/kubelet/pods/deb4774b-fb50-41c8-ac9d-ab3ba87376e6/volumes" Nov 25 09:42:40 crc kubenswrapper[5043]: I1125 09:42:40.940232 5043 generic.go:334] "Generic (PLEG): container finished" podID="998025b2-ce6c-47f2-983d-a5f4215c1bd9" containerID="61fc30633e99fa438a72b954f1d0a3008f82f577f30e0edb9284ceeeb4b2f6d4" exitCode=0 Nov 25 09:42:40 crc kubenswrapper[5043]: I1125 09:42:40.940374 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vfwzj/must-gather-htgmv" event={"ID":"998025b2-ce6c-47f2-983d-a5f4215c1bd9","Type":"ContainerDied","Data":"61fc30633e99fa438a72b954f1d0a3008f82f577f30e0edb9284ceeeb4b2f6d4"} Nov 25 09:42:40 crc kubenswrapper[5043]: I1125 09:42:40.941537 5043 scope.go:117] "RemoveContainer" 
containerID="61fc30633e99fa438a72b954f1d0a3008f82f577f30e0edb9284ceeeb4b2f6d4" Nov 25 09:42:41 crc kubenswrapper[5043]: I1125 09:42:41.364046 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vfwzj_must-gather-htgmv_998025b2-ce6c-47f2-983d-a5f4215c1bd9/gather/0.log" Nov 25 09:42:49 crc kubenswrapper[5043]: I1125 09:42:49.624463 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vfwzj/must-gather-htgmv"] Nov 25 09:42:49 crc kubenswrapper[5043]: I1125 09:42:49.625261 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-vfwzj/must-gather-htgmv" podUID="998025b2-ce6c-47f2-983d-a5f4215c1bd9" containerName="copy" containerID="cri-o://901669183c4d36d2f56df153ae4a97f45017ed2ee783a0aa9cba1de0c8c4fb1d" gracePeriod=2 Nov 25 09:42:49 crc kubenswrapper[5043]: I1125 09:42:49.635294 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vfwzj/must-gather-htgmv"] Nov 25 09:42:50 crc kubenswrapper[5043]: I1125 09:42:50.032354 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vfwzj_must-gather-htgmv_998025b2-ce6c-47f2-983d-a5f4215c1bd9/copy/0.log" Nov 25 09:42:50 crc kubenswrapper[5043]: I1125 09:42:50.033030 5043 generic.go:334] "Generic (PLEG): container finished" podID="998025b2-ce6c-47f2-983d-a5f4215c1bd9" containerID="901669183c4d36d2f56df153ae4a97f45017ed2ee783a0aa9cba1de0c8c4fb1d" exitCode=143 Nov 25 09:42:50 crc kubenswrapper[5043]: I1125 09:42:50.033074 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f91a70e4f6673f41faf9985f75e86f6a54d3c83bba45aa5fae9d77dc995ce8d6" Nov 25 09:42:50 crc kubenswrapper[5043]: I1125 09:42:50.113872 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vfwzj_must-gather-htgmv_998025b2-ce6c-47f2-983d-a5f4215c1bd9/copy/0.log" Nov 25 09:42:50 crc kubenswrapper[5043]: I1125 09:42:50.114305 5043 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vfwzj/must-gather-htgmv" Nov 25 09:42:50 crc kubenswrapper[5043]: I1125 09:42:50.210193 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/998025b2-ce6c-47f2-983d-a5f4215c1bd9-must-gather-output\") pod \"998025b2-ce6c-47f2-983d-a5f4215c1bd9\" (UID: \"998025b2-ce6c-47f2-983d-a5f4215c1bd9\") " Nov 25 09:42:50 crc kubenswrapper[5043]: I1125 09:42:50.210502 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkrqb\" (UniqueName: \"kubernetes.io/projected/998025b2-ce6c-47f2-983d-a5f4215c1bd9-kube-api-access-xkrqb\") pod \"998025b2-ce6c-47f2-983d-a5f4215c1bd9\" (UID: \"998025b2-ce6c-47f2-983d-a5f4215c1bd9\") " Nov 25 09:42:50 crc kubenswrapper[5043]: I1125 09:42:50.216197 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/998025b2-ce6c-47f2-983d-a5f4215c1bd9-kube-api-access-xkrqb" (OuterVolumeSpecName: "kube-api-access-xkrqb") pod "998025b2-ce6c-47f2-983d-a5f4215c1bd9" (UID: "998025b2-ce6c-47f2-983d-a5f4215c1bd9"). InnerVolumeSpecName "kube-api-access-xkrqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:42:50 crc kubenswrapper[5043]: I1125 09:42:50.313068 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkrqb\" (UniqueName: \"kubernetes.io/projected/998025b2-ce6c-47f2-983d-a5f4215c1bd9-kube-api-access-xkrqb\") on node \"crc\" DevicePath \"\"" Nov 25 09:42:50 crc kubenswrapper[5043]: I1125 09:42:50.389443 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/998025b2-ce6c-47f2-983d-a5f4215c1bd9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "998025b2-ce6c-47f2-983d-a5f4215c1bd9" (UID: "998025b2-ce6c-47f2-983d-a5f4215c1bd9"). 
InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:42:50 crc kubenswrapper[5043]: I1125 09:42:50.415816 5043 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/998025b2-ce6c-47f2-983d-a5f4215c1bd9-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 25 09:42:50 crc kubenswrapper[5043]: I1125 09:42:50.973444 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="998025b2-ce6c-47f2-983d-a5f4215c1bd9" path="/var/lib/kubelet/pods/998025b2-ce6c-47f2-983d-a5f4215c1bd9/volumes" Nov 25 09:42:51 crc kubenswrapper[5043]: I1125 09:42:51.044298 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vfwzj/must-gather-htgmv" Nov 25 09:42:56 crc kubenswrapper[5043]: I1125 09:42:56.181858 5043 scope.go:117] "RemoveContainer" containerID="61fc30633e99fa438a72b954f1d0a3008f82f577f30e0edb9284ceeeb4b2f6d4" Nov 25 09:42:56 crc kubenswrapper[5043]: I1125 09:42:56.313987 5043 scope.go:117] "RemoveContainer" containerID="901669183c4d36d2f56df153ae4a97f45017ed2ee783a0aa9cba1de0c8c4fb1d" Nov 25 09:43:43 crc kubenswrapper[5043]: E1125 09:43:43.965139 5043 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 09:43:56 crc kubenswrapper[5043]: I1125 09:43:56.381489 5043 scope.go:117] "RemoveContainer" containerID="2eae8380b688d2219bda1b06c099dec0809c947930dd32856501dfdaaf504e32" Nov 25 09:44:17 crc kubenswrapper[5043]: I1125 09:44:17.276338 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Nov 25 09:44:17 crc kubenswrapper[5043]: I1125 09:44:17.277033 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:44:47 crc kubenswrapper[5043]: I1125 09:44:47.276581 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:44:47 crc kubenswrapper[5043]: I1125 09:44:47.277236 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:44:50 crc kubenswrapper[5043]: E1125 09:44:50.963048 5043 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 09:44:58 crc kubenswrapper[5043]: I1125 09:44:58.423584 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-99mdb"] Nov 25 09:44:58 crc kubenswrapper[5043]: E1125 09:44:58.439592 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="998025b2-ce6c-47f2-983d-a5f4215c1bd9" containerName="copy" Nov 25 09:44:58 crc kubenswrapper[5043]: I1125 09:44:58.439676 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="998025b2-ce6c-47f2-983d-a5f4215c1bd9" 
containerName="copy" Nov 25 09:44:58 crc kubenswrapper[5043]: E1125 09:44:58.439703 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb4774b-fb50-41c8-ac9d-ab3ba87376e6" containerName="extract-utilities" Nov 25 09:44:58 crc kubenswrapper[5043]: I1125 09:44:58.439714 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb4774b-fb50-41c8-ac9d-ab3ba87376e6" containerName="extract-utilities" Nov 25 09:44:58 crc kubenswrapper[5043]: E1125 09:44:58.439751 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb4774b-fb50-41c8-ac9d-ab3ba87376e6" containerName="extract-content" Nov 25 09:44:58 crc kubenswrapper[5043]: I1125 09:44:58.439762 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb4774b-fb50-41c8-ac9d-ab3ba87376e6" containerName="extract-content" Nov 25 09:44:58 crc kubenswrapper[5043]: E1125 09:44:58.439791 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb4774b-fb50-41c8-ac9d-ab3ba87376e6" containerName="registry-server" Nov 25 09:44:58 crc kubenswrapper[5043]: I1125 09:44:58.439803 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb4774b-fb50-41c8-ac9d-ab3ba87376e6" containerName="registry-server" Nov 25 09:44:58 crc kubenswrapper[5043]: E1125 09:44:58.439819 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="998025b2-ce6c-47f2-983d-a5f4215c1bd9" containerName="gather" Nov 25 09:44:58 crc kubenswrapper[5043]: I1125 09:44:58.439827 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="998025b2-ce6c-47f2-983d-a5f4215c1bd9" containerName="gather" Nov 25 09:44:58 crc kubenswrapper[5043]: I1125 09:44:58.440089 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="998025b2-ce6c-47f2-983d-a5f4215c1bd9" containerName="gather" Nov 25 09:44:58 crc kubenswrapper[5043]: I1125 09:44:58.440110 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="deb4774b-fb50-41c8-ac9d-ab3ba87376e6" containerName="registry-server" Nov 25 
09:44:58 crc kubenswrapper[5043]: I1125 09:44:58.440125 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="998025b2-ce6c-47f2-983d-a5f4215c1bd9" containerName="copy" Nov 25 09:44:58 crc kubenswrapper[5043]: I1125 09:44:58.442275 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-99mdb"] Nov 25 09:44:58 crc kubenswrapper[5043]: I1125 09:44:58.442423 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-99mdb" Nov 25 09:44:58 crc kubenswrapper[5043]: I1125 09:44:58.546183 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgzbj\" (UniqueName: \"kubernetes.io/projected/4a4be34d-ea0d-4ac2-9971-ac90903032d1-kube-api-access-cgzbj\") pod \"community-operators-99mdb\" (UID: \"4a4be34d-ea0d-4ac2-9971-ac90903032d1\") " pod="openshift-marketplace/community-operators-99mdb" Nov 25 09:44:58 crc kubenswrapper[5043]: I1125 09:44:58.546234 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a4be34d-ea0d-4ac2-9971-ac90903032d1-catalog-content\") pod \"community-operators-99mdb\" (UID: \"4a4be34d-ea0d-4ac2-9971-ac90903032d1\") " pod="openshift-marketplace/community-operators-99mdb" Nov 25 09:44:58 crc kubenswrapper[5043]: I1125 09:44:58.546278 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a4be34d-ea0d-4ac2-9971-ac90903032d1-utilities\") pod \"community-operators-99mdb\" (UID: \"4a4be34d-ea0d-4ac2-9971-ac90903032d1\") " pod="openshift-marketplace/community-operators-99mdb" Nov 25 09:44:58 crc kubenswrapper[5043]: I1125 09:44:58.648191 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgzbj\" (UniqueName: 
\"kubernetes.io/projected/4a4be34d-ea0d-4ac2-9971-ac90903032d1-kube-api-access-cgzbj\") pod \"community-operators-99mdb\" (UID: \"4a4be34d-ea0d-4ac2-9971-ac90903032d1\") " pod="openshift-marketplace/community-operators-99mdb" Nov 25 09:44:58 crc kubenswrapper[5043]: I1125 09:44:58.648256 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a4be34d-ea0d-4ac2-9971-ac90903032d1-catalog-content\") pod \"community-operators-99mdb\" (UID: \"4a4be34d-ea0d-4ac2-9971-ac90903032d1\") " pod="openshift-marketplace/community-operators-99mdb" Nov 25 09:44:58 crc kubenswrapper[5043]: I1125 09:44:58.648299 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a4be34d-ea0d-4ac2-9971-ac90903032d1-utilities\") pod \"community-operators-99mdb\" (UID: \"4a4be34d-ea0d-4ac2-9971-ac90903032d1\") " pod="openshift-marketplace/community-operators-99mdb" Nov 25 09:44:58 crc kubenswrapper[5043]: I1125 09:44:58.648943 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a4be34d-ea0d-4ac2-9971-ac90903032d1-utilities\") pod \"community-operators-99mdb\" (UID: \"4a4be34d-ea0d-4ac2-9971-ac90903032d1\") " pod="openshift-marketplace/community-operators-99mdb" Nov 25 09:44:58 crc kubenswrapper[5043]: I1125 09:44:58.649260 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a4be34d-ea0d-4ac2-9971-ac90903032d1-catalog-content\") pod \"community-operators-99mdb\" (UID: \"4a4be34d-ea0d-4ac2-9971-ac90903032d1\") " pod="openshift-marketplace/community-operators-99mdb" Nov 25 09:44:58 crc kubenswrapper[5043]: I1125 09:44:58.675420 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgzbj\" (UniqueName: 
\"kubernetes.io/projected/4a4be34d-ea0d-4ac2-9971-ac90903032d1-kube-api-access-cgzbj\") pod \"community-operators-99mdb\" (UID: \"4a4be34d-ea0d-4ac2-9971-ac90903032d1\") " pod="openshift-marketplace/community-operators-99mdb" Nov 25 09:44:58 crc kubenswrapper[5043]: I1125 09:44:58.771500 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-99mdb" Nov 25 09:44:59 crc kubenswrapper[5043]: I1125 09:44:59.338690 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-99mdb"] Nov 25 09:45:00 crc kubenswrapper[5043]: I1125 09:45:00.186098 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401065-z527j"] Nov 25 09:45:00 crc kubenswrapper[5043]: I1125 09:45:00.188166 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-z527j" Nov 25 09:45:00 crc kubenswrapper[5043]: I1125 09:45:00.194140 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 09:45:00 crc kubenswrapper[5043]: I1125 09:45:00.194274 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 09:45:00 crc kubenswrapper[5043]: I1125 09:45:00.201532 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401065-z527j"] Nov 25 09:45:00 crc kubenswrapper[5043]: I1125 09:45:00.278152 5043 generic.go:334] "Generic (PLEG): container finished" podID="4a4be34d-ea0d-4ac2-9971-ac90903032d1" containerID="c1b7d8c6b8586e6c03b85a11d8183b0c5d41461ee011311848e31e4d380a5d48" exitCode=0 Nov 25 09:45:00 crc kubenswrapper[5043]: I1125 09:45:00.278205 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-99mdb" event={"ID":"4a4be34d-ea0d-4ac2-9971-ac90903032d1","Type":"ContainerDied","Data":"c1b7d8c6b8586e6c03b85a11d8183b0c5d41461ee011311848e31e4d380a5d48"} Nov 25 09:45:00 crc kubenswrapper[5043]: I1125 09:45:00.278235 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99mdb" event={"ID":"4a4be34d-ea0d-4ac2-9971-ac90903032d1","Type":"ContainerStarted","Data":"c690ce5603d49f8c844780ed8eef917b08a2249c61456e77ddd07700c9f74a23"} Nov 25 09:45:00 crc kubenswrapper[5043]: I1125 09:45:00.279141 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqvz9\" (UniqueName: \"kubernetes.io/projected/cb7107af-afc3-4c0b-bb84-f71fa8cf7751-kube-api-access-dqvz9\") pod \"collect-profiles-29401065-z527j\" (UID: \"cb7107af-afc3-4c0b-bb84-f71fa8cf7751\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-z527j" Nov 25 09:45:00 crc kubenswrapper[5043]: I1125 09:45:00.279431 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb7107af-afc3-4c0b-bb84-f71fa8cf7751-config-volume\") pod \"collect-profiles-29401065-z527j\" (UID: \"cb7107af-afc3-4c0b-bb84-f71fa8cf7751\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-z527j" Nov 25 09:45:00 crc kubenswrapper[5043]: I1125 09:45:00.279472 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb7107af-afc3-4c0b-bb84-f71fa8cf7751-secret-volume\") pod \"collect-profiles-29401065-z527j\" (UID: \"cb7107af-afc3-4c0b-bb84-f71fa8cf7751\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-z527j" Nov 25 09:45:00 crc kubenswrapper[5043]: I1125 09:45:00.381783 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-dqvz9\" (UniqueName: \"kubernetes.io/projected/cb7107af-afc3-4c0b-bb84-f71fa8cf7751-kube-api-access-dqvz9\") pod \"collect-profiles-29401065-z527j\" (UID: \"cb7107af-afc3-4c0b-bb84-f71fa8cf7751\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-z527j" Nov 25 09:45:00 crc kubenswrapper[5043]: I1125 09:45:00.382245 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb7107af-afc3-4c0b-bb84-f71fa8cf7751-config-volume\") pod \"collect-profiles-29401065-z527j\" (UID: \"cb7107af-afc3-4c0b-bb84-f71fa8cf7751\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-z527j" Nov 25 09:45:00 crc kubenswrapper[5043]: I1125 09:45:00.382271 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb7107af-afc3-4c0b-bb84-f71fa8cf7751-secret-volume\") pod \"collect-profiles-29401065-z527j\" (UID: \"cb7107af-afc3-4c0b-bb84-f71fa8cf7751\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-z527j" Nov 25 09:45:00 crc kubenswrapper[5043]: I1125 09:45:00.383513 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb7107af-afc3-4c0b-bb84-f71fa8cf7751-config-volume\") pod \"collect-profiles-29401065-z527j\" (UID: \"cb7107af-afc3-4c0b-bb84-f71fa8cf7751\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-z527j" Nov 25 09:45:00 crc kubenswrapper[5043]: I1125 09:45:00.389638 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb7107af-afc3-4c0b-bb84-f71fa8cf7751-secret-volume\") pod \"collect-profiles-29401065-z527j\" (UID: \"cb7107af-afc3-4c0b-bb84-f71fa8cf7751\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-z527j" Nov 25 09:45:00 crc kubenswrapper[5043]: 
I1125 09:45:00.401136 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqvz9\" (UniqueName: \"kubernetes.io/projected/cb7107af-afc3-4c0b-bb84-f71fa8cf7751-kube-api-access-dqvz9\") pod \"collect-profiles-29401065-z527j\" (UID: \"cb7107af-afc3-4c0b-bb84-f71fa8cf7751\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-z527j" Nov 25 09:45:00 crc kubenswrapper[5043]: I1125 09:45:00.516388 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-z527j" Nov 25 09:45:00 crc kubenswrapper[5043]: W1125 09:45:00.971028 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb7107af_afc3_4c0b_bb84_f71fa8cf7751.slice/crio-77a8b2a4ce9b7d2b0580e0e657fb9355280207e03057a8c76bc0def217acc829 WatchSource:0}: Error finding container 77a8b2a4ce9b7d2b0580e0e657fb9355280207e03057a8c76bc0def217acc829: Status 404 returned error can't find the container with id 77a8b2a4ce9b7d2b0580e0e657fb9355280207e03057a8c76bc0def217acc829 Nov 25 09:45:00 crc kubenswrapper[5043]: I1125 09:45:00.986468 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401065-z527j"] Nov 25 09:45:01 crc kubenswrapper[5043]: I1125 09:45:01.014925 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x9p9l"] Nov 25 09:45:01 crc kubenswrapper[5043]: I1125 09:45:01.018485 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x9p9l" Nov 25 09:45:01 crc kubenswrapper[5043]: I1125 09:45:01.035376 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x9p9l"] Nov 25 09:45:01 crc kubenswrapper[5043]: I1125 09:45:01.096739 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92edf065-65e9-46da-9cc2-ac7a63074725-utilities\") pod \"certified-operators-x9p9l\" (UID: \"92edf065-65e9-46da-9cc2-ac7a63074725\") " pod="openshift-marketplace/certified-operators-x9p9l" Nov 25 09:45:01 crc kubenswrapper[5043]: I1125 09:45:01.096856 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92edf065-65e9-46da-9cc2-ac7a63074725-catalog-content\") pod \"certified-operators-x9p9l\" (UID: \"92edf065-65e9-46da-9cc2-ac7a63074725\") " pod="openshift-marketplace/certified-operators-x9p9l" Nov 25 09:45:01 crc kubenswrapper[5043]: I1125 09:45:01.096922 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knkhm\" (UniqueName: \"kubernetes.io/projected/92edf065-65e9-46da-9cc2-ac7a63074725-kube-api-access-knkhm\") pod \"certified-operators-x9p9l\" (UID: \"92edf065-65e9-46da-9cc2-ac7a63074725\") " pod="openshift-marketplace/certified-operators-x9p9l" Nov 25 09:45:01 crc kubenswrapper[5043]: I1125 09:45:01.198717 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92edf065-65e9-46da-9cc2-ac7a63074725-utilities\") pod \"certified-operators-x9p9l\" (UID: \"92edf065-65e9-46da-9cc2-ac7a63074725\") " pod="openshift-marketplace/certified-operators-x9p9l" Nov 25 09:45:01 crc kubenswrapper[5043]: I1125 09:45:01.199150 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92edf065-65e9-46da-9cc2-ac7a63074725-catalog-content\") pod \"certified-operators-x9p9l\" (UID: \"92edf065-65e9-46da-9cc2-ac7a63074725\") " pod="openshift-marketplace/certified-operators-x9p9l" Nov 25 09:45:01 crc kubenswrapper[5043]: I1125 09:45:01.199218 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92edf065-65e9-46da-9cc2-ac7a63074725-utilities\") pod \"certified-operators-x9p9l\" (UID: \"92edf065-65e9-46da-9cc2-ac7a63074725\") " pod="openshift-marketplace/certified-operators-x9p9l" Nov 25 09:45:01 crc kubenswrapper[5043]: I1125 09:45:01.199225 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knkhm\" (UniqueName: \"kubernetes.io/projected/92edf065-65e9-46da-9cc2-ac7a63074725-kube-api-access-knkhm\") pod \"certified-operators-x9p9l\" (UID: \"92edf065-65e9-46da-9cc2-ac7a63074725\") " pod="openshift-marketplace/certified-operators-x9p9l" Nov 25 09:45:01 crc kubenswrapper[5043]: I1125 09:45:01.199661 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92edf065-65e9-46da-9cc2-ac7a63074725-catalog-content\") pod \"certified-operators-x9p9l\" (UID: \"92edf065-65e9-46da-9cc2-ac7a63074725\") " pod="openshift-marketplace/certified-operators-x9p9l" Nov 25 09:45:01 crc kubenswrapper[5043]: I1125 09:45:01.222007 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knkhm\" (UniqueName: \"kubernetes.io/projected/92edf065-65e9-46da-9cc2-ac7a63074725-kube-api-access-knkhm\") pod \"certified-operators-x9p9l\" (UID: \"92edf065-65e9-46da-9cc2-ac7a63074725\") " pod="openshift-marketplace/certified-operators-x9p9l" Nov 25 09:45:01 crc kubenswrapper[5043]: I1125 09:45:01.293195 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-z527j" event={"ID":"cb7107af-afc3-4c0b-bb84-f71fa8cf7751","Type":"ContainerStarted","Data":"06f8505f26a0f0275eda3d7d219c5b0fb18aa2fb4f198a425dfd4f1e1e570f74"} Nov 25 09:45:01 crc kubenswrapper[5043]: I1125 09:45:01.293247 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-z527j" event={"ID":"cb7107af-afc3-4c0b-bb84-f71fa8cf7751","Type":"ContainerStarted","Data":"77a8b2a4ce9b7d2b0580e0e657fb9355280207e03057a8c76bc0def217acc829"} Nov 25 09:45:01 crc kubenswrapper[5043]: I1125 09:45:01.308303 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-z527j" podStartSLOduration=1.308288136 podStartE2EDuration="1.308288136s" podCreationTimestamp="2025-11-25 09:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 09:45:01.306078766 +0000 UTC m=+8965.474274487" watchObservedRunningTime="2025-11-25 09:45:01.308288136 +0000 UTC m=+8965.476483857" Nov 25 09:45:01 crc kubenswrapper[5043]: I1125 09:45:01.387623 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x9p9l" Nov 25 09:45:01 crc kubenswrapper[5043]: W1125 09:45:01.950433 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92edf065_65e9_46da_9cc2_ac7a63074725.slice/crio-b62a67d6992c73d9381a7df986832d77034d4050aaa9b809230c493265ea291e WatchSource:0}: Error finding container b62a67d6992c73d9381a7df986832d77034d4050aaa9b809230c493265ea291e: Status 404 returned error can't find the container with id b62a67d6992c73d9381a7df986832d77034d4050aaa9b809230c493265ea291e Nov 25 09:45:01 crc kubenswrapper[5043]: I1125 09:45:01.951721 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x9p9l"] Nov 25 09:45:02 crc kubenswrapper[5043]: I1125 09:45:02.306767 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99mdb" event={"ID":"4a4be34d-ea0d-4ac2-9971-ac90903032d1","Type":"ContainerStarted","Data":"8ee3fa5bff35e450bac3038e5e55879d5a9ec6c4b94ab592c862693fb317fbf7"} Nov 25 09:45:02 crc kubenswrapper[5043]: I1125 09:45:02.309283 5043 generic.go:334] "Generic (PLEG): container finished" podID="cb7107af-afc3-4c0b-bb84-f71fa8cf7751" containerID="06f8505f26a0f0275eda3d7d219c5b0fb18aa2fb4f198a425dfd4f1e1e570f74" exitCode=0 Nov 25 09:45:02 crc kubenswrapper[5043]: I1125 09:45:02.309346 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-z527j" event={"ID":"cb7107af-afc3-4c0b-bb84-f71fa8cf7751","Type":"ContainerDied","Data":"06f8505f26a0f0275eda3d7d219c5b0fb18aa2fb4f198a425dfd4f1e1e570f74"} Nov 25 09:45:02 crc kubenswrapper[5043]: I1125 09:45:02.311712 5043 generic.go:334] "Generic (PLEG): container finished" podID="92edf065-65e9-46da-9cc2-ac7a63074725" containerID="886bc2b12f078018a1a51b3b281b8e77d610fe50b1c484e4ead8ec18426c8369" exitCode=0 Nov 25 09:45:02 crc 
kubenswrapper[5043]: I1125 09:45:02.311779 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x9p9l" event={"ID":"92edf065-65e9-46da-9cc2-ac7a63074725","Type":"ContainerDied","Data":"886bc2b12f078018a1a51b3b281b8e77d610fe50b1c484e4ead8ec18426c8369"} Nov 25 09:45:02 crc kubenswrapper[5043]: I1125 09:45:02.311818 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x9p9l" event={"ID":"92edf065-65e9-46da-9cc2-ac7a63074725","Type":"ContainerStarted","Data":"b62a67d6992c73d9381a7df986832d77034d4050aaa9b809230c493265ea291e"} Nov 25 09:45:03 crc kubenswrapper[5043]: I1125 09:45:03.707821 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-z527j" Nov 25 09:45:03 crc kubenswrapper[5043]: I1125 09:45:03.769101 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqvz9\" (UniqueName: \"kubernetes.io/projected/cb7107af-afc3-4c0b-bb84-f71fa8cf7751-kube-api-access-dqvz9\") pod \"cb7107af-afc3-4c0b-bb84-f71fa8cf7751\" (UID: \"cb7107af-afc3-4c0b-bb84-f71fa8cf7751\") " Nov 25 09:45:03 crc kubenswrapper[5043]: I1125 09:45:03.769221 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb7107af-afc3-4c0b-bb84-f71fa8cf7751-config-volume\") pod \"cb7107af-afc3-4c0b-bb84-f71fa8cf7751\" (UID: \"cb7107af-afc3-4c0b-bb84-f71fa8cf7751\") " Nov 25 09:45:03 crc kubenswrapper[5043]: I1125 09:45:03.769297 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb7107af-afc3-4c0b-bb84-f71fa8cf7751-secret-volume\") pod \"cb7107af-afc3-4c0b-bb84-f71fa8cf7751\" (UID: \"cb7107af-afc3-4c0b-bb84-f71fa8cf7751\") " Nov 25 09:45:03 crc kubenswrapper[5043]: I1125 09:45:03.769929 5043 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb7107af-afc3-4c0b-bb84-f71fa8cf7751-config-volume" (OuterVolumeSpecName: "config-volume") pod "cb7107af-afc3-4c0b-bb84-f71fa8cf7751" (UID: "cb7107af-afc3-4c0b-bb84-f71fa8cf7751"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 09:45:03 crc kubenswrapper[5043]: I1125 09:45:03.770152 5043 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb7107af-afc3-4c0b-bb84-f71fa8cf7751-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 09:45:03 crc kubenswrapper[5043]: I1125 09:45:03.776948 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb7107af-afc3-4c0b-bb84-f71fa8cf7751-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cb7107af-afc3-4c0b-bb84-f71fa8cf7751" (UID: "cb7107af-afc3-4c0b-bb84-f71fa8cf7751"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 09:45:03 crc kubenswrapper[5043]: I1125 09:45:03.777123 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb7107af-afc3-4c0b-bb84-f71fa8cf7751-kube-api-access-dqvz9" (OuterVolumeSpecName: "kube-api-access-dqvz9") pod "cb7107af-afc3-4c0b-bb84-f71fa8cf7751" (UID: "cb7107af-afc3-4c0b-bb84-f71fa8cf7751"). InnerVolumeSpecName "kube-api-access-dqvz9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:45:03 crc kubenswrapper[5043]: I1125 09:45:03.872215 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqvz9\" (UniqueName: \"kubernetes.io/projected/cb7107af-afc3-4c0b-bb84-f71fa8cf7751-kube-api-access-dqvz9\") on node \"crc\" DevicePath \"\"" Nov 25 09:45:03 crc kubenswrapper[5043]: I1125 09:45:03.872245 5043 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb7107af-afc3-4c0b-bb84-f71fa8cf7751-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 09:45:04 crc kubenswrapper[5043]: I1125 09:45:04.336511 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-z527j" event={"ID":"cb7107af-afc3-4c0b-bb84-f71fa8cf7751","Type":"ContainerDied","Data":"77a8b2a4ce9b7d2b0580e0e657fb9355280207e03057a8c76bc0def217acc829"} Nov 25 09:45:04 crc kubenswrapper[5043]: I1125 09:45:04.336559 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77a8b2a4ce9b7d2b0580e0e657fb9355280207e03057a8c76bc0def217acc829" Nov 25 09:45:04 crc kubenswrapper[5043]: I1125 09:45:04.336566 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401065-z527j" Nov 25 09:45:04 crc kubenswrapper[5043]: I1125 09:45:04.339348 5043 generic.go:334] "Generic (PLEG): container finished" podID="92edf065-65e9-46da-9cc2-ac7a63074725" containerID="529f25a5869ffecb39bfccbb57f437836269e4b3f04427975321d4286c37564b" exitCode=0 Nov 25 09:45:04 crc kubenswrapper[5043]: I1125 09:45:04.339412 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x9p9l" event={"ID":"92edf065-65e9-46da-9cc2-ac7a63074725","Type":"ContainerDied","Data":"529f25a5869ffecb39bfccbb57f437836269e4b3f04427975321d4286c37564b"} Nov 25 09:45:04 crc kubenswrapper[5043]: I1125 09:45:04.343409 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99mdb" event={"ID":"4a4be34d-ea0d-4ac2-9971-ac90903032d1","Type":"ContainerDied","Data":"8ee3fa5bff35e450bac3038e5e55879d5a9ec6c4b94ab592c862693fb317fbf7"} Nov 25 09:45:04 crc kubenswrapper[5043]: I1125 09:45:04.345068 5043 generic.go:334] "Generic (PLEG): container finished" podID="4a4be34d-ea0d-4ac2-9971-ac90903032d1" containerID="8ee3fa5bff35e450bac3038e5e55879d5a9ec6c4b94ab592c862693fb317fbf7" exitCode=0 Nov 25 09:45:04 crc kubenswrapper[5043]: I1125 09:45:04.405586 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401020-92nb7"] Nov 25 09:45:04 crc kubenswrapper[5043]: I1125 09:45:04.439834 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401020-92nb7"] Nov 25 09:45:04 crc kubenswrapper[5043]: I1125 09:45:04.975231 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a11ac6f6-3366-4333-98e5-dfcc01bb7ab9" path="/var/lib/kubelet/pods/a11ac6f6-3366-4333-98e5-dfcc01bb7ab9/volumes" Nov 25 09:45:05 crc kubenswrapper[5043]: I1125 09:45:05.357072 5043 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-99mdb" event={"ID":"4a4be34d-ea0d-4ac2-9971-ac90903032d1","Type":"ContainerStarted","Data":"8c6c0900be92fd1e2ebb652829a13393f5cfbf0087b3e13e9b46353257505faf"} Nov 25 09:45:05 crc kubenswrapper[5043]: I1125 09:45:05.361560 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x9p9l" event={"ID":"92edf065-65e9-46da-9cc2-ac7a63074725","Type":"ContainerStarted","Data":"e9e3b939c37043e7e1cefe565fbb2b1c9a9c1a75524de5300cbd2e3905c300e6"} Nov 25 09:45:05 crc kubenswrapper[5043]: I1125 09:45:05.387354 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-99mdb" podStartSLOduration=2.890051495 podStartE2EDuration="7.387332147s" podCreationTimestamp="2025-11-25 09:44:58 +0000 UTC" firstStartedPulling="2025-11-25 09:45:00.279887616 +0000 UTC m=+8964.448083337" lastFinishedPulling="2025-11-25 09:45:04.777168228 +0000 UTC m=+8968.945363989" observedRunningTime="2025-11-25 09:45:05.374740467 +0000 UTC m=+8969.542936208" watchObservedRunningTime="2025-11-25 09:45:05.387332147 +0000 UTC m=+8969.555527858" Nov 25 09:45:05 crc kubenswrapper[5043]: I1125 09:45:05.397029 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x9p9l" podStartSLOduration=2.802204864 podStartE2EDuration="5.397005638s" podCreationTimestamp="2025-11-25 09:45:00 +0000 UTC" firstStartedPulling="2025-11-25 09:45:02.312985858 +0000 UTC m=+8966.481181579" lastFinishedPulling="2025-11-25 09:45:04.907786632 +0000 UTC m=+8969.075982353" observedRunningTime="2025-11-25 09:45:05.396629998 +0000 UTC m=+8969.564825729" watchObservedRunningTime="2025-11-25 09:45:05.397005638 +0000 UTC m=+8969.565201359" Nov 25 09:45:08 crc kubenswrapper[5043]: I1125 09:45:08.771853 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-99mdb" Nov 25 09:45:08 crc kubenswrapper[5043]: I1125 09:45:08.772320 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-99mdb" Nov 25 09:45:08 crc kubenswrapper[5043]: I1125 09:45:08.825124 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-99mdb" Nov 25 09:45:11 crc kubenswrapper[5043]: I1125 09:45:11.387891 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x9p9l" Nov 25 09:45:11 crc kubenswrapper[5043]: I1125 09:45:11.388237 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x9p9l" Nov 25 09:45:11 crc kubenswrapper[5043]: I1125 09:45:11.437406 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x9p9l" Nov 25 09:45:11 crc kubenswrapper[5043]: I1125 09:45:11.487280 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x9p9l" Nov 25 09:45:11 crc kubenswrapper[5043]: I1125 09:45:11.993709 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x9p9l"] Nov 25 09:45:13 crc kubenswrapper[5043]: I1125 09:45:13.441479 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x9p9l" podUID="92edf065-65e9-46da-9cc2-ac7a63074725" containerName="registry-server" containerID="cri-o://e9e3b939c37043e7e1cefe565fbb2b1c9a9c1a75524de5300cbd2e3905c300e6" gracePeriod=2 Nov 25 09:45:13 crc kubenswrapper[5043]: I1125 09:45:13.939270 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x9p9l" Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.081190 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knkhm\" (UniqueName: \"kubernetes.io/projected/92edf065-65e9-46da-9cc2-ac7a63074725-kube-api-access-knkhm\") pod \"92edf065-65e9-46da-9cc2-ac7a63074725\" (UID: \"92edf065-65e9-46da-9cc2-ac7a63074725\") " Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.081399 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92edf065-65e9-46da-9cc2-ac7a63074725-catalog-content\") pod \"92edf065-65e9-46da-9cc2-ac7a63074725\" (UID: \"92edf065-65e9-46da-9cc2-ac7a63074725\") " Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.081443 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92edf065-65e9-46da-9cc2-ac7a63074725-utilities\") pod \"92edf065-65e9-46da-9cc2-ac7a63074725\" (UID: \"92edf065-65e9-46da-9cc2-ac7a63074725\") " Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.082396 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92edf065-65e9-46da-9cc2-ac7a63074725-utilities" (OuterVolumeSpecName: "utilities") pod "92edf065-65e9-46da-9cc2-ac7a63074725" (UID: "92edf065-65e9-46da-9cc2-ac7a63074725"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.113068 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92edf065-65e9-46da-9cc2-ac7a63074725-kube-api-access-knkhm" (OuterVolumeSpecName: "kube-api-access-knkhm") pod "92edf065-65e9-46da-9cc2-ac7a63074725" (UID: "92edf065-65e9-46da-9cc2-ac7a63074725"). InnerVolumeSpecName "kube-api-access-knkhm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.159169 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92edf065-65e9-46da-9cc2-ac7a63074725-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92edf065-65e9-46da-9cc2-ac7a63074725" (UID: "92edf065-65e9-46da-9cc2-ac7a63074725"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.183417 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92edf065-65e9-46da-9cc2-ac7a63074725-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.183444 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92edf065-65e9-46da-9cc2-ac7a63074725-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.183454 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knkhm\" (UniqueName: \"kubernetes.io/projected/92edf065-65e9-46da-9cc2-ac7a63074725-kube-api-access-knkhm\") on node \"crc\" DevicePath \"\"" Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.463544 5043 generic.go:334] "Generic (PLEG): container finished" podID="92edf065-65e9-46da-9cc2-ac7a63074725" containerID="e9e3b939c37043e7e1cefe565fbb2b1c9a9c1a75524de5300cbd2e3905c300e6" exitCode=0 Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.463790 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x9p9l" event={"ID":"92edf065-65e9-46da-9cc2-ac7a63074725","Type":"ContainerDied","Data":"e9e3b939c37043e7e1cefe565fbb2b1c9a9c1a75524de5300cbd2e3905c300e6"} Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.463900 5043 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x9p9l" Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.463906 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x9p9l" event={"ID":"92edf065-65e9-46da-9cc2-ac7a63074725","Type":"ContainerDied","Data":"b62a67d6992c73d9381a7df986832d77034d4050aaa9b809230c493265ea291e"} Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.463939 5043 scope.go:117] "RemoveContainer" containerID="e9e3b939c37043e7e1cefe565fbb2b1c9a9c1a75524de5300cbd2e3905c300e6" Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.500038 5043 scope.go:117] "RemoveContainer" containerID="529f25a5869ffecb39bfccbb57f437836269e4b3f04427975321d4286c37564b" Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.505251 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x9p9l"] Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.524650 5043 scope.go:117] "RemoveContainer" containerID="886bc2b12f078018a1a51b3b281b8e77d610fe50b1c484e4ead8ec18426c8369" Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.529667 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x9p9l"] Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.576302 5043 scope.go:117] "RemoveContainer" containerID="e9e3b939c37043e7e1cefe565fbb2b1c9a9c1a75524de5300cbd2e3905c300e6" Nov 25 09:45:14 crc kubenswrapper[5043]: E1125 09:45:14.580587 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9e3b939c37043e7e1cefe565fbb2b1c9a9c1a75524de5300cbd2e3905c300e6\": container with ID starting with e9e3b939c37043e7e1cefe565fbb2b1c9a9c1a75524de5300cbd2e3905c300e6 not found: ID does not exist" containerID="e9e3b939c37043e7e1cefe565fbb2b1c9a9c1a75524de5300cbd2e3905c300e6" Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.580661 
5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9e3b939c37043e7e1cefe565fbb2b1c9a9c1a75524de5300cbd2e3905c300e6"} err="failed to get container status \"e9e3b939c37043e7e1cefe565fbb2b1c9a9c1a75524de5300cbd2e3905c300e6\": rpc error: code = NotFound desc = could not find container \"e9e3b939c37043e7e1cefe565fbb2b1c9a9c1a75524de5300cbd2e3905c300e6\": container with ID starting with e9e3b939c37043e7e1cefe565fbb2b1c9a9c1a75524de5300cbd2e3905c300e6 not found: ID does not exist" Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.580697 5043 scope.go:117] "RemoveContainer" containerID="529f25a5869ffecb39bfccbb57f437836269e4b3f04427975321d4286c37564b" Nov 25 09:45:14 crc kubenswrapper[5043]: E1125 09:45:14.581271 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"529f25a5869ffecb39bfccbb57f437836269e4b3f04427975321d4286c37564b\": container with ID starting with 529f25a5869ffecb39bfccbb57f437836269e4b3f04427975321d4286c37564b not found: ID does not exist" containerID="529f25a5869ffecb39bfccbb57f437836269e4b3f04427975321d4286c37564b" Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.581323 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"529f25a5869ffecb39bfccbb57f437836269e4b3f04427975321d4286c37564b"} err="failed to get container status \"529f25a5869ffecb39bfccbb57f437836269e4b3f04427975321d4286c37564b\": rpc error: code = NotFound desc = could not find container \"529f25a5869ffecb39bfccbb57f437836269e4b3f04427975321d4286c37564b\": container with ID starting with 529f25a5869ffecb39bfccbb57f437836269e4b3f04427975321d4286c37564b not found: ID does not exist" Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.581356 5043 scope.go:117] "RemoveContainer" containerID="886bc2b12f078018a1a51b3b281b8e77d610fe50b1c484e4ead8ec18426c8369" Nov 25 09:45:14 crc kubenswrapper[5043]: E1125 
09:45:14.582290 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"886bc2b12f078018a1a51b3b281b8e77d610fe50b1c484e4ead8ec18426c8369\": container with ID starting with 886bc2b12f078018a1a51b3b281b8e77d610fe50b1c484e4ead8ec18426c8369 not found: ID does not exist" containerID="886bc2b12f078018a1a51b3b281b8e77d610fe50b1c484e4ead8ec18426c8369" Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.582339 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"886bc2b12f078018a1a51b3b281b8e77d610fe50b1c484e4ead8ec18426c8369"} err="failed to get container status \"886bc2b12f078018a1a51b3b281b8e77d610fe50b1c484e4ead8ec18426c8369\": rpc error: code = NotFound desc = could not find container \"886bc2b12f078018a1a51b3b281b8e77d610fe50b1c484e4ead8ec18426c8369\": container with ID starting with 886bc2b12f078018a1a51b3b281b8e77d610fe50b1c484e4ead8ec18426c8369 not found: ID does not exist" Nov 25 09:45:14 crc kubenswrapper[5043]: I1125 09:45:14.972695 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92edf065-65e9-46da-9cc2-ac7a63074725" path="/var/lib/kubelet/pods/92edf065-65e9-46da-9cc2-ac7a63074725/volumes" Nov 25 09:45:17 crc kubenswrapper[5043]: I1125 09:45:17.276283 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:45:17 crc kubenswrapper[5043]: I1125 09:45:17.276547 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 25 09:45:17 crc kubenswrapper[5043]: I1125 09:45:17.276589 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 09:45:17 crc kubenswrapper[5043]: I1125 09:45:17.277374 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a130716f04cf2be78c4090558741c401be0edb499b64015e99218420408c0fd5"} pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 09:45:17 crc kubenswrapper[5043]: I1125 09:45:17.277427 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://a130716f04cf2be78c4090558741c401be0edb499b64015e99218420408c0fd5" gracePeriod=600 Nov 25 09:45:18 crc kubenswrapper[5043]: I1125 09:45:18.509789 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="a130716f04cf2be78c4090558741c401be0edb499b64015e99218420408c0fd5" exitCode=0 Nov 25 09:45:18 crc kubenswrapper[5043]: I1125 09:45:18.509943 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"a130716f04cf2be78c4090558741c401be0edb499b64015e99218420408c0fd5"} Nov 25 09:45:18 crc kubenswrapper[5043]: I1125 09:45:18.509970 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70"} Nov 25 09:45:18 crc 
kubenswrapper[5043]: I1125 09:45:18.509989 5043 scope.go:117] "RemoveContainer" containerID="9fab471056e3ab1677ed12d487a3dd4511894f93d84a96b96b67228e8efdc880" Nov 25 09:45:18 crc kubenswrapper[5043]: I1125 09:45:18.829545 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-99mdb" Nov 25 09:45:18 crc kubenswrapper[5043]: I1125 09:45:18.894921 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-99mdb"] Nov 25 09:45:19 crc kubenswrapper[5043]: I1125 09:45:19.522524 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-99mdb" podUID="4a4be34d-ea0d-4ac2-9971-ac90903032d1" containerName="registry-server" containerID="cri-o://8c6c0900be92fd1e2ebb652829a13393f5cfbf0087b3e13e9b46353257505faf" gracePeriod=2 Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.039640 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-99mdb" Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.093124 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a4be34d-ea0d-4ac2-9971-ac90903032d1-catalog-content\") pod \"4a4be34d-ea0d-4ac2-9971-ac90903032d1\" (UID: \"4a4be34d-ea0d-4ac2-9971-ac90903032d1\") " Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.093177 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a4be34d-ea0d-4ac2-9971-ac90903032d1-utilities\") pod \"4a4be34d-ea0d-4ac2-9971-ac90903032d1\" (UID: \"4a4be34d-ea0d-4ac2-9971-ac90903032d1\") " Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.093218 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgzbj\" (UniqueName: \"kubernetes.io/projected/4a4be34d-ea0d-4ac2-9971-ac90903032d1-kube-api-access-cgzbj\") pod \"4a4be34d-ea0d-4ac2-9971-ac90903032d1\" (UID: \"4a4be34d-ea0d-4ac2-9971-ac90903032d1\") " Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.102647 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a4be34d-ea0d-4ac2-9971-ac90903032d1-utilities" (OuterVolumeSpecName: "utilities") pod "4a4be34d-ea0d-4ac2-9971-ac90903032d1" (UID: "4a4be34d-ea0d-4ac2-9971-ac90903032d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.108334 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a4be34d-ea0d-4ac2-9971-ac90903032d1-kube-api-access-cgzbj" (OuterVolumeSpecName: "kube-api-access-cgzbj") pod "4a4be34d-ea0d-4ac2-9971-ac90903032d1" (UID: "4a4be34d-ea0d-4ac2-9971-ac90903032d1"). InnerVolumeSpecName "kube-api-access-cgzbj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.175822 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a4be34d-ea0d-4ac2-9971-ac90903032d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a4be34d-ea0d-4ac2-9971-ac90903032d1" (UID: "4a4be34d-ea0d-4ac2-9971-ac90903032d1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.196503 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a4be34d-ea0d-4ac2-9971-ac90903032d1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.196542 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a4be34d-ea0d-4ac2-9971-ac90903032d1-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.196552 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgzbj\" (UniqueName: \"kubernetes.io/projected/4a4be34d-ea0d-4ac2-9971-ac90903032d1-kube-api-access-cgzbj\") on node \"crc\" DevicePath \"\"" Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.534298 5043 generic.go:334] "Generic (PLEG): container finished" podID="4a4be34d-ea0d-4ac2-9971-ac90903032d1" containerID="8c6c0900be92fd1e2ebb652829a13393f5cfbf0087b3e13e9b46353257505faf" exitCode=0 Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.534382 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-99mdb" Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.534404 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99mdb" event={"ID":"4a4be34d-ea0d-4ac2-9971-ac90903032d1","Type":"ContainerDied","Data":"8c6c0900be92fd1e2ebb652829a13393f5cfbf0087b3e13e9b46353257505faf"} Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.534767 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-99mdb" event={"ID":"4a4be34d-ea0d-4ac2-9971-ac90903032d1","Type":"ContainerDied","Data":"c690ce5603d49f8c844780ed8eef917b08a2249c61456e77ddd07700c9f74a23"} Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.534796 5043 scope.go:117] "RemoveContainer" containerID="8c6c0900be92fd1e2ebb652829a13393f5cfbf0087b3e13e9b46353257505faf" Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.556179 5043 scope.go:117] "RemoveContainer" containerID="8ee3fa5bff35e450bac3038e5e55879d5a9ec6c4b94ab592c862693fb317fbf7" Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.580298 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-99mdb"] Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.585990 5043 scope.go:117] "RemoveContainer" containerID="c1b7d8c6b8586e6c03b85a11d8183b0c5d41461ee011311848e31e4d380a5d48" Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.595222 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-99mdb"] Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.630704 5043 scope.go:117] "RemoveContainer" containerID="8c6c0900be92fd1e2ebb652829a13393f5cfbf0087b3e13e9b46353257505faf" Nov 25 09:45:20 crc kubenswrapper[5043]: E1125 09:45:20.631186 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8c6c0900be92fd1e2ebb652829a13393f5cfbf0087b3e13e9b46353257505faf\": container with ID starting with 8c6c0900be92fd1e2ebb652829a13393f5cfbf0087b3e13e9b46353257505faf not found: ID does not exist" containerID="8c6c0900be92fd1e2ebb652829a13393f5cfbf0087b3e13e9b46353257505faf" Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.631225 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c6c0900be92fd1e2ebb652829a13393f5cfbf0087b3e13e9b46353257505faf"} err="failed to get container status \"8c6c0900be92fd1e2ebb652829a13393f5cfbf0087b3e13e9b46353257505faf\": rpc error: code = NotFound desc = could not find container \"8c6c0900be92fd1e2ebb652829a13393f5cfbf0087b3e13e9b46353257505faf\": container with ID starting with 8c6c0900be92fd1e2ebb652829a13393f5cfbf0087b3e13e9b46353257505faf not found: ID does not exist" Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.631251 5043 scope.go:117] "RemoveContainer" containerID="8ee3fa5bff35e450bac3038e5e55879d5a9ec6c4b94ab592c862693fb317fbf7" Nov 25 09:45:20 crc kubenswrapper[5043]: E1125 09:45:20.631588 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ee3fa5bff35e450bac3038e5e55879d5a9ec6c4b94ab592c862693fb317fbf7\": container with ID starting with 8ee3fa5bff35e450bac3038e5e55879d5a9ec6c4b94ab592c862693fb317fbf7 not found: ID does not exist" containerID="8ee3fa5bff35e450bac3038e5e55879d5a9ec6c4b94ab592c862693fb317fbf7" Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.631622 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ee3fa5bff35e450bac3038e5e55879d5a9ec6c4b94ab592c862693fb317fbf7"} err="failed to get container status \"8ee3fa5bff35e450bac3038e5e55879d5a9ec6c4b94ab592c862693fb317fbf7\": rpc error: code = NotFound desc = could not find container \"8ee3fa5bff35e450bac3038e5e55879d5a9ec6c4b94ab592c862693fb317fbf7\": container with ID 
starting with 8ee3fa5bff35e450bac3038e5e55879d5a9ec6c4b94ab592c862693fb317fbf7 not found: ID does not exist" Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.631635 5043 scope.go:117] "RemoveContainer" containerID="c1b7d8c6b8586e6c03b85a11d8183b0c5d41461ee011311848e31e4d380a5d48" Nov 25 09:45:20 crc kubenswrapper[5043]: E1125 09:45:20.631841 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1b7d8c6b8586e6c03b85a11d8183b0c5d41461ee011311848e31e4d380a5d48\": container with ID starting with c1b7d8c6b8586e6c03b85a11d8183b0c5d41461ee011311848e31e4d380a5d48 not found: ID does not exist" containerID="c1b7d8c6b8586e6c03b85a11d8183b0c5d41461ee011311848e31e4d380a5d48" Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.631873 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1b7d8c6b8586e6c03b85a11d8183b0c5d41461ee011311848e31e4d380a5d48"} err="failed to get container status \"c1b7d8c6b8586e6c03b85a11d8183b0c5d41461ee011311848e31e4d380a5d48\": rpc error: code = NotFound desc = could not find container \"c1b7d8c6b8586e6c03b85a11d8183b0c5d41461ee011311848e31e4d380a5d48\": container with ID starting with c1b7d8c6b8586e6c03b85a11d8183b0c5d41461ee011311848e31e4d380a5d48 not found: ID does not exist" Nov 25 09:45:20 crc kubenswrapper[5043]: I1125 09:45:20.974153 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a4be34d-ea0d-4ac2-9971-ac90903032d1" path="/var/lib/kubelet/pods/4a4be34d-ea0d-4ac2-9971-ac90903032d1/volumes" Nov 25 09:45:51 crc kubenswrapper[5043]: I1125 09:45:51.352220 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v2z9h/must-gather-dztv7"] Nov 25 09:45:51 crc kubenswrapper[5043]: E1125 09:45:51.353123 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a4be34d-ea0d-4ac2-9971-ac90903032d1" containerName="extract-content" Nov 25 09:45:51 crc 
kubenswrapper[5043]: I1125 09:45:51.353136 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a4be34d-ea0d-4ac2-9971-ac90903032d1" containerName="extract-content" Nov 25 09:45:51 crc kubenswrapper[5043]: E1125 09:45:51.353157 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a4be34d-ea0d-4ac2-9971-ac90903032d1" containerName="registry-server" Nov 25 09:45:51 crc kubenswrapper[5043]: I1125 09:45:51.353163 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a4be34d-ea0d-4ac2-9971-ac90903032d1" containerName="registry-server" Nov 25 09:45:51 crc kubenswrapper[5043]: E1125 09:45:51.353175 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92edf065-65e9-46da-9cc2-ac7a63074725" containerName="registry-server" Nov 25 09:45:51 crc kubenswrapper[5043]: I1125 09:45:51.353181 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="92edf065-65e9-46da-9cc2-ac7a63074725" containerName="registry-server" Nov 25 09:45:51 crc kubenswrapper[5043]: E1125 09:45:51.353206 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92edf065-65e9-46da-9cc2-ac7a63074725" containerName="extract-content" Nov 25 09:45:51 crc kubenswrapper[5043]: I1125 09:45:51.353212 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="92edf065-65e9-46da-9cc2-ac7a63074725" containerName="extract-content" Nov 25 09:45:51 crc kubenswrapper[5043]: E1125 09:45:51.353220 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7107af-afc3-4c0b-bb84-f71fa8cf7751" containerName="collect-profiles" Nov 25 09:45:51 crc kubenswrapper[5043]: I1125 09:45:51.353225 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7107af-afc3-4c0b-bb84-f71fa8cf7751" containerName="collect-profiles" Nov 25 09:45:51 crc kubenswrapper[5043]: E1125 09:45:51.353250 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92edf065-65e9-46da-9cc2-ac7a63074725" containerName="extract-utilities" Nov 25 09:45:51 crc 
kubenswrapper[5043]: I1125 09:45:51.353256 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="92edf065-65e9-46da-9cc2-ac7a63074725" containerName="extract-utilities" Nov 25 09:45:51 crc kubenswrapper[5043]: E1125 09:45:51.353264 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a4be34d-ea0d-4ac2-9971-ac90903032d1" containerName="extract-utilities" Nov 25 09:45:51 crc kubenswrapper[5043]: I1125 09:45:51.353270 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a4be34d-ea0d-4ac2-9971-ac90903032d1" containerName="extract-utilities" Nov 25 09:45:51 crc kubenswrapper[5043]: I1125 09:45:51.353447 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="92edf065-65e9-46da-9cc2-ac7a63074725" containerName="registry-server" Nov 25 09:45:51 crc kubenswrapper[5043]: I1125 09:45:51.353466 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb7107af-afc3-4c0b-bb84-f71fa8cf7751" containerName="collect-profiles" Nov 25 09:45:51 crc kubenswrapper[5043]: I1125 09:45:51.353479 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a4be34d-ea0d-4ac2-9971-ac90903032d1" containerName="registry-server" Nov 25 09:45:51 crc kubenswrapper[5043]: I1125 09:45:51.354465 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v2z9h/must-gather-dztv7" Nov 25 09:45:51 crc kubenswrapper[5043]: I1125 09:45:51.357623 5043 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-v2z9h"/"default-dockercfg-cpxn6" Nov 25 09:45:51 crc kubenswrapper[5043]: I1125 09:45:51.358205 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v2z9h"/"kube-root-ca.crt" Nov 25 09:45:51 crc kubenswrapper[5043]: I1125 09:45:51.358374 5043 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v2z9h"/"openshift-service-ca.crt" Nov 25 09:45:51 crc kubenswrapper[5043]: I1125 09:45:51.362083 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v2z9h/must-gather-dztv7"] Nov 25 09:45:51 crc kubenswrapper[5043]: I1125 09:45:51.428278 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg7lj\" (UniqueName: \"kubernetes.io/projected/e6d8ca35-68e8-408a-8afc-41261daaab5d-kube-api-access-cg7lj\") pod \"must-gather-dztv7\" (UID: \"e6d8ca35-68e8-408a-8afc-41261daaab5d\") " pod="openshift-must-gather-v2z9h/must-gather-dztv7" Nov 25 09:45:51 crc kubenswrapper[5043]: I1125 09:45:51.428341 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e6d8ca35-68e8-408a-8afc-41261daaab5d-must-gather-output\") pod \"must-gather-dztv7\" (UID: \"e6d8ca35-68e8-408a-8afc-41261daaab5d\") " pod="openshift-must-gather-v2z9h/must-gather-dztv7" Nov 25 09:45:51 crc kubenswrapper[5043]: I1125 09:45:51.529797 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg7lj\" (UniqueName: \"kubernetes.io/projected/e6d8ca35-68e8-408a-8afc-41261daaab5d-kube-api-access-cg7lj\") pod \"must-gather-dztv7\" (UID: \"e6d8ca35-68e8-408a-8afc-41261daaab5d\") " 
pod="openshift-must-gather-v2z9h/must-gather-dztv7" Nov 25 09:45:51 crc kubenswrapper[5043]: I1125 09:45:51.529863 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e6d8ca35-68e8-408a-8afc-41261daaab5d-must-gather-output\") pod \"must-gather-dztv7\" (UID: \"e6d8ca35-68e8-408a-8afc-41261daaab5d\") " pod="openshift-must-gather-v2z9h/must-gather-dztv7" Nov 25 09:45:51 crc kubenswrapper[5043]: I1125 09:45:51.530427 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e6d8ca35-68e8-408a-8afc-41261daaab5d-must-gather-output\") pod \"must-gather-dztv7\" (UID: \"e6d8ca35-68e8-408a-8afc-41261daaab5d\") " pod="openshift-must-gather-v2z9h/must-gather-dztv7" Nov 25 09:45:51 crc kubenswrapper[5043]: I1125 09:45:51.578330 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg7lj\" (UniqueName: \"kubernetes.io/projected/e6d8ca35-68e8-408a-8afc-41261daaab5d-kube-api-access-cg7lj\") pod \"must-gather-dztv7\" (UID: \"e6d8ca35-68e8-408a-8afc-41261daaab5d\") " pod="openshift-must-gather-v2z9h/must-gather-dztv7" Nov 25 09:45:51 crc kubenswrapper[5043]: I1125 09:45:51.671289 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v2z9h/must-gather-dztv7" Nov 25 09:45:52 crc kubenswrapper[5043]: I1125 09:45:52.157863 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v2z9h/must-gather-dztv7"] Nov 25 09:45:52 crc kubenswrapper[5043]: I1125 09:45:52.857944 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v2z9h/must-gather-dztv7" event={"ID":"e6d8ca35-68e8-408a-8afc-41261daaab5d","Type":"ContainerStarted","Data":"42ea2621f4bc02ddbe8eaf4c25d559e8b7a1f37ebb6ac5234f7346fa986956f6"} Nov 25 09:45:52 crc kubenswrapper[5043]: I1125 09:45:52.858274 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v2z9h/must-gather-dztv7" event={"ID":"e6d8ca35-68e8-408a-8afc-41261daaab5d","Type":"ContainerStarted","Data":"337fc63fbdb65f0716d39efe59a69b71c0173cd2fb633c46e28021f0d2bee904"} Nov 25 09:45:52 crc kubenswrapper[5043]: I1125 09:45:52.858286 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v2z9h/must-gather-dztv7" event={"ID":"e6d8ca35-68e8-408a-8afc-41261daaab5d","Type":"ContainerStarted","Data":"3077581c44c4f93e1182ab20151a9bd9fbd1f605690586e5bb019608a267536f"} Nov 25 09:45:52 crc kubenswrapper[5043]: I1125 09:45:52.886737 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-v2z9h/must-gather-dztv7" podStartSLOduration=1.8867174979999999 podStartE2EDuration="1.886717498s" podCreationTimestamp="2025-11-25 09:45:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 09:45:52.87900084 +0000 UTC m=+9017.047196561" watchObservedRunningTime="2025-11-25 09:45:52.886717498 +0000 UTC m=+9017.054913219" Nov 25 09:45:55 crc kubenswrapper[5043]: E1125 09:45:55.615864 5043 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.162:40162->38.102.83.162:35317: write tcp 
38.102.83.162:40162->38.102.83.162:35317: write: broken pipe Nov 25 09:45:56 crc kubenswrapper[5043]: I1125 09:45:56.458467 5043 scope.go:117] "RemoveContainer" containerID="26524a591e6928960443e301eab43d6a0419102493d028a0a3bbe91cce5cbc3a" Nov 25 09:45:56 crc kubenswrapper[5043]: I1125 09:45:56.746032 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v2z9h/crc-debug-tdb6g"] Nov 25 09:45:56 crc kubenswrapper[5043]: I1125 09:45:56.747338 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v2z9h/crc-debug-tdb6g" Nov 25 09:45:56 crc kubenswrapper[5043]: I1125 09:45:56.874315 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rznvc\" (UniqueName: \"kubernetes.io/projected/507ca654-8ee6-4127-9168-6f3952ccdf3d-kube-api-access-rznvc\") pod \"crc-debug-tdb6g\" (UID: \"507ca654-8ee6-4127-9168-6f3952ccdf3d\") " pod="openshift-must-gather-v2z9h/crc-debug-tdb6g" Nov 25 09:45:56 crc kubenswrapper[5043]: I1125 09:45:56.874700 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/507ca654-8ee6-4127-9168-6f3952ccdf3d-host\") pod \"crc-debug-tdb6g\" (UID: \"507ca654-8ee6-4127-9168-6f3952ccdf3d\") " pod="openshift-must-gather-v2z9h/crc-debug-tdb6g" Nov 25 09:45:56 crc kubenswrapper[5043]: I1125 09:45:56.976016 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rznvc\" (UniqueName: \"kubernetes.io/projected/507ca654-8ee6-4127-9168-6f3952ccdf3d-kube-api-access-rznvc\") pod \"crc-debug-tdb6g\" (UID: \"507ca654-8ee6-4127-9168-6f3952ccdf3d\") " pod="openshift-must-gather-v2z9h/crc-debug-tdb6g" Nov 25 09:45:56 crc kubenswrapper[5043]: I1125 09:45:56.976071 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/507ca654-8ee6-4127-9168-6f3952ccdf3d-host\") pod \"crc-debug-tdb6g\" (UID: \"507ca654-8ee6-4127-9168-6f3952ccdf3d\") " pod="openshift-must-gather-v2z9h/crc-debug-tdb6g" Nov 25 09:45:56 crc kubenswrapper[5043]: I1125 09:45:56.976266 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/507ca654-8ee6-4127-9168-6f3952ccdf3d-host\") pod \"crc-debug-tdb6g\" (UID: \"507ca654-8ee6-4127-9168-6f3952ccdf3d\") " pod="openshift-must-gather-v2z9h/crc-debug-tdb6g" Nov 25 09:45:56 crc kubenswrapper[5043]: I1125 09:45:56.999482 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rznvc\" (UniqueName: \"kubernetes.io/projected/507ca654-8ee6-4127-9168-6f3952ccdf3d-kube-api-access-rznvc\") pod \"crc-debug-tdb6g\" (UID: \"507ca654-8ee6-4127-9168-6f3952ccdf3d\") " pod="openshift-must-gather-v2z9h/crc-debug-tdb6g" Nov 25 09:45:57 crc kubenswrapper[5043]: I1125 09:45:57.062803 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v2z9h/crc-debug-tdb6g" Nov 25 09:45:57 crc kubenswrapper[5043]: W1125 09:45:57.112176 5043 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod507ca654_8ee6_4127_9168_6f3952ccdf3d.slice/crio-8ab270d5176b3889ac5db5f52af94eb067f9c332c0f13559cbf6b33204e1e880 WatchSource:0}: Error finding container 8ab270d5176b3889ac5db5f52af94eb067f9c332c0f13559cbf6b33204e1e880: Status 404 returned error can't find the container with id 8ab270d5176b3889ac5db5f52af94eb067f9c332c0f13559cbf6b33204e1e880 Nov 25 09:45:57 crc kubenswrapper[5043]: I1125 09:45:57.901378 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v2z9h/crc-debug-tdb6g" event={"ID":"507ca654-8ee6-4127-9168-6f3952ccdf3d","Type":"ContainerStarted","Data":"f1d9371a5333c94901d454f5396dac57d7c37b081e3a39efb43084cdf355e986"} Nov 25 09:45:57 crc kubenswrapper[5043]: I1125 09:45:57.901910 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v2z9h/crc-debug-tdb6g" event={"ID":"507ca654-8ee6-4127-9168-6f3952ccdf3d","Type":"ContainerStarted","Data":"8ab270d5176b3889ac5db5f52af94eb067f9c332c0f13559cbf6b33204e1e880"} Nov 25 09:45:57 crc kubenswrapper[5043]: I1125 09:45:57.921416 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-v2z9h/crc-debug-tdb6g" podStartSLOduration=1.9213986969999999 podStartE2EDuration="1.921398697s" podCreationTimestamp="2025-11-25 09:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 09:45:57.914962103 +0000 UTC m=+9022.083157824" watchObservedRunningTime="2025-11-25 09:45:57.921398697 +0000 UTC m=+9022.089594418" Nov 25 09:46:09 crc kubenswrapper[5043]: E1125 09:46:09.962979 5043 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" 
podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 09:46:40 crc kubenswrapper[5043]: I1125 09:46:40.279296 5043 generic.go:334] "Generic (PLEG): container finished" podID="507ca654-8ee6-4127-9168-6f3952ccdf3d" containerID="f1d9371a5333c94901d454f5396dac57d7c37b081e3a39efb43084cdf355e986" exitCode=0 Nov 25 09:46:40 crc kubenswrapper[5043]: I1125 09:46:40.279866 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v2z9h/crc-debug-tdb6g" event={"ID":"507ca654-8ee6-4127-9168-6f3952ccdf3d","Type":"ContainerDied","Data":"f1d9371a5333c94901d454f5396dac57d7c37b081e3a39efb43084cdf355e986"} Nov 25 09:46:41 crc kubenswrapper[5043]: I1125 09:46:41.416150 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v2z9h/crc-debug-tdb6g" Nov 25 09:46:41 crc kubenswrapper[5043]: I1125 09:46:41.461021 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v2z9h/crc-debug-tdb6g"] Nov 25 09:46:41 crc kubenswrapper[5043]: I1125 09:46:41.472466 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v2z9h/crc-debug-tdb6g"] Nov 25 09:46:41 crc kubenswrapper[5043]: I1125 09:46:41.474361 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/507ca654-8ee6-4127-9168-6f3952ccdf3d-host\") pod \"507ca654-8ee6-4127-9168-6f3952ccdf3d\" (UID: \"507ca654-8ee6-4127-9168-6f3952ccdf3d\") " Nov 25 09:46:41 crc kubenswrapper[5043]: I1125 09:46:41.474427 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rznvc\" (UniqueName: \"kubernetes.io/projected/507ca654-8ee6-4127-9168-6f3952ccdf3d-kube-api-access-rznvc\") pod \"507ca654-8ee6-4127-9168-6f3952ccdf3d\" (UID: \"507ca654-8ee6-4127-9168-6f3952ccdf3d\") " Nov 25 
09:46:41 crc kubenswrapper[5043]: I1125 09:46:41.474438 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/507ca654-8ee6-4127-9168-6f3952ccdf3d-host" (OuterVolumeSpecName: "host") pod "507ca654-8ee6-4127-9168-6f3952ccdf3d" (UID: "507ca654-8ee6-4127-9168-6f3952ccdf3d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 09:46:41 crc kubenswrapper[5043]: I1125 09:46:41.474956 5043 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/507ca654-8ee6-4127-9168-6f3952ccdf3d-host\") on node \"crc\" DevicePath \"\"" Nov 25 09:46:41 crc kubenswrapper[5043]: I1125 09:46:41.481365 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/507ca654-8ee6-4127-9168-6f3952ccdf3d-kube-api-access-rznvc" (OuterVolumeSpecName: "kube-api-access-rznvc") pod "507ca654-8ee6-4127-9168-6f3952ccdf3d" (UID: "507ca654-8ee6-4127-9168-6f3952ccdf3d"). InnerVolumeSpecName "kube-api-access-rznvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:46:41 crc kubenswrapper[5043]: I1125 09:46:41.576920 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rznvc\" (UniqueName: \"kubernetes.io/projected/507ca654-8ee6-4127-9168-6f3952ccdf3d-kube-api-access-rznvc\") on node \"crc\" DevicePath \"\"" Nov 25 09:46:42 crc kubenswrapper[5043]: I1125 09:46:42.303168 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ab270d5176b3889ac5db5f52af94eb067f9c332c0f13559cbf6b33204e1e880" Nov 25 09:46:42 crc kubenswrapper[5043]: I1125 09:46:42.303252 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v2z9h/crc-debug-tdb6g" Nov 25 09:46:42 crc kubenswrapper[5043]: I1125 09:46:42.670209 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v2z9h/crc-debug-dgzvh"] Nov 25 09:46:42 crc kubenswrapper[5043]: E1125 09:46:42.670780 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="507ca654-8ee6-4127-9168-6f3952ccdf3d" containerName="container-00" Nov 25 09:46:42 crc kubenswrapper[5043]: I1125 09:46:42.670800 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="507ca654-8ee6-4127-9168-6f3952ccdf3d" containerName="container-00" Nov 25 09:46:42 crc kubenswrapper[5043]: I1125 09:46:42.671097 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="507ca654-8ee6-4127-9168-6f3952ccdf3d" containerName="container-00" Nov 25 09:46:42 crc kubenswrapper[5043]: I1125 09:46:42.671936 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v2z9h/crc-debug-dgzvh" Nov 25 09:46:42 crc kubenswrapper[5043]: I1125 09:46:42.702675 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f5e1387e-b53d-4b02-aa12-1cac81e40695-host\") pod \"crc-debug-dgzvh\" (UID: \"f5e1387e-b53d-4b02-aa12-1cac81e40695\") " pod="openshift-must-gather-v2z9h/crc-debug-dgzvh" Nov 25 09:46:42 crc kubenswrapper[5043]: I1125 09:46:42.702876 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm8qd\" (UniqueName: \"kubernetes.io/projected/f5e1387e-b53d-4b02-aa12-1cac81e40695-kube-api-access-tm8qd\") pod \"crc-debug-dgzvh\" (UID: \"f5e1387e-b53d-4b02-aa12-1cac81e40695\") " pod="openshift-must-gather-v2z9h/crc-debug-dgzvh" Nov 25 09:46:42 crc kubenswrapper[5043]: I1125 09:46:42.805300 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/f5e1387e-b53d-4b02-aa12-1cac81e40695-host\") pod \"crc-debug-dgzvh\" (UID: \"f5e1387e-b53d-4b02-aa12-1cac81e40695\") " pod="openshift-must-gather-v2z9h/crc-debug-dgzvh" Nov 25 09:46:42 crc kubenswrapper[5043]: I1125 09:46:42.805368 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm8qd\" (UniqueName: \"kubernetes.io/projected/f5e1387e-b53d-4b02-aa12-1cac81e40695-kube-api-access-tm8qd\") pod \"crc-debug-dgzvh\" (UID: \"f5e1387e-b53d-4b02-aa12-1cac81e40695\") " pod="openshift-must-gather-v2z9h/crc-debug-dgzvh" Nov 25 09:46:42 crc kubenswrapper[5043]: I1125 09:46:42.805510 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f5e1387e-b53d-4b02-aa12-1cac81e40695-host\") pod \"crc-debug-dgzvh\" (UID: \"f5e1387e-b53d-4b02-aa12-1cac81e40695\") " pod="openshift-must-gather-v2z9h/crc-debug-dgzvh" Nov 25 09:46:42 crc kubenswrapper[5043]: I1125 09:46:42.828633 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm8qd\" (UniqueName: \"kubernetes.io/projected/f5e1387e-b53d-4b02-aa12-1cac81e40695-kube-api-access-tm8qd\") pod \"crc-debug-dgzvh\" (UID: \"f5e1387e-b53d-4b02-aa12-1cac81e40695\") " pod="openshift-must-gather-v2z9h/crc-debug-dgzvh" Nov 25 09:46:42 crc kubenswrapper[5043]: I1125 09:46:42.975694 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="507ca654-8ee6-4127-9168-6f3952ccdf3d" path="/var/lib/kubelet/pods/507ca654-8ee6-4127-9168-6f3952ccdf3d/volumes" Nov 25 09:46:42 crc kubenswrapper[5043]: I1125 09:46:42.995066 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v2z9h/crc-debug-dgzvh" Nov 25 09:46:43 crc kubenswrapper[5043]: I1125 09:46:43.314965 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v2z9h/crc-debug-dgzvh" event={"ID":"f5e1387e-b53d-4b02-aa12-1cac81e40695","Type":"ContainerStarted","Data":"725a0e5a1e2991e4ad8c5698e97074345a56d96828054e15a901f0803ca52a1a"} Nov 25 09:46:43 crc kubenswrapper[5043]: I1125 09:46:43.315359 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v2z9h/crc-debug-dgzvh" event={"ID":"f5e1387e-b53d-4b02-aa12-1cac81e40695","Type":"ContainerStarted","Data":"a643cb4cbfd7a00c023b26eea8f5e67487339fe516d09b4ca764891592ca37bd"} Nov 25 09:46:44 crc kubenswrapper[5043]: I1125 09:46:44.327597 5043 generic.go:334] "Generic (PLEG): container finished" podID="f5e1387e-b53d-4b02-aa12-1cac81e40695" containerID="725a0e5a1e2991e4ad8c5698e97074345a56d96828054e15a901f0803ca52a1a" exitCode=0 Nov 25 09:46:44 crc kubenswrapper[5043]: I1125 09:46:44.327675 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v2z9h/crc-debug-dgzvh" event={"ID":"f5e1387e-b53d-4b02-aa12-1cac81e40695","Type":"ContainerDied","Data":"725a0e5a1e2991e4ad8c5698e97074345a56d96828054e15a901f0803ca52a1a"} Nov 25 09:46:45 crc kubenswrapper[5043]: I1125 09:46:45.458470 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v2z9h/crc-debug-dgzvh" Nov 25 09:46:45 crc kubenswrapper[5043]: I1125 09:46:45.485417 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm8qd\" (UniqueName: \"kubernetes.io/projected/f5e1387e-b53d-4b02-aa12-1cac81e40695-kube-api-access-tm8qd\") pod \"f5e1387e-b53d-4b02-aa12-1cac81e40695\" (UID: \"f5e1387e-b53d-4b02-aa12-1cac81e40695\") " Nov 25 09:46:45 crc kubenswrapper[5043]: I1125 09:46:45.485768 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f5e1387e-b53d-4b02-aa12-1cac81e40695-host\") pod \"f5e1387e-b53d-4b02-aa12-1cac81e40695\" (UID: \"f5e1387e-b53d-4b02-aa12-1cac81e40695\") " Nov 25 09:46:45 crc kubenswrapper[5043]: I1125 09:46:45.486404 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5e1387e-b53d-4b02-aa12-1cac81e40695-host" (OuterVolumeSpecName: "host") pod "f5e1387e-b53d-4b02-aa12-1cac81e40695" (UID: "f5e1387e-b53d-4b02-aa12-1cac81e40695"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 09:46:45 crc kubenswrapper[5043]: I1125 09:46:45.491112 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5e1387e-b53d-4b02-aa12-1cac81e40695-kube-api-access-tm8qd" (OuterVolumeSpecName: "kube-api-access-tm8qd") pod "f5e1387e-b53d-4b02-aa12-1cac81e40695" (UID: "f5e1387e-b53d-4b02-aa12-1cac81e40695"). InnerVolumeSpecName "kube-api-access-tm8qd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:46:45 crc kubenswrapper[5043]: I1125 09:46:45.587083 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm8qd\" (UniqueName: \"kubernetes.io/projected/f5e1387e-b53d-4b02-aa12-1cac81e40695-kube-api-access-tm8qd\") on node \"crc\" DevicePath \"\"" Nov 25 09:46:45 crc kubenswrapper[5043]: I1125 09:46:45.587113 5043 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f5e1387e-b53d-4b02-aa12-1cac81e40695-host\") on node \"crc\" DevicePath \"\"" Nov 25 09:46:46 crc kubenswrapper[5043]: I1125 09:46:46.347970 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v2z9h/crc-debug-dgzvh" event={"ID":"f5e1387e-b53d-4b02-aa12-1cac81e40695","Type":"ContainerDied","Data":"a643cb4cbfd7a00c023b26eea8f5e67487339fe516d09b4ca764891592ca37bd"} Nov 25 09:46:46 crc kubenswrapper[5043]: I1125 09:46:46.348408 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a643cb4cbfd7a00c023b26eea8f5e67487339fe516d09b4ca764891592ca37bd" Nov 25 09:46:46 crc kubenswrapper[5043]: I1125 09:46:46.348051 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v2z9h/crc-debug-dgzvh" Nov 25 09:46:46 crc kubenswrapper[5043]: I1125 09:46:46.860192 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v2z9h/crc-debug-dgzvh"] Nov 25 09:46:46 crc kubenswrapper[5043]: I1125 09:46:46.874371 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v2z9h/crc-debug-dgzvh"] Nov 25 09:46:46 crc kubenswrapper[5043]: I1125 09:46:46.977367 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5e1387e-b53d-4b02-aa12-1cac81e40695" path="/var/lib/kubelet/pods/f5e1387e-b53d-4b02-aa12-1cac81e40695/volumes" Nov 25 09:46:48 crc kubenswrapper[5043]: I1125 09:46:48.062890 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v2z9h/crc-debug-4jkcb"] Nov 25 09:46:48 crc kubenswrapper[5043]: E1125 09:46:48.063410 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5e1387e-b53d-4b02-aa12-1cac81e40695" containerName="container-00" Nov 25 09:46:48 crc kubenswrapper[5043]: I1125 09:46:48.063428 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5e1387e-b53d-4b02-aa12-1cac81e40695" containerName="container-00" Nov 25 09:46:48 crc kubenswrapper[5043]: I1125 09:46:48.063688 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5e1387e-b53d-4b02-aa12-1cac81e40695" containerName="container-00" Nov 25 09:46:48 crc kubenswrapper[5043]: I1125 09:46:48.064464 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v2z9h/crc-debug-4jkcb" Nov 25 09:46:48 crc kubenswrapper[5043]: I1125 09:46:48.141843 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a9b1edf2-d515-45d0-8c61-e7ee07fcde20-host\") pod \"crc-debug-4jkcb\" (UID: \"a9b1edf2-d515-45d0-8c61-e7ee07fcde20\") " pod="openshift-must-gather-v2z9h/crc-debug-4jkcb" Nov 25 09:46:48 crc kubenswrapper[5043]: I1125 09:46:48.141902 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mlqr\" (UniqueName: \"kubernetes.io/projected/a9b1edf2-d515-45d0-8c61-e7ee07fcde20-kube-api-access-2mlqr\") pod \"crc-debug-4jkcb\" (UID: \"a9b1edf2-d515-45d0-8c61-e7ee07fcde20\") " pod="openshift-must-gather-v2z9h/crc-debug-4jkcb" Nov 25 09:46:48 crc kubenswrapper[5043]: I1125 09:46:48.244044 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a9b1edf2-d515-45d0-8c61-e7ee07fcde20-host\") pod \"crc-debug-4jkcb\" (UID: \"a9b1edf2-d515-45d0-8c61-e7ee07fcde20\") " pod="openshift-must-gather-v2z9h/crc-debug-4jkcb" Nov 25 09:46:48 crc kubenswrapper[5043]: I1125 09:46:48.244098 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mlqr\" (UniqueName: \"kubernetes.io/projected/a9b1edf2-d515-45d0-8c61-e7ee07fcde20-kube-api-access-2mlqr\") pod \"crc-debug-4jkcb\" (UID: \"a9b1edf2-d515-45d0-8c61-e7ee07fcde20\") " pod="openshift-must-gather-v2z9h/crc-debug-4jkcb" Nov 25 09:46:48 crc kubenswrapper[5043]: I1125 09:46:48.244219 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a9b1edf2-d515-45d0-8c61-e7ee07fcde20-host\") pod \"crc-debug-4jkcb\" (UID: \"a9b1edf2-d515-45d0-8c61-e7ee07fcde20\") " pod="openshift-must-gather-v2z9h/crc-debug-4jkcb" Nov 25 09:46:48 crc 
kubenswrapper[5043]: I1125 09:46:48.262835 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mlqr\" (UniqueName: \"kubernetes.io/projected/a9b1edf2-d515-45d0-8c61-e7ee07fcde20-kube-api-access-2mlqr\") pod \"crc-debug-4jkcb\" (UID: \"a9b1edf2-d515-45d0-8c61-e7ee07fcde20\") " pod="openshift-must-gather-v2z9h/crc-debug-4jkcb" Nov 25 09:46:48 crc kubenswrapper[5043]: I1125 09:46:48.382746 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v2z9h/crc-debug-4jkcb" Nov 25 09:46:49 crc kubenswrapper[5043]: I1125 09:46:49.378681 5043 generic.go:334] "Generic (PLEG): container finished" podID="a9b1edf2-d515-45d0-8c61-e7ee07fcde20" containerID="840329d22c049413c3d4eb8ce04ccad0bee51f1157977eb0224efdd700c0293a" exitCode=0 Nov 25 09:46:49 crc kubenswrapper[5043]: I1125 09:46:49.378818 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v2z9h/crc-debug-4jkcb" event={"ID":"a9b1edf2-d515-45d0-8c61-e7ee07fcde20","Type":"ContainerDied","Data":"840329d22c049413c3d4eb8ce04ccad0bee51f1157977eb0224efdd700c0293a"} Nov 25 09:46:49 crc kubenswrapper[5043]: I1125 09:46:49.379135 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v2z9h/crc-debug-4jkcb" event={"ID":"a9b1edf2-d515-45d0-8c61-e7ee07fcde20","Type":"ContainerStarted","Data":"a223efc55cbc9afdc4ca90c8adcf763e3ffa11e1bad4fd2b007041cfd23df473"} Nov 25 09:46:49 crc kubenswrapper[5043]: I1125 09:46:49.425041 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v2z9h/crc-debug-4jkcb"] Nov 25 09:46:49 crc kubenswrapper[5043]: I1125 09:46:49.436091 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v2z9h/crc-debug-4jkcb"] Nov 25 09:46:50 crc kubenswrapper[5043]: I1125 09:46:50.500968 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v2z9h/crc-debug-4jkcb" Nov 25 09:46:50 crc kubenswrapper[5043]: I1125 09:46:50.600705 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mlqr\" (UniqueName: \"kubernetes.io/projected/a9b1edf2-d515-45d0-8c61-e7ee07fcde20-kube-api-access-2mlqr\") pod \"a9b1edf2-d515-45d0-8c61-e7ee07fcde20\" (UID: \"a9b1edf2-d515-45d0-8c61-e7ee07fcde20\") " Nov 25 09:46:50 crc kubenswrapper[5043]: I1125 09:46:50.600820 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a9b1edf2-d515-45d0-8c61-e7ee07fcde20-host\") pod \"a9b1edf2-d515-45d0-8c61-e7ee07fcde20\" (UID: \"a9b1edf2-d515-45d0-8c61-e7ee07fcde20\") " Nov 25 09:46:50 crc kubenswrapper[5043]: I1125 09:46:50.600985 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9b1edf2-d515-45d0-8c61-e7ee07fcde20-host" (OuterVolumeSpecName: "host") pod "a9b1edf2-d515-45d0-8c61-e7ee07fcde20" (UID: "a9b1edf2-d515-45d0-8c61-e7ee07fcde20"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 09:46:50 crc kubenswrapper[5043]: I1125 09:46:50.601478 5043 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a9b1edf2-d515-45d0-8c61-e7ee07fcde20-host\") on node \"crc\" DevicePath \"\"" Nov 25 09:46:50 crc kubenswrapper[5043]: I1125 09:46:50.606557 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b1edf2-d515-45d0-8c61-e7ee07fcde20-kube-api-access-2mlqr" (OuterVolumeSpecName: "kube-api-access-2mlqr") pod "a9b1edf2-d515-45d0-8c61-e7ee07fcde20" (UID: "a9b1edf2-d515-45d0-8c61-e7ee07fcde20"). InnerVolumeSpecName "kube-api-access-2mlqr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:46:50 crc kubenswrapper[5043]: I1125 09:46:50.702358 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mlqr\" (UniqueName: \"kubernetes.io/projected/a9b1edf2-d515-45d0-8c61-e7ee07fcde20-kube-api-access-2mlqr\") on node \"crc\" DevicePath \"\"" Nov 25 09:46:50 crc kubenswrapper[5043]: I1125 09:46:50.976414 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9b1edf2-d515-45d0-8c61-e7ee07fcde20" path="/var/lib/kubelet/pods/a9b1edf2-d515-45d0-8c61-e7ee07fcde20/volumes" Nov 25 09:46:51 crc kubenswrapper[5043]: I1125 09:46:51.399330 5043 scope.go:117] "RemoveContainer" containerID="840329d22c049413c3d4eb8ce04ccad0bee51f1157977eb0224efdd700c0293a" Nov 25 09:46:51 crc kubenswrapper[5043]: I1125 09:46:51.399339 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v2z9h/crc-debug-4jkcb" Nov 25 09:47:11 crc kubenswrapper[5043]: E1125 09:47:11.962943 5043 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 09:47:40 crc kubenswrapper[5043]: I1125 09:47:40.721052 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ansibletest-ansibletest_3a0103c7-8a95-4675-921c-1b9b4f295df8/ansibletest-ansibletest/0.log" Nov 25 09:47:40 crc kubenswrapper[5043]: I1125 09:47:40.843945 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7d74b989db-9zq82_6c6eae10-5480-4b54-8cf5-1fd717d00c0e/barbican-api/0.log" Nov 25 09:47:40 crc kubenswrapper[5043]: I1125 09:47:40.926979 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7d74b989db-9zq82_6c6eae10-5480-4b54-8cf5-1fd717d00c0e/barbican-api-log/0.log" Nov 25 09:47:41 crc kubenswrapper[5043]: 
I1125 09:47:41.185843 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-54895cb446-cqmz8_f63f14b8-9c07-4267-aaa0-ceac1d775c2c/barbican-keystone-listener/0.log" Nov 25 09:47:41 crc kubenswrapper[5043]: I1125 09:47:41.388039 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6b74d9cbc5-zq8tk_e706e93d-1fc1-4969-b8ba-5ff803545131/barbican-worker/0.log" Nov 25 09:47:41 crc kubenswrapper[5043]: I1125 09:47:41.446024 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6b74d9cbc5-zq8tk_e706e93d-1fc1-4969-b8ba-5ff803545131/barbican-worker-log/0.log" Nov 25 09:47:41 crc kubenswrapper[5043]: I1125 09:47:41.672783 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-x9bmq_cd5db81c-0d4f-4c55-9539-203619adfac7/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:47:41 crc kubenswrapper[5043]: I1125 09:47:41.759633 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-54895cb446-cqmz8_f63f14b8-9c07-4267-aaa0-ceac1d775c2c/barbican-keystone-listener-log/0.log" Nov 25 09:47:41 crc kubenswrapper[5043]: I1125 09:47:41.805295 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_33767a71-28b3-4d66-9f8a-4723e69cf860/ceilometer-central-agent/0.log" Nov 25 09:47:41 crc kubenswrapper[5043]: I1125 09:47:41.923304 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_33767a71-28b3-4d66-9f8a-4723e69cf860/ceilometer-notification-agent/0.log" Nov 25 09:47:41 crc kubenswrapper[5043]: I1125 09:47:41.971798 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_33767a71-28b3-4d66-9f8a-4723e69cf860/sg-core/0.log" Nov 25 09:47:41 crc kubenswrapper[5043]: I1125 09:47:41.980558 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_33767a71-28b3-4d66-9f8a-4723e69cf860/proxy-httpd/0.log" Nov 25 09:47:42 crc kubenswrapper[5043]: I1125 09:47:42.154053 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-mzlj8_99201979-af70-4d71-8e55-23a89ab8c5ab/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:47:42 crc kubenswrapper[5043]: I1125 09:47:42.182819 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-xmxnm_c283f059-0d72-42e3-bce6-cfdab8692e63/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:47:42 crc kubenswrapper[5043]: I1125 09:47:42.877307 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9afb63aa-ce0f-4365-a4cb-4fd593537095/cinder-api-log/0.log" Nov 25 09:47:42 crc kubenswrapper[5043]: I1125 09:47:42.988949 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9afb63aa-ce0f-4365-a4cb-4fd593537095/cinder-api/0.log" Nov 25 09:47:43 crc kubenswrapper[5043]: I1125 09:47:43.168601 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_850ff79f-0c56-4cc9-be55-a76979fc1ac8/probe/0.log" Nov 25 09:47:43 crc kubenswrapper[5043]: I1125 09:47:43.244155 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c99eb43d-cf17-44a4-beeb-f5222c978039/cinder-scheduler/0.log" Nov 25 09:47:43 crc kubenswrapper[5043]: I1125 09:47:43.262748 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_850ff79f-0c56-4cc9-be55-a76979fc1ac8/cinder-backup/0.log" Nov 25 09:47:43 crc kubenswrapper[5043]: I1125 09:47:43.533212 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c99eb43d-cf17-44a4-beeb-f5222c978039/probe/0.log" Nov 25 09:47:43 crc kubenswrapper[5043]: I1125 09:47:43.534319 5043 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_c15deb32-5994-4bf0-bd30-1a309d58f82c/cinder-volume/0.log" Nov 25 09:47:43 crc kubenswrapper[5043]: I1125 09:47:43.579475 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_c15deb32-5994-4bf0-bd30-1a309d58f82c/probe/0.log" Nov 25 09:47:43 crc kubenswrapper[5043]: I1125 09:47:43.728722 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-zbsz8_5c2779ae-e706-494e-9b9a-155774a61d31/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:47:43 crc kubenswrapper[5043]: I1125 09:47:43.787577 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-v49d9_067ff64a-f49c-4ca8-8c50-f49e2886a445/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:47:43 crc kubenswrapper[5043]: I1125 09:47:43.933921 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78f48d6b7c-v7lfn_91febcbe-4fc7-4b44-b7e9-d5258e9216b5/init/0.log" Nov 25 09:47:44 crc kubenswrapper[5043]: I1125 09:47:44.615754 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78f48d6b7c-v7lfn_91febcbe-4fc7-4b44-b7e9-d5258e9216b5/init/0.log" Nov 25 09:47:44 crc kubenswrapper[5043]: I1125 09:47:44.826355 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c/glance-httpd/0.log" Nov 25 09:47:44 crc kubenswrapper[5043]: I1125 09:47:44.849917 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b33481aa-e8bf-49c7-94a5-e7e1d9f5e68c/glance-log/0.log" Nov 25 09:47:44 crc kubenswrapper[5043]: I1125 09:47:44.943474 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-78f48d6b7c-v7lfn_91febcbe-4fc7-4b44-b7e9-d5258e9216b5/dnsmasq-dns/0.log" Nov 25 09:47:45 crc kubenswrapper[5043]: I1125 09:47:45.063380 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_de0f9823-3037-49ba-8bbe-7384b6988f53/glance-log/0.log" Nov 25 09:47:45 crc kubenswrapper[5043]: I1125 09:47:45.073551 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_de0f9823-3037-49ba-8bbe-7384b6988f53/glance-httpd/0.log" Nov 25 09:47:45 crc kubenswrapper[5043]: I1125 09:47:45.212755 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5f67c4b5d4-f96jj_13e8a8ee-bfe8-415b-b76f-89d7d7296659/horizon/0.log" Nov 25 09:47:45 crc kubenswrapper[5043]: I1125 09:47:45.311822 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizontest-tests-horizontest_452efbe7-7e6a-4e2a-8a22-1dfa69176628/horizontest-tests-horizontest/0.log" Nov 25 09:47:45 crc kubenswrapper[5043]: I1125 09:47:45.459969 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-vwp8t_fc8c648b-1e0d-4b4b-b2f2-96e64441de99/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:47:45 crc kubenswrapper[5043]: I1125 09:47:45.589260 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-zp24v_218f287e-331e-49cc-8099-2791fb43a2ac/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:47:45 crc kubenswrapper[5043]: I1125 09:47:45.872230 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29400961-6qx6s_e8f9ccd0-dda3-4d6b-8d31-7ecf76c5dfb4/keystone-cron/0.log" Nov 25 09:47:46 crc kubenswrapper[5043]: I1125 09:47:46.101022 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29401021-96f9v_ea3e940a-0a27-41f2-8629-e54ee65e138d/keystone-cron/0.log" Nov 25 09:47:46 crc kubenswrapper[5043]: I1125 09:47:46.234056 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_c937dff6-4203-455c-b07a-ec16e23c746f/kube-state-metrics/3.log" Nov 25 09:47:46 crc kubenswrapper[5043]: I1125 09:47:46.328750 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_c937dff6-4203-455c-b07a-ec16e23c746f/kube-state-metrics/2.log" Nov 25 09:47:46 crc kubenswrapper[5043]: I1125 09:47:46.579479 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-pq9wl_e7416361-3a03-4892-9a17-36934133905d/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:47:46 crc kubenswrapper[5043]: I1125 09:47:46.817902 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_40b1194f-e610-4ee5-a970-281ea03cde81/manila-api-log/0.log" Nov 25 09:47:46 crc kubenswrapper[5043]: I1125 09:47:46.923450 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5f67c4b5d4-f96jj_13e8a8ee-bfe8-415b-b76f-89d7d7296659/horizon-log/0.log" Nov 25 09:47:47 crc kubenswrapper[5043]: I1125 09:47:47.032163 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_40b1194f-e610-4ee5-a970-281ea03cde81/manila-api/0.log" Nov 25 09:47:47 crc kubenswrapper[5043]: I1125 09:47:47.102211 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_adca59a7-f49f-443d-9201-bf7951585f6e/probe/0.log" Nov 25 09:47:47 crc kubenswrapper[5043]: I1125 09:47:47.230363 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_adca59a7-f49f-443d-9201-bf7951585f6e/manila-scheduler/0.log" Nov 25 09:47:47 crc kubenswrapper[5043]: I1125 09:47:47.278739 5043 patch_prober.go:28] interesting 
pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:47:47 crc kubenswrapper[5043]: I1125 09:47:47.278791 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:47:47 crc kubenswrapper[5043]: I1125 09:47:47.344645 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_fd0c6a41-555c-4d25-8550-8cac7501125f/probe/0.log" Nov 25 09:47:47 crc kubenswrapper[5043]: I1125 09:47:47.354801 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_fd0c6a41-555c-4d25-8550-8cac7501125f/manila-share/0.log" Nov 25 09:47:48 crc kubenswrapper[5043]: I1125 09:47:48.078771 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-fj2vz_3071ef74-1c72-4b4c-90e7-fee9dc8332e5/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:47:48 crc kubenswrapper[5043]: I1125 09:47:48.682592 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-586d64c99c-q5jk2_6e087724-2bb8-47c4-9687-cd1e82fb5a1f/neutron-httpd/0.log" Nov 25 09:47:49 crc kubenswrapper[5043]: I1125 09:47:49.164774 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-79d9bc7db7-xzxqf_e8934111-2c35-4f5a-8b87-182b3fe54fdb/keystone-api/0.log" Nov 25 09:47:49 crc kubenswrapper[5043]: I1125 09:47:49.558862 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-586d64c99c-q5jk2_6e087724-2bb8-47c4-9687-cd1e82fb5a1f/neutron-api/0.log" Nov 25 09:47:50 crc kubenswrapper[5043]: I1125 09:47:50.276530 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_af0b6cee-dd8f-48ce-9b2b-bbc163d66f2a/nova-cell1-conductor-conductor/0.log" Nov 25 09:47:50 crc kubenswrapper[5043]: I1125 09:47:50.380085 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_edfa7421-823f-4292-a033-8227024b3a40/nova-cell0-conductor-conductor/0.log" Nov 25 09:47:50 crc kubenswrapper[5043]: I1125 09:47:50.889382 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_13b17d1b-5e8d-4b80-a15c-be8d4458cf6f/nova-cell1-novncproxy-novncproxy/0.log" Nov 25 09:47:51 crc kubenswrapper[5043]: I1125 09:47:51.085290 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xrgch_ae4f7c1e-e419-4f75-aa7f-27b8c1ca7ada/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:47:51 crc kubenswrapper[5043]: I1125 09:47:51.565620 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_990680d0-bb9d-44b9-a67a-2af274498f7c/nova-metadata-log/0.log" Nov 25 09:47:52 crc kubenswrapper[5043]: I1125 09:47:52.735753 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e0bd148f-caab-423f-88d5-45392e63775d/nova-api-log/0.log" Nov 25 09:47:52 crc kubenswrapper[5043]: I1125 09:47:52.826720 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_be59e894-a929-4498-bee2-cf852ca1ae67/nova-scheduler-scheduler/0.log" Nov 25 09:47:53 crc kubenswrapper[5043]: I1125 09:47:53.059813 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a22a0679-f2ea-46b8-88f5-d010717699d1/mysql-bootstrap/0.log" Nov 25 09:47:53 crc 
kubenswrapper[5043]: I1125 09:47:53.272825 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a22a0679-f2ea-46b8-88f5-d010717699d1/mysql-bootstrap/0.log" Nov 25 09:47:53 crc kubenswrapper[5043]: I1125 09:47:53.325034 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_a22a0679-f2ea-46b8-88f5-d010717699d1/galera/0.log" Nov 25 09:47:53 crc kubenswrapper[5043]: I1125 09:47:53.715266 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_961b9ca6-9248-485e-9361-1e9bc78e9058/mysql-bootstrap/0.log" Nov 25 09:47:53 crc kubenswrapper[5043]: I1125 09:47:53.941736 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_961b9ca6-9248-485e-9361-1e9bc78e9058/galera/0.log" Nov 25 09:47:53 crc kubenswrapper[5043]: I1125 09:47:53.959751 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_961b9ca6-9248-485e-9361-1e9bc78e9058/mysql-bootstrap/0.log" Nov 25 09:47:54 crc kubenswrapper[5043]: I1125 09:47:54.145869 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_9bcf9848-5bdf-4760-829f-a92e4015ab70/openstackclient/0.log" Nov 25 09:47:54 crc kubenswrapper[5043]: I1125 09:47:54.192448 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e0bd148f-caab-423f-88d5-45392e63775d/nova-api-api/0.log" Nov 25 09:47:54 crc kubenswrapper[5043]: I1125 09:47:54.394260 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-p45r4_b2e5aec9-7403-47d2-ad0d-40765246ed38/openstack-network-exporter/0.log" Nov 25 09:47:54 crc kubenswrapper[5043]: I1125 09:47:54.635024 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-s57wr_8cb5a8c6-ad9b-4b36-8766-e67dd27797f7/ovsdb-server-init/0.log" Nov 25 09:47:54 crc kubenswrapper[5043]: I1125 09:47:54.826064 
5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-s57wr_8cb5a8c6-ad9b-4b36-8766-e67dd27797f7/ovs-vswitchd/0.log" Nov 25 09:47:54 crc kubenswrapper[5043]: I1125 09:47:54.839383 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-s57wr_8cb5a8c6-ad9b-4b36-8766-e67dd27797f7/ovsdb-server-init/0.log" Nov 25 09:47:54 crc kubenswrapper[5043]: I1125 09:47:54.852024 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-s57wr_8cb5a8c6-ad9b-4b36-8766-e67dd27797f7/ovsdb-server/0.log" Nov 25 09:47:55 crc kubenswrapper[5043]: I1125 09:47:55.086954 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-pvbbc_4bd061b9-bc56-4e7f-b7eb-d12486d15712/ovn-controller/0.log" Nov 25 09:47:55 crc kubenswrapper[5043]: I1125 09:47:55.346321 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8a6dd4c5-d75f-4622-ba5e-1da7bfebca23/openstack-network-exporter/0.log" Nov 25 09:47:55 crc kubenswrapper[5043]: I1125 09:47:55.394664 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ds447_cb7b0a12-26a0-4b9f-8da0-0b95c5736e3e/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:47:55 crc kubenswrapper[5043]: I1125 09:47:55.538055 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8a6dd4c5-d75f-4622-ba5e-1da7bfebca23/ovn-northd/0.log" Nov 25 09:47:55 crc kubenswrapper[5043]: I1125 09:47:55.622368 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_568f6d22-7338-4a78-83ac-79125bd64fb9/openstack-network-exporter/0.log" Nov 25 09:47:55 crc kubenswrapper[5043]: I1125 09:47:55.785051 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_568f6d22-7338-4a78-83ac-79125bd64fb9/ovsdbserver-nb/0.log" Nov 25 09:47:55 crc kubenswrapper[5043]: I1125 
09:47:55.806918 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_610ebd16-bde0-4b4b-acf4-6d15e0324fd6/openstack-network-exporter/0.log" Nov 25 09:47:56 crc kubenswrapper[5043]: I1125 09:47:56.075696 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_610ebd16-bde0-4b4b-acf4-6d15e0324fd6/ovsdbserver-sb/0.log" Nov 25 09:47:56 crc kubenswrapper[5043]: I1125 09:47:56.458414 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_990680d0-bb9d-44b9-a67a-2af274498f7c/nova-metadata-metadata/0.log" Nov 25 09:47:56 crc kubenswrapper[5043]: I1125 09:47:56.640160 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_96b0381f-3d56-49b8-8a21-0b8c1bd593c2/setup-container/0.log" Nov 25 09:47:56 crc kubenswrapper[5043]: I1125 09:47:56.746242 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-78c76fd9c4-8nvkz_8ae8142b-9631-41b8-94ea-cad294cf0fbf/placement-api/0.log" Nov 25 09:47:56 crc kubenswrapper[5043]: I1125 09:47:56.862308 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_96b0381f-3d56-49b8-8a21-0b8c1bd593c2/setup-container/0.log" Nov 25 09:47:56 crc kubenswrapper[5043]: I1125 09:47:56.929087 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_96b0381f-3d56-49b8-8a21-0b8c1bd593c2/rabbitmq/0.log" Nov 25 09:47:57 crc kubenswrapper[5043]: I1125 09:47:57.116368 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-78c76fd9c4-8nvkz_8ae8142b-9631-41b8-94ea-cad294cf0fbf/placement-log/0.log" Nov 25 09:47:57 crc kubenswrapper[5043]: I1125 09:47:57.127037 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_edfb7fa8-5582-4faa-9cb2-fbdfffa12d18/setup-container/0.log" Nov 25 09:47:57 crc kubenswrapper[5043]: I1125 09:47:57.352164 5043 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_edfb7fa8-5582-4faa-9cb2-fbdfffa12d18/setup-container/0.log" Nov 25 09:47:57 crc kubenswrapper[5043]: I1125 09:47:57.360826 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_edfb7fa8-5582-4faa-9cb2-fbdfffa12d18/rabbitmq/0.log" Nov 25 09:47:57 crc kubenswrapper[5043]: I1125 09:47:57.388052 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-44vcq_dd890a18-8d10-41bb-bd31-5e10dc9c3752/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:47:57 crc kubenswrapper[5043]: I1125 09:47:57.649046 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-fqcs5_ff6ea425-8f08-4513-a444-ff524369c066/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:47:57 crc kubenswrapper[5043]: I1125 09:47:57.770668 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-kd7gq_5552c355-eb5a-4242-b79d-f9e1962c31f1/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:47:57 crc kubenswrapper[5043]: I1125 09:47:57.932752 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-khlgq_63717215-03b7-4e3c-9224-004f5e3b8cfe/ssh-known-hosts-edpm-deployment/0.log" Nov 25 09:47:58 crc kubenswrapper[5043]: I1125 09:47:58.201119 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s01-single-test_78329cf6-223a-4efb-9b86-1bc180f80cb1/tempest-tests-tempest-tests-runner/0.log" Nov 25 09:47:58 crc kubenswrapper[5043]: I1125 09:47:58.205308 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s00-full_6515f5fe-fd1f-4786-8374-8af7b394831b/tempest-tests-tempest-tests-runner/0.log" Nov 25 09:47:58 crc kubenswrapper[5043]: I1125 
09:47:58.445319 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-ansibletest-ansibletest-ansibletest_028859ac-e6df-4f39-bd2c-8b884c7c378e/test-operator-logs-container/0.log" Nov 25 09:47:58 crc kubenswrapper[5043]: I1125 09:47:58.458368 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-horizontest-horizontest-tests-horizontest_04e243c7-381a-4239-b3db-881eed1db744/test-operator-logs-container/0.log" Nov 25 09:47:58 crc kubenswrapper[5043]: I1125 09:47:58.675510 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_64c84afc-13c0-4c4e-a82d-e3c9f7014388/test-operator-logs-container/0.log" Nov 25 09:47:58 crc kubenswrapper[5043]: I1125 09:47:58.814746 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tobiko-tobiko-tests-tobiko_9a0d0afc-d78f-4156-8d03-e50b825c0cd0/test-operator-logs-container/0.log" Nov 25 09:47:59 crc kubenswrapper[5043]: I1125 09:47:59.065342 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tobiko-tests-tobiko-s00-podified-functional_13b66c56-5aa0-42ff-a574-26ec881f2e64/tobiko-tests-tobiko/0.log" Nov 25 09:47:59 crc kubenswrapper[5043]: I1125 09:47:59.196197 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tobiko-tests-tobiko-s01-sanity_d53359ac-2451-479a-bd73-bec83fc39a47/tobiko-tests-tobiko/0.log" Nov 25 09:47:59 crc kubenswrapper[5043]: I1125 09:47:59.309205 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-mwjk5_f22dd652-56fe-432d-a66f-806586c1c352/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 09:48:03 crc kubenswrapper[5043]: I1125 09:48:03.831149 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_3077d275-063c-4a4d-97bf-b1b006e32f6f/memcached/0.log" Nov 25 
09:48:17 crc kubenswrapper[5043]: I1125 09:48:17.276637 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:48:17 crc kubenswrapper[5043]: I1125 09:48:17.277226 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:48:23 crc kubenswrapper[5043]: I1125 09:48:23.706342 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-dtcj4_d9a368e6-f4bb-4896-9a2d-f7ceed65e933/manager/3.log" Nov 25 09:48:23 crc kubenswrapper[5043]: I1125 09:48:23.707503 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-dtcj4_d9a368e6-f4bb-4896-9a2d-f7ceed65e933/kube-rbac-proxy/0.log" Nov 25 09:48:23 crc kubenswrapper[5043]: I1125 09:48:23.823980 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-dtcj4_d9a368e6-f4bb-4896-9a2d-f7ceed65e933/manager/2.log" Nov 25 09:48:23 crc kubenswrapper[5043]: I1125 09:48:23.904022 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb_ee5be134-f74b-42f1-b99e-7ec2690c99c4/util/0.log" Nov 25 09:48:24 crc kubenswrapper[5043]: I1125 09:48:24.121280 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb_ee5be134-f74b-42f1-b99e-7ec2690c99c4/pull/0.log" Nov 25 09:48:24 crc kubenswrapper[5043]: I1125 09:48:24.122134 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb_ee5be134-f74b-42f1-b99e-7ec2690c99c4/util/0.log" Nov 25 09:48:24 crc kubenswrapper[5043]: I1125 09:48:24.161548 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb_ee5be134-f74b-42f1-b99e-7ec2690c99c4/pull/0.log" Nov 25 09:48:24 crc kubenswrapper[5043]: I1125 09:48:24.323270 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb_ee5be134-f74b-42f1-b99e-7ec2690c99c4/pull/0.log" Nov 25 09:48:24 crc kubenswrapper[5043]: I1125 09:48:24.326810 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb_ee5be134-f74b-42f1-b99e-7ec2690c99c4/util/0.log" Nov 25 09:48:24 crc kubenswrapper[5043]: I1125 09:48:24.333794 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f44nlb_ee5be134-f74b-42f1-b99e-7ec2690c99c4/extract/0.log" Nov 25 09:48:24 crc kubenswrapper[5043]: I1125 09:48:24.519625 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-pnq4k_cdc9a1bf-b6d9-4a36-bcf8-55f87525da45/manager/3.log" Nov 25 09:48:24 crc kubenswrapper[5043]: I1125 09:48:24.543865 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-pnq4k_cdc9a1bf-b6d9-4a36-bcf8-55f87525da45/kube-rbac-proxy/0.log" Nov 25 09:48:24 crc 
kubenswrapper[5043]: I1125 09:48:24.546008 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-pnq4k_cdc9a1bf-b6d9-4a36-bcf8-55f87525da45/manager/2.log" Nov 25 09:48:24 crc kubenswrapper[5043]: I1125 09:48:24.718813 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-5mp5h_e020a857-3730-44f5-8e98-3e59868fbde6/kube-rbac-proxy/0.log" Nov 25 09:48:24 crc kubenswrapper[5043]: I1125 09:48:24.721080 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-5mp5h_e020a857-3730-44f5-8e98-3e59868fbde6/manager/3.log" Nov 25 09:48:24 crc kubenswrapper[5043]: I1125 09:48:24.757867 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-5mp5h_e020a857-3730-44f5-8e98-3e59868fbde6/manager/2.log" Nov 25 09:48:24 crc kubenswrapper[5043]: I1125 09:48:24.937672 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-nnpzz_e5c62587-28b4-4a1e-8b73-ee9624ca7163/manager/3.log" Nov 25 09:48:24 crc kubenswrapper[5043]: I1125 09:48:24.958794 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-nnpzz_e5c62587-28b4-4a1e-8b73-ee9624ca7163/kube-rbac-proxy/0.log" Nov 25 09:48:25 crc kubenswrapper[5043]: I1125 09:48:25.029989 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-nnpzz_e5c62587-28b4-4a1e-8b73-ee9624ca7163/manager/2.log" Nov 25 09:48:25 crc kubenswrapper[5043]: I1125 09:48:25.136484 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-l77gb_8a93d5b1-742c-4a37-94ef-a60ffb008520/manager/3.log" Nov 25 09:48:25 crc kubenswrapper[5043]: I1125 09:48:25.166285 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-l77gb_8a93d5b1-742c-4a37-94ef-a60ffb008520/kube-rbac-proxy/0.log" Nov 25 09:48:25 crc kubenswrapper[5043]: I1125 09:48:25.194286 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-l77gb_8a93d5b1-742c-4a37-94ef-a60ffb008520/manager/2.log" Nov 25 09:48:25 crc kubenswrapper[5043]: I1125 09:48:25.357122 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-wmkmw_c20803a7-e9a9-441a-9e61-84673f3c02e8/manager/3.log" Nov 25 09:48:25 crc kubenswrapper[5043]: I1125 09:48:25.362309 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-wmkmw_c20803a7-e9a9-441a-9e61-84673f3c02e8/kube-rbac-proxy/0.log" Nov 25 09:48:25 crc kubenswrapper[5043]: I1125 09:48:25.395342 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-wmkmw_c20803a7-e9a9-441a-9e61-84673f3c02e8/manager/2.log" Nov 25 09:48:25 crc kubenswrapper[5043]: I1125 09:48:25.548221 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-x8q8x_92e57762-522f-4a9d-8b03-732ba4dad5c1/kube-rbac-proxy/0.log" Nov 25 09:48:25 crc kubenswrapper[5043]: I1125 09:48:25.563394 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-x8q8x_92e57762-522f-4a9d-8b03-732ba4dad5c1/manager/2.log" Nov 25 09:48:25 crc kubenswrapper[5043]: I1125 09:48:25.645339 5043 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-x8q8x_92e57762-522f-4a9d-8b03-732ba4dad5c1/manager/1.log" Nov 25 09:48:25 crc kubenswrapper[5043]: I1125 09:48:25.821724 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-sgz96_b7005e58-64d2-470b-a3e7-22b67b7fbfb3/manager/3.log" Nov 25 09:48:25 crc kubenswrapper[5043]: I1125 09:48:25.867245 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-sgz96_b7005e58-64d2-470b-a3e7-22b67b7fbfb3/manager/2.log" Nov 25 09:48:25 crc kubenswrapper[5043]: I1125 09:48:25.870077 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-sgz96_b7005e58-64d2-470b-a3e7-22b67b7fbfb3/kube-rbac-proxy/0.log" Nov 25 09:48:25 crc kubenswrapper[5043]: E1125 09:48:25.962970 5043 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 09:48:26 crc kubenswrapper[5043]: I1125 09:48:26.017500 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-gvwj8_ff874d31-8e5a-4c0b-8f9c-e63513a00483/kube-rbac-proxy/0.log" Nov 25 09:48:26 crc kubenswrapper[5043]: I1125 09:48:26.135612 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-gvwj8_ff874d31-8e5a-4c0b-8f9c-e63513a00483/manager/2.log" Nov 25 09:48:26 crc kubenswrapper[5043]: I1125 09:48:26.137896 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-gvwj8_ff874d31-8e5a-4c0b-8f9c-e63513a00483/manager/3.log" Nov 25 09:48:26 crc kubenswrapper[5043]: I1125 09:48:26.233171 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-xx8rb_c924fa47-53fb-4edc-8214-667ba1858ca2/kube-rbac-proxy/0.log" Nov 25 09:48:26 crc kubenswrapper[5043]: I1125 09:48:26.372200 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-xx8rb_c924fa47-53fb-4edc-8214-667ba1858ca2/manager/3.log" Nov 25 09:48:26 crc kubenswrapper[5043]: I1125 09:48:26.375791 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-xx8rb_c924fa47-53fb-4edc-8214-667ba1858ca2/manager/2.log" Nov 25 09:48:26 crc kubenswrapper[5043]: I1125 09:48:26.480457 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-tdzr2_9c9e4471-0205-478a-8717-be36a19d2a02/kube-rbac-proxy/0.log" Nov 25 09:48:26 crc kubenswrapper[5043]: I1125 09:48:26.557850 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-tdzr2_9c9e4471-0205-478a-8717-be36a19d2a02/manager/2.log" Nov 25 09:48:26 crc kubenswrapper[5043]: I1125 09:48:26.663559 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-tdzr2_9c9e4471-0205-478a-8717-be36a19d2a02/manager/1.log" Nov 25 09:48:26 crc kubenswrapper[5043]: I1125 09:48:26.700893 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-l5vz2_bb800a2f-1864-47be-931b-7b99f7c7354f/kube-rbac-proxy/0.log" Nov 25 09:48:26 crc kubenswrapper[5043]: I1125 09:48:26.873756 5043 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-l5vz2_bb800a2f-1864-47be-931b-7b99f7c7354f/manager/2.log" Nov 25 09:48:26 crc kubenswrapper[5043]: I1125 09:48:26.977361 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-l5vz2_bb800a2f-1864-47be-931b-7b99f7c7354f/manager/1.log" Nov 25 09:48:27 crc kubenswrapper[5043]: I1125 09:48:27.015053 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-m9bmz_a3d7b5dc-2ced-4ac6-bdad-cd86342616a8/kube-rbac-proxy/0.log" Nov 25 09:48:27 crc kubenswrapper[5043]: I1125 09:48:27.125418 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-m9bmz_a3d7b5dc-2ced-4ac6-bdad-cd86342616a8/manager/2.log" Nov 25 09:48:27 crc kubenswrapper[5043]: I1125 09:48:27.217922 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-m9bmz_a3d7b5dc-2ced-4ac6-bdad-cd86342616a8/manager/1.log" Nov 25 09:48:27 crc kubenswrapper[5043]: I1125 09:48:27.258163 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-dxd2x_020c7247-0b68-419b-b97f-f7b0ea800142/kube-rbac-proxy/0.log" Nov 25 09:48:27 crc kubenswrapper[5043]: I1125 09:48:27.392382 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-dxd2x_020c7247-0b68-419b-b97f-f7b0ea800142/manager/2.log" Nov 25 09:48:27 crc kubenswrapper[5043]: I1125 09:48:27.427713 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-dxd2x_020c7247-0b68-419b-b97f-f7b0ea800142/manager/1.log" Nov 25 09:48:27 crc kubenswrapper[5043]: I1125 
09:48:27.498207 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-b58f89467-h9jgk_c0627b3a-26de-453c-ab7f-de79dae6c2fc/kube-rbac-proxy/0.log" Nov 25 09:48:27 crc kubenswrapper[5043]: I1125 09:48:27.573047 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-b58f89467-h9jgk_c0627b3a-26de-453c-ab7f-de79dae6c2fc/manager/1.log" Nov 25 09:48:27 crc kubenswrapper[5043]: I1125 09:48:27.623578 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-b58f89467-h9jgk_c0627b3a-26de-453c-ab7f-de79dae6c2fc/manager/0.log" Nov 25 09:48:27 crc kubenswrapper[5043]: I1125 09:48:27.709756 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7cd5954d9-5zklz_f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4/manager/2.log" Nov 25 09:48:27 crc kubenswrapper[5043]: I1125 09:48:27.727583 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7cd5954d9-5zklz_f2ad9d27-1ade-4508-a7a3-cdaa21b9aae4/manager/1.log" Nov 25 09:48:27 crc kubenswrapper[5043]: I1125 09:48:27.865271 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-54c548f75b-mk6ml_d845a43d-ee06-454f-b68d-cdb949cecffe/operator/1.log" Nov 25 09:48:27 crc kubenswrapper[5043]: I1125 09:48:27.882895 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-54c548f75b-mk6ml_d845a43d-ee06-454f-b68d-cdb949cecffe/operator/0.log" Nov 25 09:48:28 crc kubenswrapper[5043]: I1125 09:48:28.040460 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-index-vl25g_d14cb4f9-dc65-4999-833a-475d3f735715/registry-server/0.log" Nov 25 09:48:28 crc kubenswrapper[5043]: I1125 09:48:28.072856 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-d5ffq_d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2/kube-rbac-proxy/0.log" Nov 25 09:48:28 crc kubenswrapper[5043]: I1125 09:48:28.117391 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-d5ffq_d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2/manager/2.log" Nov 25 09:48:28 crc kubenswrapper[5043]: I1125 09:48:28.174209 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-d5ffq_d4ff23e0-c2f3-4185-a7aa-df0f7e3596d2/manager/1.log" Nov 25 09:48:28 crc kubenswrapper[5043]: I1125 09:48:28.260889 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-6w2db_869f93a1-d6e7-46ff-a60f-0e997412a2fa/kube-rbac-proxy/0.log" Nov 25 09:48:28 crc kubenswrapper[5043]: I1125 09:48:28.322900 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-6w2db_869f93a1-d6e7-46ff-a60f-0e997412a2fa/manager/2.log" Nov 25 09:48:28 crc kubenswrapper[5043]: I1125 09:48:28.342008 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-6w2db_869f93a1-d6e7-46ff-a60f-0e997412a2fa/manager/1.log" Nov 25 09:48:28 crc kubenswrapper[5043]: I1125 09:48:28.437626 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-fmplr_6411a018-19de-4fba-bf72-6dfd5bd2ce29/operator/3.log" Nov 25 09:48:28 crc kubenswrapper[5043]: I1125 09:48:28.484482 5043 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-fmplr_6411a018-19de-4fba-bf72-6dfd5bd2ce29/operator/2.log" Nov 25 09:48:28 crc kubenswrapper[5043]: I1125 09:48:28.566182 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-gmcsx_8ea2c827-762f-437d-ad30-a3568d7a4af1/kube-rbac-proxy/0.log" Nov 25 09:48:28 crc kubenswrapper[5043]: I1125 09:48:28.586799 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-gmcsx_8ea2c827-762f-437d-ad30-a3568d7a4af1/manager/2.log" Nov 25 09:48:28 crc kubenswrapper[5043]: I1125 09:48:28.652547 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-gmcsx_8ea2c827-762f-437d-ad30-a3568d7a4af1/manager/1.log" Nov 25 09:48:28 crc kubenswrapper[5043]: I1125 09:48:28.796922 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-mk7wm_d643e47d-246d-4551-a63c-9b9374e684b2/kube-rbac-proxy/0.log" Nov 25 09:48:28 crc kubenswrapper[5043]: I1125 09:48:28.813239 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-mk7wm_d643e47d-246d-4551-a63c-9b9374e684b2/manager/1.log" Nov 25 09:48:28 crc kubenswrapper[5043]: I1125 09:48:28.862239 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-mk7wm_d643e47d-246d-4551-a63c-9b9374e684b2/manager/2.log" Nov 25 09:48:28 crc kubenswrapper[5043]: I1125 09:48:28.951896 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-556c5c9c9c-82qgw_8cfc66d8-27da-4bce-9a5f-62a019bfd836/kube-rbac-proxy/0.log" Nov 25 09:48:29 crc kubenswrapper[5043]: I1125 
09:48:29.051393 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-556c5c9c9c-82qgw_8cfc66d8-27da-4bce-9a5f-62a019bfd836/manager/0.log" Nov 25 09:48:29 crc kubenswrapper[5043]: I1125 09:48:29.079926 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-556c5c9c9c-82qgw_8cfc66d8-27da-4bce-9a5f-62a019bfd836/manager/1.log" Nov 25 09:48:29 crc kubenswrapper[5043]: I1125 09:48:29.172874 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-54g5x_17e00d26-c8ad-4dfd-90df-8705b2cb2bde/kube-rbac-proxy/0.log" Nov 25 09:48:29 crc kubenswrapper[5043]: I1125 09:48:29.341237 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-54g5x_17e00d26-c8ad-4dfd-90df-8705b2cb2bde/manager/2.log" Nov 25 09:48:29 crc kubenswrapper[5043]: I1125 09:48:29.380699 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-54g5x_17e00d26-c8ad-4dfd-90df-8705b2cb2bde/manager/1.log" Nov 25 09:48:47 crc kubenswrapper[5043]: I1125 09:48:47.277326 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:48:47 crc kubenswrapper[5043]: I1125 09:48:47.277850 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:48:47 crc kubenswrapper[5043]: 
I1125 09:48:47.277905 5043 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" Nov 25 09:48:47 crc kubenswrapper[5043]: I1125 09:48:47.278735 5043 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70"} pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 09:48:47 crc kubenswrapper[5043]: I1125 09:48:47.278796 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" containerID="cri-o://03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" gracePeriod=600 Nov 25 09:48:47 crc kubenswrapper[5043]: E1125 09:48:47.442699 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:48:48 crc kubenswrapper[5043]: I1125 09:48:48.322772 5043 generic.go:334] "Generic (PLEG): container finished" podID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" exitCode=0 Nov 25 09:48:48 crc kubenswrapper[5043]: I1125 09:48:48.322820 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" 
event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerDied","Data":"03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70"} Nov 25 09:48:48 crc kubenswrapper[5043]: I1125 09:48:48.322854 5043 scope.go:117] "RemoveContainer" containerID="a130716f04cf2be78c4090558741c401be0edb499b64015e99218420408c0fd5" Nov 25 09:48:48 crc kubenswrapper[5043]: I1125 09:48:48.323506 5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:48:48 crc kubenswrapper[5043]: E1125 09:48:48.323896 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:48:49 crc kubenswrapper[5043]: I1125 09:48:49.186852 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-gnr9c_f110819e-9e33-4cf3-85b0-b92eaaaa223b/control-plane-machine-set-operator/0.log" Nov 25 09:48:49 crc kubenswrapper[5043]: I1125 09:48:49.345171 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-l89zw_d00aa552-e700-4bec-9818-3084ac601a92/kube-rbac-proxy/0.log" Nov 25 09:48:49 crc kubenswrapper[5043]: I1125 09:48:49.434482 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-l89zw_d00aa552-e700-4bec-9818-3084ac601a92/machine-api-operator/0.log" Nov 25 09:49:00 crc kubenswrapper[5043]: I1125 09:49:00.963396 5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:49:00 crc kubenswrapper[5043]: 
E1125 09:49:00.964282 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:49:03 crc kubenswrapper[5043]: I1125 09:49:03.567017 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-42sd8_00a5ef16-fb0d-4b68-b3aa-92411430aebd/cert-manager-controller/0.log" Nov 25 09:49:03 crc kubenswrapper[5043]: I1125 09:49:03.721747 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-9lxcm_94518a18-995b-490b-8099-917d5e510ad0/cert-manager-cainjector/0.log" Nov 25 09:49:03 crc kubenswrapper[5043]: I1125 09:49:03.723070 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-9lxcm_94518a18-995b-490b-8099-917d5e510ad0/cert-manager-cainjector/1.log" Nov 25 09:49:03 crc kubenswrapper[5043]: I1125 09:49:03.764229 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-qn5hq_6c845b4b-10ec-41bc-8482-14da0da21a03/cert-manager-webhook/0.log" Nov 25 09:49:11 crc kubenswrapper[5043]: I1125 09:49:11.963832 5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:49:11 crc kubenswrapper[5043]: E1125 09:49:11.964735 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:49:15 crc kubenswrapper[5043]: I1125 09:49:15.741063 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-8rr2k_484bc3f6-cd90-415a-99d9-0496929f73f7/nmstate-console-plugin/0.log" Nov 25 09:49:15 crc kubenswrapper[5043]: I1125 09:49:15.904226 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-ddx45_eca9619f-360b-466d-9413-cce43ac0e5de/nmstate-handler/0.log" Nov 25 09:49:15 crc kubenswrapper[5043]: I1125 09:49:15.932615 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-jcbrg_9cece50c-ecd0-4349-8c5f-26d814c988c0/kube-rbac-proxy/0.log" Nov 25 09:49:15 crc kubenswrapper[5043]: I1125 09:49:15.983581 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-jcbrg_9cece50c-ecd0-4349-8c5f-26d814c988c0/nmstate-metrics/0.log" Nov 25 09:49:16 crc kubenswrapper[5043]: I1125 09:49:16.093558 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-q7kcp_c86d9095-02e1-450f-9d00-b448049035b1/nmstate-operator/0.log" Nov 25 09:49:16 crc kubenswrapper[5043]: I1125 09:49:16.234875 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-rk77g_cfab8bc8-7fd5-4a73-a58a-e92b3ad46845/nmstate-webhook/0.log" Nov 25 09:49:25 crc kubenswrapper[5043]: I1125 09:49:25.963146 5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:49:25 crc kubenswrapper[5043]: E1125 09:49:25.964143 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:49:31 crc kubenswrapper[5043]: I1125 09:49:31.186633 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-fkqdz_e214977c-6456-4990-b061-b88f5a127836/kube-rbac-proxy/0.log" Nov 25 09:49:31 crc kubenswrapper[5043]: I1125 09:49:31.400748 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-fkqdz_e214977c-6456-4990-b061-b88f5a127836/controller/0.log" Nov 25 09:49:31 crc kubenswrapper[5043]: I1125 09:49:31.448584 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/cp-frr-files/0.log" Nov 25 09:49:31 crc kubenswrapper[5043]: I1125 09:49:31.611877 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/cp-reloader/0.log" Nov 25 09:49:31 crc kubenswrapper[5043]: I1125 09:49:31.631952 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/cp-metrics/0.log" Nov 25 09:49:31 crc kubenswrapper[5043]: I1125 09:49:31.632063 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/cp-frr-files/0.log" Nov 25 09:49:31 crc kubenswrapper[5043]: I1125 09:49:31.675636 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/cp-reloader/0.log" Nov 25 09:49:31 crc kubenswrapper[5043]: I1125 09:49:31.890015 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/cp-metrics/0.log" Nov 25 09:49:31 crc kubenswrapper[5043]: I1125 
09:49:31.922553 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/cp-frr-files/0.log" Nov 25 09:49:31 crc kubenswrapper[5043]: I1125 09:49:31.932083 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/cp-metrics/0.log" Nov 25 09:49:31 crc kubenswrapper[5043]: I1125 09:49:31.938859 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/cp-reloader/0.log" Nov 25 09:49:32 crc kubenswrapper[5043]: I1125 09:49:32.124940 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/cp-reloader/0.log" Nov 25 09:49:32 crc kubenswrapper[5043]: I1125 09:49:32.126660 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/cp-metrics/0.log" Nov 25 09:49:32 crc kubenswrapper[5043]: I1125 09:49:32.127219 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/cp-frr-files/0.log" Nov 25 09:49:32 crc kubenswrapper[5043]: I1125 09:49:32.173984 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/controller/0.log" Nov 25 09:49:32 crc kubenswrapper[5043]: I1125 09:49:32.365334 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/kube-rbac-proxy/0.log" Nov 25 09:49:32 crc kubenswrapper[5043]: I1125 09:49:32.366564 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/frr-metrics/0.log" Nov 25 09:49:32 crc kubenswrapper[5043]: I1125 09:49:32.424252 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/kube-rbac-proxy-frr/0.log" Nov 25 09:49:33 crc kubenswrapper[5043]: I1125 09:49:33.174755 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-4mwjl_4d08af94-ced7-41f2-a5da-4a5ab09436bb/frr-k8s-webhook-server/0.log" Nov 25 09:49:33 crc kubenswrapper[5043]: I1125 09:49:33.193963 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/reloader/0.log" Nov 25 09:49:33 crc kubenswrapper[5043]: I1125 09:49:33.417094 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-85bdd6cc97-lrkkr_cdbab2e0-494c-4845-a500-88b26934f1c7/manager/3.log" Nov 25 09:49:33 crc kubenswrapper[5043]: I1125 09:49:33.498190 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-85bdd6cc97-lrkkr_cdbab2e0-494c-4845-a500-88b26934f1c7/manager/2.log" Nov 25 09:49:33 crc kubenswrapper[5043]: I1125 09:49:33.625377 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-687d746769-dbszt_d592d149-d73b-4db0-a83f-81fdb776420a/webhook-server/0.log" Nov 25 09:49:33 crc kubenswrapper[5043]: I1125 09:49:33.872336 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8sqcm_f6fde8c1-7722-4081-ae09-6f0cf5af35c4/kube-rbac-proxy/0.log" Nov 25 09:49:34 crc kubenswrapper[5043]: I1125 09:49:34.606374 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8sqcm_f6fde8c1-7722-4081-ae09-6f0cf5af35c4/speaker/0.log" Nov 25 09:49:34 crc kubenswrapper[5043]: I1125 09:49:34.858507 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdt6k_781aa9bd-6e71-452c-8932-758f4c26cb40/frr/0.log" Nov 25 09:49:36 crc kubenswrapper[5043]: I1125 
09:49:36.970476 5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:49:36 crc kubenswrapper[5043]: E1125 09:49:36.971836 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:49:46 crc kubenswrapper[5043]: E1125 09:49:46.969834 5043 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 09:49:47 crc kubenswrapper[5043]: I1125 09:49:47.211693 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v_51d2d2e9-ab00-458f-b284-965e99abbdb3/util/0.log" Nov 25 09:49:47 crc kubenswrapper[5043]: I1125 09:49:47.426935 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v_51d2d2e9-ab00-458f-b284-965e99abbdb3/util/0.log" Nov 25 09:49:47 crc kubenswrapper[5043]: I1125 09:49:47.463837 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v_51d2d2e9-ab00-458f-b284-965e99abbdb3/pull/0.log" Nov 25 09:49:47 crc kubenswrapper[5043]: I1125 09:49:47.506322 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v_51d2d2e9-ab00-458f-b284-965e99abbdb3/pull/0.log" Nov 25 
09:49:47 crc kubenswrapper[5043]: I1125 09:49:47.720644 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v_51d2d2e9-ab00-458f-b284-965e99abbdb3/util/0.log" Nov 25 09:49:47 crc kubenswrapper[5043]: I1125 09:49:47.725196 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v_51d2d2e9-ab00-458f-b284-965e99abbdb3/extract/0.log" Nov 25 09:49:47 crc kubenswrapper[5043]: I1125 09:49:47.732494 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ekn52v_51d2d2e9-ab00-458f-b284-965e99abbdb3/pull/0.log" Nov 25 09:49:48 crc kubenswrapper[5043]: I1125 09:49:48.056427 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bn6tl_e0b8135e-a7a5-462c-9ba8-0d36b6778807/extract-utilities/0.log" Nov 25 09:49:48 crc kubenswrapper[5043]: I1125 09:49:48.243249 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bn6tl_e0b8135e-a7a5-462c-9ba8-0d36b6778807/extract-content/0.log" Nov 25 09:49:48 crc kubenswrapper[5043]: I1125 09:49:48.264365 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bn6tl_e0b8135e-a7a5-462c-9ba8-0d36b6778807/extract-utilities/0.log" Nov 25 09:49:48 crc kubenswrapper[5043]: I1125 09:49:48.280373 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bn6tl_e0b8135e-a7a5-462c-9ba8-0d36b6778807/extract-content/0.log" Nov 25 09:49:48 crc kubenswrapper[5043]: I1125 09:49:48.461070 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bn6tl_e0b8135e-a7a5-462c-9ba8-0d36b6778807/extract-content/0.log" Nov 25 09:49:48 crc 
kubenswrapper[5043]: I1125 09:49:48.494094 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bn6tl_e0b8135e-a7a5-462c-9ba8-0d36b6778807/extract-utilities/0.log" Nov 25 09:49:48 crc kubenswrapper[5043]: I1125 09:49:48.690913 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6sqfs_c5441050-a90f-49f4-89a4-c40076857f5e/extract-utilities/0.log" Nov 25 09:49:48 crc kubenswrapper[5043]: I1125 09:49:48.898825 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6sqfs_c5441050-a90f-49f4-89a4-c40076857f5e/extract-content/0.log" Nov 25 09:49:48 crc kubenswrapper[5043]: I1125 09:49:48.962504 5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:49:48 crc kubenswrapper[5043]: E1125 09:49:48.962812 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:49:48 crc kubenswrapper[5043]: I1125 09:49:48.969093 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bn6tl_e0b8135e-a7a5-462c-9ba8-0d36b6778807/registry-server/0.log" Nov 25 09:49:49 crc kubenswrapper[5043]: I1125 09:49:49.009208 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6sqfs_c5441050-a90f-49f4-89a4-c40076857f5e/extract-content/0.log" Nov 25 09:49:49 crc kubenswrapper[5043]: I1125 09:49:49.020847 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-6sqfs_c5441050-a90f-49f4-89a4-c40076857f5e/extract-utilities/0.log" Nov 25 09:49:49 crc kubenswrapper[5043]: I1125 09:49:49.174216 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6sqfs_c5441050-a90f-49f4-89a4-c40076857f5e/extract-content/0.log" Nov 25 09:49:49 crc kubenswrapper[5043]: I1125 09:49:49.190506 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6sqfs_c5441050-a90f-49f4-89a4-c40076857f5e/extract-utilities/0.log" Nov 25 09:49:49 crc kubenswrapper[5043]: I1125 09:49:49.406043 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6_75d56d2d-27c2-4a6d-9f9f-3975af3a6bed/util/0.log" Nov 25 09:49:49 crc kubenswrapper[5043]: I1125 09:49:49.720804 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6_75d56d2d-27c2-4a6d-9f9f-3975af3a6bed/pull/0.log" Nov 25 09:49:49 crc kubenswrapper[5043]: I1125 09:49:49.724046 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6_75d56d2d-27c2-4a6d-9f9f-3975af3a6bed/util/0.log" Nov 25 09:49:49 crc kubenswrapper[5043]: I1125 09:49:49.732003 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6_75d56d2d-27c2-4a6d-9f9f-3975af3a6bed/pull/0.log" Nov 25 09:49:49 crc kubenswrapper[5043]: I1125 09:49:49.886750 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6sqfs_c5441050-a90f-49f4-89a4-c40076857f5e/registry-server/0.log" Nov 25 09:49:49 crc kubenswrapper[5043]: I1125 09:49:49.933111 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6_75d56d2d-27c2-4a6d-9f9f-3975af3a6bed/extract/0.log" Nov 25 09:49:50 crc kubenswrapper[5043]: I1125 09:49:50.017493 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6_75d56d2d-27c2-4a6d-9f9f-3975af3a6bed/util/0.log" Nov 25 09:49:50 crc kubenswrapper[5043]: I1125 09:49:50.017886 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6ctlr6_75d56d2d-27c2-4a6d-9f9f-3975af3a6bed/pull/0.log" Nov 25 09:49:50 crc kubenswrapper[5043]: I1125 09:49:50.178899 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-86pjb_b3387cfc-ac92-4b17-b153-e30513638741/marketplace-operator/0.log" Nov 25 09:49:50 crc kubenswrapper[5043]: I1125 09:49:50.276845 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2t8s_72da6310-6558-476b-8cb4-32e7b6983b67/extract-utilities/0.log" Nov 25 09:49:50 crc kubenswrapper[5043]: I1125 09:49:50.474689 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2t8s_72da6310-6558-476b-8cb4-32e7b6983b67/extract-utilities/0.log" Nov 25 09:49:50 crc kubenswrapper[5043]: I1125 09:49:50.487394 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2t8s_72da6310-6558-476b-8cb4-32e7b6983b67/extract-content/0.log" Nov 25 09:49:50 crc kubenswrapper[5043]: I1125 09:49:50.490518 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2t8s_72da6310-6558-476b-8cb4-32e7b6983b67/extract-content/0.log" Nov 25 09:49:50 crc kubenswrapper[5043]: I1125 09:49:50.714328 5043 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2t8s_72da6310-6558-476b-8cb4-32e7b6983b67/extract-content/0.log" Nov 25 09:49:50 crc kubenswrapper[5043]: I1125 09:49:50.738475 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2t8s_72da6310-6558-476b-8cb4-32e7b6983b67/extract-utilities/0.log" Nov 25 09:49:50 crc kubenswrapper[5043]: I1125 09:49:50.994213 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tm5dw_ff7e436c-9335-4145-8c68-31dd3da7d4ed/extract-utilities/0.log" Nov 25 09:49:51 crc kubenswrapper[5043]: I1125 09:49:51.003408 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2t8s_72da6310-6558-476b-8cb4-32e7b6983b67/registry-server/0.log" Nov 25 09:49:51 crc kubenswrapper[5043]: I1125 09:49:51.273186 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tm5dw_ff7e436c-9335-4145-8c68-31dd3da7d4ed/extract-content/0.log" Nov 25 09:49:51 crc kubenswrapper[5043]: I1125 09:49:51.288963 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tm5dw_ff7e436c-9335-4145-8c68-31dd3da7d4ed/extract-utilities/0.log" Nov 25 09:49:51 crc kubenswrapper[5043]: I1125 09:49:51.340318 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tm5dw_ff7e436c-9335-4145-8c68-31dd3da7d4ed/extract-content/0.log" Nov 25 09:49:51 crc kubenswrapper[5043]: I1125 09:49:51.526845 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tm5dw_ff7e436c-9335-4145-8c68-31dd3da7d4ed/extract-content/0.log" Nov 25 09:49:51 crc kubenswrapper[5043]: I1125 09:49:51.526873 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tm5dw_ff7e436c-9335-4145-8c68-31dd3da7d4ed/extract-utilities/0.log" Nov 
25 09:49:52 crc kubenswrapper[5043]: I1125 09:49:52.585881 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-tm5dw_ff7e436c-9335-4145-8c68-31dd3da7d4ed/registry-server/0.log" Nov 25 09:50:02 crc kubenswrapper[5043]: I1125 09:50:02.962569 5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:50:02 crc kubenswrapper[5043]: E1125 09:50:02.963459 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:50:14 crc kubenswrapper[5043]: I1125 09:50:14.964212 5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:50:14 crc kubenswrapper[5043]: E1125 09:50:14.964995 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:50:28 crc kubenswrapper[5043]: I1125 09:50:28.963148 5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:50:28 crc kubenswrapper[5043]: E1125 09:50:28.964028 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:50:43 crc kubenswrapper[5043]: I1125 09:50:43.963800 5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:50:43 crc kubenswrapper[5043]: E1125 09:50:43.964430 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:50:55 crc kubenswrapper[5043]: I1125 09:50:55.962795 5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:50:55 crc kubenswrapper[5043]: E1125 09:50:55.963746 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:50:56 crc kubenswrapper[5043]: E1125 09:50:56.976927 5043 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 09:51:07 crc kubenswrapper[5043]: I1125 09:51:07.064121 5043 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jntzn"] Nov 25 09:51:07 crc kubenswrapper[5043]: E1125 09:51:07.065395 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b1edf2-d515-45d0-8c61-e7ee07fcde20" containerName="container-00" Nov 25 09:51:07 crc kubenswrapper[5043]: I1125 09:51:07.065415 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b1edf2-d515-45d0-8c61-e7ee07fcde20" containerName="container-00" Nov 25 09:51:07 crc kubenswrapper[5043]: I1125 09:51:07.065799 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b1edf2-d515-45d0-8c61-e7ee07fcde20" containerName="container-00" Nov 25 09:51:07 crc kubenswrapper[5043]: I1125 09:51:07.068370 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jntzn" Nov 25 09:51:07 crc kubenswrapper[5043]: I1125 09:51:07.074719 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jntzn"] Nov 25 09:51:07 crc kubenswrapper[5043]: I1125 09:51:07.215898 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cbafe6-deb3-4e58-b323-6754e89909ee-catalog-content\") pod \"redhat-operators-jntzn\" (UID: \"33cbafe6-deb3-4e58-b323-6754e89909ee\") " pod="openshift-marketplace/redhat-operators-jntzn" Nov 25 09:51:07 crc kubenswrapper[5043]: I1125 09:51:07.216047 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cbafe6-deb3-4e58-b323-6754e89909ee-utilities\") pod \"redhat-operators-jntzn\" (UID: \"33cbafe6-deb3-4e58-b323-6754e89909ee\") " pod="openshift-marketplace/redhat-operators-jntzn" Nov 25 09:51:07 crc kubenswrapper[5043]: I1125 09:51:07.216190 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-cw254\" (UniqueName: \"kubernetes.io/projected/33cbafe6-deb3-4e58-b323-6754e89909ee-kube-api-access-cw254\") pod \"redhat-operators-jntzn\" (UID: \"33cbafe6-deb3-4e58-b323-6754e89909ee\") " pod="openshift-marketplace/redhat-operators-jntzn" Nov 25 09:51:07 crc kubenswrapper[5043]: I1125 09:51:07.318090 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw254\" (UniqueName: \"kubernetes.io/projected/33cbafe6-deb3-4e58-b323-6754e89909ee-kube-api-access-cw254\") pod \"redhat-operators-jntzn\" (UID: \"33cbafe6-deb3-4e58-b323-6754e89909ee\") " pod="openshift-marketplace/redhat-operators-jntzn" Nov 25 09:51:07 crc kubenswrapper[5043]: I1125 09:51:07.318167 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cbafe6-deb3-4e58-b323-6754e89909ee-catalog-content\") pod \"redhat-operators-jntzn\" (UID: \"33cbafe6-deb3-4e58-b323-6754e89909ee\") " pod="openshift-marketplace/redhat-operators-jntzn" Nov 25 09:51:07 crc kubenswrapper[5043]: I1125 09:51:07.318255 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cbafe6-deb3-4e58-b323-6754e89909ee-utilities\") pod \"redhat-operators-jntzn\" (UID: \"33cbafe6-deb3-4e58-b323-6754e89909ee\") " pod="openshift-marketplace/redhat-operators-jntzn" Nov 25 09:51:07 crc kubenswrapper[5043]: I1125 09:51:07.318858 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cbafe6-deb3-4e58-b323-6754e89909ee-utilities\") pod \"redhat-operators-jntzn\" (UID: \"33cbafe6-deb3-4e58-b323-6754e89909ee\") " pod="openshift-marketplace/redhat-operators-jntzn" Nov 25 09:51:07 crc kubenswrapper[5043]: I1125 09:51:07.319125 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/33cbafe6-deb3-4e58-b323-6754e89909ee-catalog-content\") pod \"redhat-operators-jntzn\" (UID: \"33cbafe6-deb3-4e58-b323-6754e89909ee\") " pod="openshift-marketplace/redhat-operators-jntzn" Nov 25 09:51:07 crc kubenswrapper[5043]: I1125 09:51:07.353031 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw254\" (UniqueName: \"kubernetes.io/projected/33cbafe6-deb3-4e58-b323-6754e89909ee-kube-api-access-cw254\") pod \"redhat-operators-jntzn\" (UID: \"33cbafe6-deb3-4e58-b323-6754e89909ee\") " pod="openshift-marketplace/redhat-operators-jntzn" Nov 25 09:51:07 crc kubenswrapper[5043]: I1125 09:51:07.388743 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jntzn" Nov 25 09:51:07 crc kubenswrapper[5043]: I1125 09:51:07.858376 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jntzn"] Nov 25 09:51:08 crc kubenswrapper[5043]: I1125 09:51:08.666053 5043 generic.go:334] "Generic (PLEG): container finished" podID="33cbafe6-deb3-4e58-b323-6754e89909ee" containerID="c6e1921b479a353abe153164409973a34fd27706c9694a3aaa2f46b09a91dbe3" exitCode=0 Nov 25 09:51:08 crc kubenswrapper[5043]: I1125 09:51:08.666154 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jntzn" event={"ID":"33cbafe6-deb3-4e58-b323-6754e89909ee","Type":"ContainerDied","Data":"c6e1921b479a353abe153164409973a34fd27706c9694a3aaa2f46b09a91dbe3"} Nov 25 09:51:08 crc kubenswrapper[5043]: I1125 09:51:08.666407 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jntzn" event={"ID":"33cbafe6-deb3-4e58-b323-6754e89909ee","Type":"ContainerStarted","Data":"4fc82d8d8a752a3c0288a7bbc2ac50c5dc9df97236aa06990686aa9b0dbf83af"} Nov 25 09:51:08 crc kubenswrapper[5043]: I1125 09:51:08.668947 5043 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Nov 25 09:51:10 crc kubenswrapper[5043]: I1125 09:51:10.685516 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jntzn" event={"ID":"33cbafe6-deb3-4e58-b323-6754e89909ee","Type":"ContainerStarted","Data":"4fc474b9a2dda836e143e0bbd53543c0a089b5aad149c9bff1b609707787c571"} Nov 25 09:51:10 crc kubenswrapper[5043]: I1125 09:51:10.963201 5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:51:10 crc kubenswrapper[5043]: E1125 09:51:10.963757 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:51:16 crc kubenswrapper[5043]: I1125 09:51:16.747338 5043 generic.go:334] "Generic (PLEG): container finished" podID="33cbafe6-deb3-4e58-b323-6754e89909ee" containerID="4fc474b9a2dda836e143e0bbd53543c0a089b5aad149c9bff1b609707787c571" exitCode=0 Nov 25 09:51:16 crc kubenswrapper[5043]: I1125 09:51:16.747420 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jntzn" event={"ID":"33cbafe6-deb3-4e58-b323-6754e89909ee","Type":"ContainerDied","Data":"4fc474b9a2dda836e143e0bbd53543c0a089b5aad149c9bff1b609707787c571"} Nov 25 09:51:17 crc kubenswrapper[5043]: I1125 09:51:17.760237 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jntzn" event={"ID":"33cbafe6-deb3-4e58-b323-6754e89909ee","Type":"ContainerStarted","Data":"2a579a373673541ba23eebbfff7d73759b436f856ac0dfb7a14cbf1a27566347"} Nov 25 09:51:23 crc kubenswrapper[5043]: I1125 09:51:23.962421 
5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:51:23 crc kubenswrapper[5043]: E1125 09:51:23.963213 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:51:27 crc kubenswrapper[5043]: I1125 09:51:27.389824 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jntzn" Nov 25 09:51:27 crc kubenswrapper[5043]: I1125 09:51:27.390426 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jntzn" Nov 25 09:51:27 crc kubenswrapper[5043]: I1125 09:51:27.444222 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jntzn" Nov 25 09:51:27 crc kubenswrapper[5043]: I1125 09:51:27.465362 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jntzn" podStartSLOduration=11.794977279 podStartE2EDuration="20.465343075s" podCreationTimestamp="2025-11-25 09:51:07 +0000 UTC" firstStartedPulling="2025-11-25 09:51:08.668660375 +0000 UTC m=+9332.836856106" lastFinishedPulling="2025-11-25 09:51:17.339026181 +0000 UTC m=+9341.507221902" observedRunningTime="2025-11-25 09:51:17.784158681 +0000 UTC m=+9341.952354402" watchObservedRunningTime="2025-11-25 09:51:27.465343075 +0000 UTC m=+9351.633538796" Nov 25 09:51:27 crc kubenswrapper[5043]: I1125 09:51:27.905113 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jntzn" Nov 25 
09:51:27 crc kubenswrapper[5043]: I1125 09:51:27.956843 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jntzn"] Nov 25 09:51:29 crc kubenswrapper[5043]: I1125 09:51:29.868319 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jntzn" podUID="33cbafe6-deb3-4e58-b323-6754e89909ee" containerName="registry-server" containerID="cri-o://2a579a373673541ba23eebbfff7d73759b436f856ac0dfb7a14cbf1a27566347" gracePeriod=2 Nov 25 09:51:30 crc kubenswrapper[5043]: I1125 09:51:30.455356 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jntzn" Nov 25 09:51:30 crc kubenswrapper[5043]: I1125 09:51:30.609302 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cbafe6-deb3-4e58-b323-6754e89909ee-utilities\") pod \"33cbafe6-deb3-4e58-b323-6754e89909ee\" (UID: \"33cbafe6-deb3-4e58-b323-6754e89909ee\") " Nov 25 09:51:30 crc kubenswrapper[5043]: I1125 09:51:30.609387 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cbafe6-deb3-4e58-b323-6754e89909ee-catalog-content\") pod \"33cbafe6-deb3-4e58-b323-6754e89909ee\" (UID: \"33cbafe6-deb3-4e58-b323-6754e89909ee\") " Nov 25 09:51:30 crc kubenswrapper[5043]: I1125 09:51:30.609646 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw254\" (UniqueName: \"kubernetes.io/projected/33cbafe6-deb3-4e58-b323-6754e89909ee-kube-api-access-cw254\") pod \"33cbafe6-deb3-4e58-b323-6754e89909ee\" (UID: \"33cbafe6-deb3-4e58-b323-6754e89909ee\") " Nov 25 09:51:30 crc kubenswrapper[5043]: I1125 09:51:30.610390 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/33cbafe6-deb3-4e58-b323-6754e89909ee-utilities" (OuterVolumeSpecName: "utilities") pod "33cbafe6-deb3-4e58-b323-6754e89909ee" (UID: "33cbafe6-deb3-4e58-b323-6754e89909ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:51:30 crc kubenswrapper[5043]: I1125 09:51:30.615026 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33cbafe6-deb3-4e58-b323-6754e89909ee-kube-api-access-cw254" (OuterVolumeSpecName: "kube-api-access-cw254") pod "33cbafe6-deb3-4e58-b323-6754e89909ee" (UID: "33cbafe6-deb3-4e58-b323-6754e89909ee"). InnerVolumeSpecName "kube-api-access-cw254". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:51:30 crc kubenswrapper[5043]: I1125 09:51:30.712107 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33cbafe6-deb3-4e58-b323-6754e89909ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33cbafe6-deb3-4e58-b323-6754e89909ee" (UID: "33cbafe6-deb3-4e58-b323-6754e89909ee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:51:30 crc kubenswrapper[5043]: I1125 09:51:30.712820 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cbafe6-deb3-4e58-b323-6754e89909ee-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:51:30 crc kubenswrapper[5043]: I1125 09:51:30.712862 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cbafe6-deb3-4e58-b323-6754e89909ee-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:51:30 crc kubenswrapper[5043]: I1125 09:51:30.712882 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw254\" (UniqueName: \"kubernetes.io/projected/33cbafe6-deb3-4e58-b323-6754e89909ee-kube-api-access-cw254\") on node \"crc\" DevicePath \"\"" Nov 25 09:51:30 crc kubenswrapper[5043]: I1125 09:51:30.879710 5043 generic.go:334] "Generic (PLEG): container finished" podID="33cbafe6-deb3-4e58-b323-6754e89909ee" containerID="2a579a373673541ba23eebbfff7d73759b436f856ac0dfb7a14cbf1a27566347" exitCode=0 Nov 25 09:51:30 crc kubenswrapper[5043]: I1125 09:51:30.879751 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jntzn" event={"ID":"33cbafe6-deb3-4e58-b323-6754e89909ee","Type":"ContainerDied","Data":"2a579a373673541ba23eebbfff7d73759b436f856ac0dfb7a14cbf1a27566347"} Nov 25 09:51:30 crc kubenswrapper[5043]: I1125 09:51:30.879781 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jntzn" event={"ID":"33cbafe6-deb3-4e58-b323-6754e89909ee","Type":"ContainerDied","Data":"4fc82d8d8a752a3c0288a7bbc2ac50c5dc9df97236aa06990686aa9b0dbf83af"} Nov 25 09:51:30 crc kubenswrapper[5043]: I1125 09:51:30.879783 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jntzn" Nov 25 09:51:30 crc kubenswrapper[5043]: I1125 09:51:30.879798 5043 scope.go:117] "RemoveContainer" containerID="2a579a373673541ba23eebbfff7d73759b436f856ac0dfb7a14cbf1a27566347" Nov 25 09:51:30 crc kubenswrapper[5043]: I1125 09:51:30.900950 5043 scope.go:117] "RemoveContainer" containerID="4fc474b9a2dda836e143e0bbd53543c0a089b5aad149c9bff1b609707787c571" Nov 25 09:51:30 crc kubenswrapper[5043]: I1125 09:51:30.928901 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jntzn"] Nov 25 09:51:30 crc kubenswrapper[5043]: I1125 09:51:30.936702 5043 scope.go:117] "RemoveContainer" containerID="c6e1921b479a353abe153164409973a34fd27706c9694a3aaa2f46b09a91dbe3" Nov 25 09:51:30 crc kubenswrapper[5043]: I1125 09:51:30.937393 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jntzn"] Nov 25 09:51:30 crc kubenswrapper[5043]: I1125 09:51:30.983883 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33cbafe6-deb3-4e58-b323-6754e89909ee" path="/var/lib/kubelet/pods/33cbafe6-deb3-4e58-b323-6754e89909ee/volumes" Nov 25 09:51:31 crc kubenswrapper[5043]: I1125 09:51:31.030574 5043 scope.go:117] "RemoveContainer" containerID="2a579a373673541ba23eebbfff7d73759b436f856ac0dfb7a14cbf1a27566347" Nov 25 09:51:31 crc kubenswrapper[5043]: E1125 09:51:31.031743 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a579a373673541ba23eebbfff7d73759b436f856ac0dfb7a14cbf1a27566347\": container with ID starting with 2a579a373673541ba23eebbfff7d73759b436f856ac0dfb7a14cbf1a27566347 not found: ID does not exist" containerID="2a579a373673541ba23eebbfff7d73759b436f856ac0dfb7a14cbf1a27566347" Nov 25 09:51:31 crc kubenswrapper[5043]: I1125 09:51:31.031805 5043 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2a579a373673541ba23eebbfff7d73759b436f856ac0dfb7a14cbf1a27566347"} err="failed to get container status \"2a579a373673541ba23eebbfff7d73759b436f856ac0dfb7a14cbf1a27566347\": rpc error: code = NotFound desc = could not find container \"2a579a373673541ba23eebbfff7d73759b436f856ac0dfb7a14cbf1a27566347\": container with ID starting with 2a579a373673541ba23eebbfff7d73759b436f856ac0dfb7a14cbf1a27566347 not found: ID does not exist" Nov 25 09:51:31 crc kubenswrapper[5043]: I1125 09:51:31.031829 5043 scope.go:117] "RemoveContainer" containerID="4fc474b9a2dda836e143e0bbd53543c0a089b5aad149c9bff1b609707787c571" Nov 25 09:51:31 crc kubenswrapper[5043]: E1125 09:51:31.032399 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fc474b9a2dda836e143e0bbd53543c0a089b5aad149c9bff1b609707787c571\": container with ID starting with 4fc474b9a2dda836e143e0bbd53543c0a089b5aad149c9bff1b609707787c571 not found: ID does not exist" containerID="4fc474b9a2dda836e143e0bbd53543c0a089b5aad149c9bff1b609707787c571" Nov 25 09:51:31 crc kubenswrapper[5043]: I1125 09:51:31.032427 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fc474b9a2dda836e143e0bbd53543c0a089b5aad149c9bff1b609707787c571"} err="failed to get container status \"4fc474b9a2dda836e143e0bbd53543c0a089b5aad149c9bff1b609707787c571\": rpc error: code = NotFound desc = could not find container \"4fc474b9a2dda836e143e0bbd53543c0a089b5aad149c9bff1b609707787c571\": container with ID starting with 4fc474b9a2dda836e143e0bbd53543c0a089b5aad149c9bff1b609707787c571 not found: ID does not exist" Nov 25 09:51:31 crc kubenswrapper[5043]: I1125 09:51:31.032443 5043 scope.go:117] "RemoveContainer" containerID="c6e1921b479a353abe153164409973a34fd27706c9694a3aaa2f46b09a91dbe3" Nov 25 09:51:31 crc kubenswrapper[5043]: E1125 09:51:31.032932 5043 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c6e1921b479a353abe153164409973a34fd27706c9694a3aaa2f46b09a91dbe3\": container with ID starting with c6e1921b479a353abe153164409973a34fd27706c9694a3aaa2f46b09a91dbe3 not found: ID does not exist" containerID="c6e1921b479a353abe153164409973a34fd27706c9694a3aaa2f46b09a91dbe3" Nov 25 09:51:31 crc kubenswrapper[5043]: I1125 09:51:31.032988 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6e1921b479a353abe153164409973a34fd27706c9694a3aaa2f46b09a91dbe3"} err="failed to get container status \"c6e1921b479a353abe153164409973a34fd27706c9694a3aaa2f46b09a91dbe3\": rpc error: code = NotFound desc = could not find container \"c6e1921b479a353abe153164409973a34fd27706c9694a3aaa2f46b09a91dbe3\": container with ID starting with c6e1921b479a353abe153164409973a34fd27706c9694a3aaa2f46b09a91dbe3 not found: ID does not exist" Nov 25 09:51:37 crc kubenswrapper[5043]: I1125 09:51:37.986263 5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:51:37 crc kubenswrapper[5043]: E1125 09:51:37.987321 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:51:49 crc kubenswrapper[5043]: I1125 09:51:49.962284 5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:51:49 crc kubenswrapper[5043]: E1125 09:51:49.962936 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:52:03 crc kubenswrapper[5043]: I1125 09:52:03.962574 5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:52:03 crc kubenswrapper[5043]: E1125 09:52:03.962596 5043 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 09:52:03 crc kubenswrapper[5043]: E1125 09:52:03.963402 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:52:15 crc kubenswrapper[5043]: I1125 09:52:15.317834 5043 generic.go:334] "Generic (PLEG): container finished" podID="e6d8ca35-68e8-408a-8afc-41261daaab5d" containerID="337fc63fbdb65f0716d39efe59a69b71c0173cd2fb633c46e28021f0d2bee904" exitCode=0 Nov 25 09:52:15 crc kubenswrapper[5043]: I1125 09:52:15.317907 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v2z9h/must-gather-dztv7" event={"ID":"e6d8ca35-68e8-408a-8afc-41261daaab5d","Type":"ContainerDied","Data":"337fc63fbdb65f0716d39efe59a69b71c0173cd2fb633c46e28021f0d2bee904"} Nov 25 09:52:15 crc kubenswrapper[5043]: I1125 09:52:15.319307 5043 scope.go:117] "RemoveContainer" 
containerID="337fc63fbdb65f0716d39efe59a69b71c0173cd2fb633c46e28021f0d2bee904" Nov 25 09:52:16 crc kubenswrapper[5043]: I1125 09:52:16.222294 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v2z9h_must-gather-dztv7_e6d8ca35-68e8-408a-8afc-41261daaab5d/gather/0.log" Nov 25 09:52:16 crc kubenswrapper[5043]: I1125 09:52:16.987977 5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:52:16 crc kubenswrapper[5043]: E1125 09:52:16.989402 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:52:29 crc kubenswrapper[5043]: I1125 09:52:29.694328 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v2z9h/must-gather-dztv7"] Nov 25 09:52:29 crc kubenswrapper[5043]: I1125 09:52:29.695661 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-v2z9h/must-gather-dztv7" podUID="e6d8ca35-68e8-408a-8afc-41261daaab5d" containerName="copy" containerID="cri-o://42ea2621f4bc02ddbe8eaf4c25d559e8b7a1f37ebb6ac5234f7346fa986956f6" gracePeriod=2 Nov 25 09:52:29 crc kubenswrapper[5043]: I1125 09:52:29.714085 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v2z9h/must-gather-dztv7"] Nov 25 09:52:29 crc kubenswrapper[5043]: I1125 09:52:29.964966 5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:52:29 crc kubenswrapper[5043]: E1125 09:52:29.965285 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:52:30 crc kubenswrapper[5043]: I1125 09:52:30.236214 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v2z9h_must-gather-dztv7_e6d8ca35-68e8-408a-8afc-41261daaab5d/copy/0.log" Nov 25 09:52:30 crc kubenswrapper[5043]: I1125 09:52:30.237571 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v2z9h/must-gather-dztv7" Nov 25 09:52:30 crc kubenswrapper[5043]: I1125 09:52:30.283856 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg7lj\" (UniqueName: \"kubernetes.io/projected/e6d8ca35-68e8-408a-8afc-41261daaab5d-kube-api-access-cg7lj\") pod \"e6d8ca35-68e8-408a-8afc-41261daaab5d\" (UID: \"e6d8ca35-68e8-408a-8afc-41261daaab5d\") " Nov 25 09:52:30 crc kubenswrapper[5043]: I1125 09:52:30.284018 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e6d8ca35-68e8-408a-8afc-41261daaab5d-must-gather-output\") pod \"e6d8ca35-68e8-408a-8afc-41261daaab5d\" (UID: \"e6d8ca35-68e8-408a-8afc-41261daaab5d\") " Nov 25 09:52:30 crc kubenswrapper[5043]: I1125 09:52:30.306500 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6d8ca35-68e8-408a-8afc-41261daaab5d-kube-api-access-cg7lj" (OuterVolumeSpecName: "kube-api-access-cg7lj") pod "e6d8ca35-68e8-408a-8afc-41261daaab5d" (UID: "e6d8ca35-68e8-408a-8afc-41261daaab5d"). InnerVolumeSpecName "kube-api-access-cg7lj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:52:30 crc kubenswrapper[5043]: I1125 09:52:30.386883 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg7lj\" (UniqueName: \"kubernetes.io/projected/e6d8ca35-68e8-408a-8afc-41261daaab5d-kube-api-access-cg7lj\") on node \"crc\" DevicePath \"\"" Nov 25 09:52:30 crc kubenswrapper[5043]: I1125 09:52:30.482177 5043 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v2z9h_must-gather-dztv7_e6d8ca35-68e8-408a-8afc-41261daaab5d/copy/0.log" Nov 25 09:52:30 crc kubenswrapper[5043]: I1125 09:52:30.483389 5043 generic.go:334] "Generic (PLEG): container finished" podID="e6d8ca35-68e8-408a-8afc-41261daaab5d" containerID="42ea2621f4bc02ddbe8eaf4c25d559e8b7a1f37ebb6ac5234f7346fa986956f6" exitCode=143 Nov 25 09:52:30 crc kubenswrapper[5043]: I1125 09:52:30.483474 5043 scope.go:117] "RemoveContainer" containerID="42ea2621f4bc02ddbe8eaf4c25d559e8b7a1f37ebb6ac5234f7346fa986956f6" Nov 25 09:52:30 crc kubenswrapper[5043]: I1125 09:52:30.483521 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v2z9h/must-gather-dztv7" Nov 25 09:52:30 crc kubenswrapper[5043]: I1125 09:52:30.505132 5043 scope.go:117] "RemoveContainer" containerID="337fc63fbdb65f0716d39efe59a69b71c0173cd2fb633c46e28021f0d2bee904" Nov 25 09:52:30 crc kubenswrapper[5043]: I1125 09:52:30.507120 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6d8ca35-68e8-408a-8afc-41261daaab5d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e6d8ca35-68e8-408a-8afc-41261daaab5d" (UID: "e6d8ca35-68e8-408a-8afc-41261daaab5d"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:52:30 crc kubenswrapper[5043]: I1125 09:52:30.595432 5043 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e6d8ca35-68e8-408a-8afc-41261daaab5d-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 25 09:52:30 crc kubenswrapper[5043]: I1125 09:52:30.605122 5043 scope.go:117] "RemoveContainer" containerID="42ea2621f4bc02ddbe8eaf4c25d559e8b7a1f37ebb6ac5234f7346fa986956f6" Nov 25 09:52:30 crc kubenswrapper[5043]: E1125 09:52:30.610076 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42ea2621f4bc02ddbe8eaf4c25d559e8b7a1f37ebb6ac5234f7346fa986956f6\": container with ID starting with 42ea2621f4bc02ddbe8eaf4c25d559e8b7a1f37ebb6ac5234f7346fa986956f6 not found: ID does not exist" containerID="42ea2621f4bc02ddbe8eaf4c25d559e8b7a1f37ebb6ac5234f7346fa986956f6" Nov 25 09:52:30 crc kubenswrapper[5043]: I1125 09:52:30.610145 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42ea2621f4bc02ddbe8eaf4c25d559e8b7a1f37ebb6ac5234f7346fa986956f6"} err="failed to get container status \"42ea2621f4bc02ddbe8eaf4c25d559e8b7a1f37ebb6ac5234f7346fa986956f6\": rpc error: code = NotFound desc = could not find container \"42ea2621f4bc02ddbe8eaf4c25d559e8b7a1f37ebb6ac5234f7346fa986956f6\": container with ID starting with 42ea2621f4bc02ddbe8eaf4c25d559e8b7a1f37ebb6ac5234f7346fa986956f6 not found: ID does not exist" Nov 25 09:52:30 crc kubenswrapper[5043]: I1125 09:52:30.610200 5043 scope.go:117] "RemoveContainer" containerID="337fc63fbdb65f0716d39efe59a69b71c0173cd2fb633c46e28021f0d2bee904" Nov 25 09:52:30 crc kubenswrapper[5043]: E1125 09:52:30.610668 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"337fc63fbdb65f0716d39efe59a69b71c0173cd2fb633c46e28021f0d2bee904\": container with ID starting with 337fc63fbdb65f0716d39efe59a69b71c0173cd2fb633c46e28021f0d2bee904 not found: ID does not exist" containerID="337fc63fbdb65f0716d39efe59a69b71c0173cd2fb633c46e28021f0d2bee904" Nov 25 09:52:30 crc kubenswrapper[5043]: I1125 09:52:30.610721 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"337fc63fbdb65f0716d39efe59a69b71c0173cd2fb633c46e28021f0d2bee904"} err="failed to get container status \"337fc63fbdb65f0716d39efe59a69b71c0173cd2fb633c46e28021f0d2bee904\": rpc error: code = NotFound desc = could not find container \"337fc63fbdb65f0716d39efe59a69b71c0173cd2fb633c46e28021f0d2bee904\": container with ID starting with 337fc63fbdb65f0716d39efe59a69b71c0173cd2fb633c46e28021f0d2bee904 not found: ID does not exist" Nov 25 09:52:30 crc kubenswrapper[5043]: I1125 09:52:30.974854 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6d8ca35-68e8-408a-8afc-41261daaab5d" path="/var/lib/kubelet/pods/e6d8ca35-68e8-408a-8afc-41261daaab5d/volumes" Nov 25 09:52:32 crc kubenswrapper[5043]: I1125 09:52:32.352916 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sqzhz"] Nov 25 09:52:32 crc kubenswrapper[5043]: E1125 09:52:32.353805 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cbafe6-deb3-4e58-b323-6754e89909ee" containerName="extract-utilities" Nov 25 09:52:32 crc kubenswrapper[5043]: I1125 09:52:32.353823 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cbafe6-deb3-4e58-b323-6754e89909ee" containerName="extract-utilities" Nov 25 09:52:32 crc kubenswrapper[5043]: E1125 09:52:32.353848 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d8ca35-68e8-408a-8afc-41261daaab5d" containerName="copy" Nov 25 09:52:32 crc kubenswrapper[5043]: I1125 09:52:32.353856 5043 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e6d8ca35-68e8-408a-8afc-41261daaab5d" containerName="copy" Nov 25 09:52:32 crc kubenswrapper[5043]: E1125 09:52:32.353872 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d8ca35-68e8-408a-8afc-41261daaab5d" containerName="gather" Nov 25 09:52:32 crc kubenswrapper[5043]: I1125 09:52:32.353881 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d8ca35-68e8-408a-8afc-41261daaab5d" containerName="gather" Nov 25 09:52:32 crc kubenswrapper[5043]: E1125 09:52:32.353894 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cbafe6-deb3-4e58-b323-6754e89909ee" containerName="extract-content" Nov 25 09:52:32 crc kubenswrapper[5043]: I1125 09:52:32.353901 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cbafe6-deb3-4e58-b323-6754e89909ee" containerName="extract-content" Nov 25 09:52:32 crc kubenswrapper[5043]: E1125 09:52:32.353911 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cbafe6-deb3-4e58-b323-6754e89909ee" containerName="registry-server" Nov 25 09:52:32 crc kubenswrapper[5043]: I1125 09:52:32.353918 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cbafe6-deb3-4e58-b323-6754e89909ee" containerName="registry-server" Nov 25 09:52:32 crc kubenswrapper[5043]: I1125 09:52:32.354155 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="33cbafe6-deb3-4e58-b323-6754e89909ee" containerName="registry-server" Nov 25 09:52:32 crc kubenswrapper[5043]: I1125 09:52:32.354168 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d8ca35-68e8-408a-8afc-41261daaab5d" containerName="copy" Nov 25 09:52:32 crc kubenswrapper[5043]: I1125 09:52:32.354182 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d8ca35-68e8-408a-8afc-41261daaab5d" containerName="gather" Nov 25 09:52:32 crc kubenswrapper[5043]: I1125 09:52:32.355872 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqzhz" Nov 25 09:52:32 crc kubenswrapper[5043]: I1125 09:52:32.379745 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqzhz"] Nov 25 09:52:32 crc kubenswrapper[5043]: I1125 09:52:32.434291 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72c417e2-fd3b-4bda-b875-b2a68118c384-catalog-content\") pod \"redhat-marketplace-sqzhz\" (UID: \"72c417e2-fd3b-4bda-b875-b2a68118c384\") " pod="openshift-marketplace/redhat-marketplace-sqzhz" Nov 25 09:52:32 crc kubenswrapper[5043]: I1125 09:52:32.434370 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72c417e2-fd3b-4bda-b875-b2a68118c384-utilities\") pod \"redhat-marketplace-sqzhz\" (UID: \"72c417e2-fd3b-4bda-b875-b2a68118c384\") " pod="openshift-marketplace/redhat-marketplace-sqzhz" Nov 25 09:52:32 crc kubenswrapper[5043]: I1125 09:52:32.434402 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph55w\" (UniqueName: \"kubernetes.io/projected/72c417e2-fd3b-4bda-b875-b2a68118c384-kube-api-access-ph55w\") pod \"redhat-marketplace-sqzhz\" (UID: \"72c417e2-fd3b-4bda-b875-b2a68118c384\") " pod="openshift-marketplace/redhat-marketplace-sqzhz" Nov 25 09:52:32 crc kubenswrapper[5043]: I1125 09:52:32.537012 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72c417e2-fd3b-4bda-b875-b2a68118c384-utilities\") pod \"redhat-marketplace-sqzhz\" (UID: \"72c417e2-fd3b-4bda-b875-b2a68118c384\") " pod="openshift-marketplace/redhat-marketplace-sqzhz" Nov 25 09:52:32 crc kubenswrapper[5043]: I1125 09:52:32.537080 5043 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-ph55w\" (UniqueName: \"kubernetes.io/projected/72c417e2-fd3b-4bda-b875-b2a68118c384-kube-api-access-ph55w\") pod \"redhat-marketplace-sqzhz\" (UID: \"72c417e2-fd3b-4bda-b875-b2a68118c384\") " pod="openshift-marketplace/redhat-marketplace-sqzhz" Nov 25 09:52:32 crc kubenswrapper[5043]: I1125 09:52:32.537223 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72c417e2-fd3b-4bda-b875-b2a68118c384-catalog-content\") pod \"redhat-marketplace-sqzhz\" (UID: \"72c417e2-fd3b-4bda-b875-b2a68118c384\") " pod="openshift-marketplace/redhat-marketplace-sqzhz" Nov 25 09:52:32 crc kubenswrapper[5043]: I1125 09:52:32.537640 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72c417e2-fd3b-4bda-b875-b2a68118c384-utilities\") pod \"redhat-marketplace-sqzhz\" (UID: \"72c417e2-fd3b-4bda-b875-b2a68118c384\") " pod="openshift-marketplace/redhat-marketplace-sqzhz" Nov 25 09:52:32 crc kubenswrapper[5043]: I1125 09:52:32.537674 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72c417e2-fd3b-4bda-b875-b2a68118c384-catalog-content\") pod \"redhat-marketplace-sqzhz\" (UID: \"72c417e2-fd3b-4bda-b875-b2a68118c384\") " pod="openshift-marketplace/redhat-marketplace-sqzhz" Nov 25 09:52:32 crc kubenswrapper[5043]: I1125 09:52:32.556991 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph55w\" (UniqueName: \"kubernetes.io/projected/72c417e2-fd3b-4bda-b875-b2a68118c384-kube-api-access-ph55w\") pod \"redhat-marketplace-sqzhz\" (UID: \"72c417e2-fd3b-4bda-b875-b2a68118c384\") " pod="openshift-marketplace/redhat-marketplace-sqzhz" Nov 25 09:52:32 crc kubenswrapper[5043]: I1125 09:52:32.680108 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqzhz" Nov 25 09:52:33 crc kubenswrapper[5043]: I1125 09:52:33.192106 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqzhz"] Nov 25 09:52:33 crc kubenswrapper[5043]: I1125 09:52:33.515294 5043 generic.go:334] "Generic (PLEG): container finished" podID="72c417e2-fd3b-4bda-b875-b2a68118c384" containerID="2f3da909d9719f1fe9593899460e420e26b464393a793e7ed113ff1ed7188e8e" exitCode=0 Nov 25 09:52:33 crc kubenswrapper[5043]: I1125 09:52:33.515576 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqzhz" event={"ID":"72c417e2-fd3b-4bda-b875-b2a68118c384","Type":"ContainerDied","Data":"2f3da909d9719f1fe9593899460e420e26b464393a793e7ed113ff1ed7188e8e"} Nov 25 09:52:33 crc kubenswrapper[5043]: I1125 09:52:33.515634 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqzhz" event={"ID":"72c417e2-fd3b-4bda-b875-b2a68118c384","Type":"ContainerStarted","Data":"614e75418f21a473932e3fbea9567b7992bd224bce433fc0a6b5b1c6400e15eb"} Nov 25 09:52:35 crc kubenswrapper[5043]: I1125 09:52:35.538759 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqzhz" event={"ID":"72c417e2-fd3b-4bda-b875-b2a68118c384","Type":"ContainerStarted","Data":"ee561fd9d7f784054c44f8e41ed70d06790fb615e02441ab0118caf072e6a8ef"} Nov 25 09:52:37 crc kubenswrapper[5043]: I1125 09:52:37.559888 5043 generic.go:334] "Generic (PLEG): container finished" podID="72c417e2-fd3b-4bda-b875-b2a68118c384" containerID="ee561fd9d7f784054c44f8e41ed70d06790fb615e02441ab0118caf072e6a8ef" exitCode=0 Nov 25 09:52:37 crc kubenswrapper[5043]: I1125 09:52:37.559979 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqzhz" 
event={"ID":"72c417e2-fd3b-4bda-b875-b2a68118c384","Type":"ContainerDied","Data":"ee561fd9d7f784054c44f8e41ed70d06790fb615e02441ab0118caf072e6a8ef"} Nov 25 09:52:38 crc kubenswrapper[5043]: I1125 09:52:38.571411 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqzhz" event={"ID":"72c417e2-fd3b-4bda-b875-b2a68118c384","Type":"ContainerStarted","Data":"ec5220803769e65030d445a9e969476bf432eb82d0bd5ef4a399277b4473c9a9"} Nov 25 09:52:38 crc kubenswrapper[5043]: I1125 09:52:38.597479 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sqzhz" podStartSLOduration=2.095797183 podStartE2EDuration="6.597463805s" podCreationTimestamp="2025-11-25 09:52:32 +0000 UTC" firstStartedPulling="2025-11-25 09:52:33.517458224 +0000 UTC m=+9417.685653945" lastFinishedPulling="2025-11-25 09:52:38.019124846 +0000 UTC m=+9422.187320567" observedRunningTime="2025-11-25 09:52:38.59393235 +0000 UTC m=+9422.762128071" watchObservedRunningTime="2025-11-25 09:52:38.597463805 +0000 UTC m=+9422.765659516" Nov 25 09:52:41 crc kubenswrapper[5043]: I1125 09:52:41.962537 5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:52:41 crc kubenswrapper[5043]: E1125 09:52:41.963166 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:52:42 crc kubenswrapper[5043]: I1125 09:52:42.681125 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sqzhz" Nov 25 09:52:42 crc 
kubenswrapper[5043]: I1125 09:52:42.681542 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sqzhz" Nov 25 09:52:42 crc kubenswrapper[5043]: I1125 09:52:42.730081 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sqzhz" Nov 25 09:52:43 crc kubenswrapper[5043]: I1125 09:52:43.659578 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sqzhz" Nov 25 09:52:43 crc kubenswrapper[5043]: I1125 09:52:43.706691 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqzhz"] Nov 25 09:52:45 crc kubenswrapper[5043]: I1125 09:52:45.630424 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sqzhz" podUID="72c417e2-fd3b-4bda-b875-b2a68118c384" containerName="registry-server" containerID="cri-o://ec5220803769e65030d445a9e969476bf432eb82d0bd5ef4a399277b4473c9a9" gracePeriod=2 Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.210488 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqzhz" Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.330208 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72c417e2-fd3b-4bda-b875-b2a68118c384-catalog-content\") pod \"72c417e2-fd3b-4bda-b875-b2a68118c384\" (UID: \"72c417e2-fd3b-4bda-b875-b2a68118c384\") " Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.330294 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph55w\" (UniqueName: \"kubernetes.io/projected/72c417e2-fd3b-4bda-b875-b2a68118c384-kube-api-access-ph55w\") pod \"72c417e2-fd3b-4bda-b875-b2a68118c384\" (UID: \"72c417e2-fd3b-4bda-b875-b2a68118c384\") " Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.330344 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72c417e2-fd3b-4bda-b875-b2a68118c384-utilities\") pod \"72c417e2-fd3b-4bda-b875-b2a68118c384\" (UID: \"72c417e2-fd3b-4bda-b875-b2a68118c384\") " Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.331885 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72c417e2-fd3b-4bda-b875-b2a68118c384-utilities" (OuterVolumeSpecName: "utilities") pod "72c417e2-fd3b-4bda-b875-b2a68118c384" (UID: "72c417e2-fd3b-4bda-b875-b2a68118c384"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.348898 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72c417e2-fd3b-4bda-b875-b2a68118c384-kube-api-access-ph55w" (OuterVolumeSpecName: "kube-api-access-ph55w") pod "72c417e2-fd3b-4bda-b875-b2a68118c384" (UID: "72c417e2-fd3b-4bda-b875-b2a68118c384"). InnerVolumeSpecName "kube-api-access-ph55w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.357009 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72c417e2-fd3b-4bda-b875-b2a68118c384-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72c417e2-fd3b-4bda-b875-b2a68118c384" (UID: "72c417e2-fd3b-4bda-b875-b2a68118c384"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.432970 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72c417e2-fd3b-4bda-b875-b2a68118c384-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.433227 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph55w\" (UniqueName: \"kubernetes.io/projected/72c417e2-fd3b-4bda-b875-b2a68118c384-kube-api-access-ph55w\") on node \"crc\" DevicePath \"\"" Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.433327 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72c417e2-fd3b-4bda-b875-b2a68118c384-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.658716 5043 generic.go:334] "Generic (PLEG): container finished" podID="72c417e2-fd3b-4bda-b875-b2a68118c384" containerID="ec5220803769e65030d445a9e969476bf432eb82d0bd5ef4a399277b4473c9a9" exitCode=0 Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.658764 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sqzhz" event={"ID":"72c417e2-fd3b-4bda-b875-b2a68118c384","Type":"ContainerDied","Data":"ec5220803769e65030d445a9e969476bf432eb82d0bd5ef4a399277b4473c9a9"} Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.658796 5043 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-sqzhz" event={"ID":"72c417e2-fd3b-4bda-b875-b2a68118c384","Type":"ContainerDied","Data":"614e75418f21a473932e3fbea9567b7992bd224bce433fc0a6b5b1c6400e15eb"} Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.658813 5043 scope.go:117] "RemoveContainer" containerID="ec5220803769e65030d445a9e969476bf432eb82d0bd5ef4a399277b4473c9a9" Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.658810 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sqzhz" Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.687738 5043 scope.go:117] "RemoveContainer" containerID="ee561fd9d7f784054c44f8e41ed70d06790fb615e02441ab0118caf072e6a8ef" Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.704883 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqzhz"] Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.720271 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sqzhz"] Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.742696 5043 scope.go:117] "RemoveContainer" containerID="2f3da909d9719f1fe9593899460e420e26b464393a793e7ed113ff1ed7188e8e" Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.782474 5043 scope.go:117] "RemoveContainer" containerID="ec5220803769e65030d445a9e969476bf432eb82d0bd5ef4a399277b4473c9a9" Nov 25 09:52:46 crc kubenswrapper[5043]: E1125 09:52:46.783741 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec5220803769e65030d445a9e969476bf432eb82d0bd5ef4a399277b4473c9a9\": container with ID starting with ec5220803769e65030d445a9e969476bf432eb82d0bd5ef4a399277b4473c9a9 not found: ID does not exist" containerID="ec5220803769e65030d445a9e969476bf432eb82d0bd5ef4a399277b4473c9a9" Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.783784 5043 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec5220803769e65030d445a9e969476bf432eb82d0bd5ef4a399277b4473c9a9"} err="failed to get container status \"ec5220803769e65030d445a9e969476bf432eb82d0bd5ef4a399277b4473c9a9\": rpc error: code = NotFound desc = could not find container \"ec5220803769e65030d445a9e969476bf432eb82d0bd5ef4a399277b4473c9a9\": container with ID starting with ec5220803769e65030d445a9e969476bf432eb82d0bd5ef4a399277b4473c9a9 not found: ID does not exist" Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.783812 5043 scope.go:117] "RemoveContainer" containerID="ee561fd9d7f784054c44f8e41ed70d06790fb615e02441ab0118caf072e6a8ef" Nov 25 09:52:46 crc kubenswrapper[5043]: E1125 09:52:46.784282 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee561fd9d7f784054c44f8e41ed70d06790fb615e02441ab0118caf072e6a8ef\": container with ID starting with ee561fd9d7f784054c44f8e41ed70d06790fb615e02441ab0118caf072e6a8ef not found: ID does not exist" containerID="ee561fd9d7f784054c44f8e41ed70d06790fb615e02441ab0118caf072e6a8ef" Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.784329 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee561fd9d7f784054c44f8e41ed70d06790fb615e02441ab0118caf072e6a8ef"} err="failed to get container status \"ee561fd9d7f784054c44f8e41ed70d06790fb615e02441ab0118caf072e6a8ef\": rpc error: code = NotFound desc = could not find container \"ee561fd9d7f784054c44f8e41ed70d06790fb615e02441ab0118caf072e6a8ef\": container with ID starting with ee561fd9d7f784054c44f8e41ed70d06790fb615e02441ab0118caf072e6a8ef not found: ID does not exist" Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.784361 5043 scope.go:117] "RemoveContainer" containerID="2f3da909d9719f1fe9593899460e420e26b464393a793e7ed113ff1ed7188e8e" Nov 25 09:52:46 crc kubenswrapper[5043]: E1125 
09:52:46.784956 5043 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f3da909d9719f1fe9593899460e420e26b464393a793e7ed113ff1ed7188e8e\": container with ID starting with 2f3da909d9719f1fe9593899460e420e26b464393a793e7ed113ff1ed7188e8e not found: ID does not exist" containerID="2f3da909d9719f1fe9593899460e420e26b464393a793e7ed113ff1ed7188e8e" Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.785118 5043 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f3da909d9719f1fe9593899460e420e26b464393a793e7ed113ff1ed7188e8e"} err="failed to get container status \"2f3da909d9719f1fe9593899460e420e26b464393a793e7ed113ff1ed7188e8e\": rpc error: code = NotFound desc = could not find container \"2f3da909d9719f1fe9593899460e420e26b464393a793e7ed113ff1ed7188e8e\": container with ID starting with 2f3da909d9719f1fe9593899460e420e26b464393a793e7ed113ff1ed7188e8e not found: ID does not exist" Nov 25 09:52:46 crc kubenswrapper[5043]: I1125 09:52:46.973202 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72c417e2-fd3b-4bda-b875-b2a68118c384" path="/var/lib/kubelet/pods/72c417e2-fd3b-4bda-b875-b2a68118c384/volumes" Nov 25 09:52:53 crc kubenswrapper[5043]: I1125 09:52:53.962493 5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:52:53 crc kubenswrapper[5043]: E1125 09:52:53.964418 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:52:56 crc kubenswrapper[5043]: I1125 09:52:56.708095 
5043 scope.go:117] "RemoveContainer" containerID="f1d9371a5333c94901d454f5396dac57d7c37b081e3a39efb43084cdf355e986" Nov 25 09:52:56 crc kubenswrapper[5043]: I1125 09:52:56.727648 5043 scope.go:117] "RemoveContainer" containerID="725a0e5a1e2991e4ad8c5698e97074345a56d96828054e15a901f0803ca52a1a" Nov 25 09:53:05 crc kubenswrapper[5043]: I1125 09:53:05.962403 5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:53:05 crc kubenswrapper[5043]: E1125 09:53:05.963229 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:53:13 crc kubenswrapper[5043]: E1125 09:53:13.965428 5043 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 09:53:19 crc kubenswrapper[5043]: I1125 09:53:19.963319 5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:53:19 crc kubenswrapper[5043]: E1125 09:53:19.964250 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:53:32 crc kubenswrapper[5043]: I1125 
09:53:32.963346 5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:53:32 crc kubenswrapper[5043]: E1125 09:53:32.964061 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:53:43 crc kubenswrapper[5043]: I1125 09:53:43.962945 5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:53:43 crc kubenswrapper[5043]: E1125 09:53:43.963821 5043 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jzwnx_openshift-machine-config-operator(707b7a7f-020e-4719-9db9-7d1f3294b25c)\"" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" Nov 25 09:53:55 crc kubenswrapper[5043]: I1125 09:53:55.963275 5043 scope.go:117] "RemoveContainer" containerID="03e4350c64e291d155433ae5501cab7fa882446c84c9633520f8798e5780aa70" Nov 25 09:53:56 crc kubenswrapper[5043]: I1125 09:53:56.334632 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" event={"ID":"707b7a7f-020e-4719-9db9-7d1f3294b25c","Type":"ContainerStarted","Data":"48a738926e5cfc4fc82b9947f2e0bd0d6a8187d5af0e05618824bb5ef66bbe47"} Nov 25 09:54:43 crc kubenswrapper[5043]: E1125 09:54:43.962547 5043 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" 
podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 09:55:05 crc kubenswrapper[5043]: I1125 09:55:05.386731 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2f46h"] Nov 25 09:55:05 crc kubenswrapper[5043]: E1125 09:55:05.389052 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c417e2-fd3b-4bda-b875-b2a68118c384" containerName="extract-utilities" Nov 25 09:55:05 crc kubenswrapper[5043]: I1125 09:55:05.389136 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c417e2-fd3b-4bda-b875-b2a68118c384" containerName="extract-utilities" Nov 25 09:55:05 crc kubenswrapper[5043]: E1125 09:55:05.389242 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c417e2-fd3b-4bda-b875-b2a68118c384" containerName="extract-content" Nov 25 09:55:05 crc kubenswrapper[5043]: I1125 09:55:05.390030 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c417e2-fd3b-4bda-b875-b2a68118c384" containerName="extract-content" Nov 25 09:55:05 crc kubenswrapper[5043]: E1125 09:55:05.390115 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c417e2-fd3b-4bda-b875-b2a68118c384" containerName="registry-server" Nov 25 09:55:05 crc kubenswrapper[5043]: I1125 09:55:05.390171 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c417e2-fd3b-4bda-b875-b2a68118c384" containerName="registry-server" Nov 25 09:55:05 crc kubenswrapper[5043]: I1125 09:55:05.390440 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c417e2-fd3b-4bda-b875-b2a68118c384" containerName="registry-server" Nov 25 09:55:05 crc kubenswrapper[5043]: I1125 09:55:05.392263 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2f46h" Nov 25 09:55:05 crc kubenswrapper[5043]: I1125 09:55:05.412214 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2f46h"] Nov 25 09:55:05 crc kubenswrapper[5043]: I1125 09:55:05.498044 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22cfa26a-0378-40f1-86a5-b31fcf74f219-utilities\") pod \"community-operators-2f46h\" (UID: \"22cfa26a-0378-40f1-86a5-b31fcf74f219\") " pod="openshift-marketplace/community-operators-2f46h" Nov 25 09:55:05 crc kubenswrapper[5043]: I1125 09:55:05.498092 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22cfa26a-0378-40f1-86a5-b31fcf74f219-catalog-content\") pod \"community-operators-2f46h\" (UID: \"22cfa26a-0378-40f1-86a5-b31fcf74f219\") " pod="openshift-marketplace/community-operators-2f46h" Nov 25 09:55:05 crc kubenswrapper[5043]: I1125 09:55:05.498143 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fd7n\" (UniqueName: \"kubernetes.io/projected/22cfa26a-0378-40f1-86a5-b31fcf74f219-kube-api-access-9fd7n\") pod \"community-operators-2f46h\" (UID: \"22cfa26a-0378-40f1-86a5-b31fcf74f219\") " pod="openshift-marketplace/community-operators-2f46h" Nov 25 09:55:05 crc kubenswrapper[5043]: I1125 09:55:05.601680 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22cfa26a-0378-40f1-86a5-b31fcf74f219-utilities\") pod \"community-operators-2f46h\" (UID: \"22cfa26a-0378-40f1-86a5-b31fcf74f219\") " pod="openshift-marketplace/community-operators-2f46h" Nov 25 09:55:05 crc kubenswrapper[5043]: I1125 09:55:05.601745 5043 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22cfa26a-0378-40f1-86a5-b31fcf74f219-catalog-content\") pod \"community-operators-2f46h\" (UID: \"22cfa26a-0378-40f1-86a5-b31fcf74f219\") " pod="openshift-marketplace/community-operators-2f46h" Nov 25 09:55:05 crc kubenswrapper[5043]: I1125 09:55:05.601824 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fd7n\" (UniqueName: \"kubernetes.io/projected/22cfa26a-0378-40f1-86a5-b31fcf74f219-kube-api-access-9fd7n\") pod \"community-operators-2f46h\" (UID: \"22cfa26a-0378-40f1-86a5-b31fcf74f219\") " pod="openshift-marketplace/community-operators-2f46h" Nov 25 09:55:05 crc kubenswrapper[5043]: I1125 09:55:05.602397 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22cfa26a-0378-40f1-86a5-b31fcf74f219-utilities\") pod \"community-operators-2f46h\" (UID: \"22cfa26a-0378-40f1-86a5-b31fcf74f219\") " pod="openshift-marketplace/community-operators-2f46h" Nov 25 09:55:05 crc kubenswrapper[5043]: I1125 09:55:05.602399 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22cfa26a-0378-40f1-86a5-b31fcf74f219-catalog-content\") pod \"community-operators-2f46h\" (UID: \"22cfa26a-0378-40f1-86a5-b31fcf74f219\") " pod="openshift-marketplace/community-operators-2f46h" Nov 25 09:55:05 crc kubenswrapper[5043]: I1125 09:55:05.625299 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fd7n\" (UniqueName: \"kubernetes.io/projected/22cfa26a-0378-40f1-86a5-b31fcf74f219-kube-api-access-9fd7n\") pod \"community-operators-2f46h\" (UID: \"22cfa26a-0378-40f1-86a5-b31fcf74f219\") " pod="openshift-marketplace/community-operators-2f46h" Nov 25 09:55:05 crc kubenswrapper[5043]: I1125 09:55:05.714103 5043 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2f46h" Nov 25 09:55:06 crc kubenswrapper[5043]: I1125 09:55:06.186694 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2f46h"] Nov 25 09:55:07 crc kubenswrapper[5043]: I1125 09:55:07.087378 5043 generic.go:334] "Generic (PLEG): container finished" podID="22cfa26a-0378-40f1-86a5-b31fcf74f219" containerID="7fd2ec187299343c92028c5eabe3c707c02cc6710ab49645c8c04d7058e7f800" exitCode=0 Nov 25 09:55:07 crc kubenswrapper[5043]: I1125 09:55:07.087476 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f46h" event={"ID":"22cfa26a-0378-40f1-86a5-b31fcf74f219","Type":"ContainerDied","Data":"7fd2ec187299343c92028c5eabe3c707c02cc6710ab49645c8c04d7058e7f800"} Nov 25 09:55:07 crc kubenswrapper[5043]: I1125 09:55:07.090141 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f46h" event={"ID":"22cfa26a-0378-40f1-86a5-b31fcf74f219","Type":"ContainerStarted","Data":"3869572cf66709b9b0876122c3e50a995ce655a88bfba7866cc94ec994cb130c"} Nov 25 09:55:08 crc kubenswrapper[5043]: I1125 09:55:08.104751 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f46h" event={"ID":"22cfa26a-0378-40f1-86a5-b31fcf74f219","Type":"ContainerStarted","Data":"a848a7c1647ea41d4ae638c7c0e15c7ff5e63dc64e40181aabe48a4057104118"} Nov 25 09:55:10 crc kubenswrapper[5043]: I1125 09:55:10.130894 5043 generic.go:334] "Generic (PLEG): container finished" podID="22cfa26a-0378-40f1-86a5-b31fcf74f219" containerID="a848a7c1647ea41d4ae638c7c0e15c7ff5e63dc64e40181aabe48a4057104118" exitCode=0 Nov 25 09:55:10 crc kubenswrapper[5043]: I1125 09:55:10.130979 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f46h" 
event={"ID":"22cfa26a-0378-40f1-86a5-b31fcf74f219","Type":"ContainerDied","Data":"a848a7c1647ea41d4ae638c7c0e15c7ff5e63dc64e40181aabe48a4057104118"} Nov 25 09:55:12 crc kubenswrapper[5043]: I1125 09:55:12.155509 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f46h" event={"ID":"22cfa26a-0378-40f1-86a5-b31fcf74f219","Type":"ContainerStarted","Data":"08202b81c11ca191f3f4e7f92c1810bbf0d6edc733bccb737e8db9016be1780f"} Nov 25 09:55:12 crc kubenswrapper[5043]: I1125 09:55:12.182311 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2f46h" podStartSLOduration=3.641476669 podStartE2EDuration="7.182293909s" podCreationTimestamp="2025-11-25 09:55:05 +0000 UTC" firstStartedPulling="2025-11-25 09:55:07.09010399 +0000 UTC m=+9571.258299721" lastFinishedPulling="2025-11-25 09:55:10.63092125 +0000 UTC m=+9574.799116961" observedRunningTime="2025-11-25 09:55:12.180445679 +0000 UTC m=+9576.348641410" watchObservedRunningTime="2025-11-25 09:55:12.182293909 +0000 UTC m=+9576.350489630" Nov 25 09:55:15 crc kubenswrapper[5043]: I1125 09:55:15.714630 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2f46h" Nov 25 09:55:15 crc kubenswrapper[5043]: I1125 09:55:15.715362 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2f46h" Nov 25 09:55:15 crc kubenswrapper[5043]: I1125 09:55:15.769453 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2f46h" Nov 25 09:55:16 crc kubenswrapper[5043]: I1125 09:55:16.251469 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2f46h" Nov 25 09:55:17 crc kubenswrapper[5043]: I1125 09:55:17.006719 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-2f46h"] Nov 25 09:55:18 crc kubenswrapper[5043]: I1125 09:55:18.218714 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2f46h" podUID="22cfa26a-0378-40f1-86a5-b31fcf74f219" containerName="registry-server" containerID="cri-o://08202b81c11ca191f3f4e7f92c1810bbf0d6edc733bccb737e8db9016be1780f" gracePeriod=2 Nov 25 09:55:20 crc kubenswrapper[5043]: I1125 09:55:20.248494 5043 generic.go:334] "Generic (PLEG): container finished" podID="22cfa26a-0378-40f1-86a5-b31fcf74f219" containerID="08202b81c11ca191f3f4e7f92c1810bbf0d6edc733bccb737e8db9016be1780f" exitCode=0 Nov 25 09:55:20 crc kubenswrapper[5043]: I1125 09:55:20.248622 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f46h" event={"ID":"22cfa26a-0378-40f1-86a5-b31fcf74f219","Type":"ContainerDied","Data":"08202b81c11ca191f3f4e7f92c1810bbf0d6edc733bccb737e8db9016be1780f"} Nov 25 09:55:20 crc kubenswrapper[5043]: I1125 09:55:20.540573 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2f46h" Nov 25 09:55:20 crc kubenswrapper[5043]: I1125 09:55:20.622801 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22cfa26a-0378-40f1-86a5-b31fcf74f219-utilities\") pod \"22cfa26a-0378-40f1-86a5-b31fcf74f219\" (UID: \"22cfa26a-0378-40f1-86a5-b31fcf74f219\") " Nov 25 09:55:20 crc kubenswrapper[5043]: I1125 09:55:20.623006 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22cfa26a-0378-40f1-86a5-b31fcf74f219-catalog-content\") pod \"22cfa26a-0378-40f1-86a5-b31fcf74f219\" (UID: \"22cfa26a-0378-40f1-86a5-b31fcf74f219\") " Nov 25 09:55:20 crc kubenswrapper[5043]: I1125 09:55:20.623152 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fd7n\" (UniqueName: \"kubernetes.io/projected/22cfa26a-0378-40f1-86a5-b31fcf74f219-kube-api-access-9fd7n\") pod \"22cfa26a-0378-40f1-86a5-b31fcf74f219\" (UID: \"22cfa26a-0378-40f1-86a5-b31fcf74f219\") " Nov 25 09:55:20 crc kubenswrapper[5043]: I1125 09:55:20.623975 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22cfa26a-0378-40f1-86a5-b31fcf74f219-utilities" (OuterVolumeSpecName: "utilities") pod "22cfa26a-0378-40f1-86a5-b31fcf74f219" (UID: "22cfa26a-0378-40f1-86a5-b31fcf74f219"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:55:20 crc kubenswrapper[5043]: I1125 09:55:20.630528 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22cfa26a-0378-40f1-86a5-b31fcf74f219-kube-api-access-9fd7n" (OuterVolumeSpecName: "kube-api-access-9fd7n") pod "22cfa26a-0378-40f1-86a5-b31fcf74f219" (UID: "22cfa26a-0378-40f1-86a5-b31fcf74f219"). InnerVolumeSpecName "kube-api-access-9fd7n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:55:20 crc kubenswrapper[5043]: I1125 09:55:20.681302 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22cfa26a-0378-40f1-86a5-b31fcf74f219-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "22cfa26a-0378-40f1-86a5-b31fcf74f219" (UID: "22cfa26a-0378-40f1-86a5-b31fcf74f219"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:55:20 crc kubenswrapper[5043]: I1125 09:55:20.725713 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fd7n\" (UniqueName: \"kubernetes.io/projected/22cfa26a-0378-40f1-86a5-b31fcf74f219-kube-api-access-9fd7n\") on node \"crc\" DevicePath \"\"" Nov 25 09:55:20 crc kubenswrapper[5043]: I1125 09:55:20.725751 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22cfa26a-0378-40f1-86a5-b31fcf74f219-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:55:20 crc kubenswrapper[5043]: I1125 09:55:20.725764 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22cfa26a-0378-40f1-86a5-b31fcf74f219-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:55:21 crc kubenswrapper[5043]: I1125 09:55:21.265938 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2f46h" event={"ID":"22cfa26a-0378-40f1-86a5-b31fcf74f219","Type":"ContainerDied","Data":"3869572cf66709b9b0876122c3e50a995ce655a88bfba7866cc94ec994cb130c"} Nov 25 09:55:21 crc kubenswrapper[5043]: I1125 09:55:21.266006 5043 scope.go:117] "RemoveContainer" containerID="08202b81c11ca191f3f4e7f92c1810bbf0d6edc733bccb737e8db9016be1780f" Nov 25 09:55:21 crc kubenswrapper[5043]: I1125 09:55:21.266188 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2f46h" Nov 25 09:55:21 crc kubenswrapper[5043]: I1125 09:55:21.301806 5043 scope.go:117] "RemoveContainer" containerID="a848a7c1647ea41d4ae638c7c0e15c7ff5e63dc64e40181aabe48a4057104118" Nov 25 09:55:21 crc kubenswrapper[5043]: I1125 09:55:21.309734 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2f46h"] Nov 25 09:55:21 crc kubenswrapper[5043]: I1125 09:55:21.325256 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2f46h"] Nov 25 09:55:21 crc kubenswrapper[5043]: I1125 09:55:21.330590 5043 scope.go:117] "RemoveContainer" containerID="7fd2ec187299343c92028c5eabe3c707c02cc6710ab49645c8c04d7058e7f800" Nov 25 09:55:22 crc kubenswrapper[5043]: I1125 09:55:22.976103 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22cfa26a-0378-40f1-86a5-b31fcf74f219" path="/var/lib/kubelet/pods/22cfa26a-0378-40f1-86a5-b31fcf74f219/volumes" Nov 25 09:55:27 crc kubenswrapper[5043]: I1125 09:55:27.094026 5043 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lskhz"] Nov 25 09:55:27 crc kubenswrapper[5043]: E1125 09:55:27.095317 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22cfa26a-0378-40f1-86a5-b31fcf74f219" containerName="extract-utilities" Nov 25 09:55:27 crc kubenswrapper[5043]: I1125 09:55:27.095340 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="22cfa26a-0378-40f1-86a5-b31fcf74f219" containerName="extract-utilities" Nov 25 09:55:27 crc kubenswrapper[5043]: E1125 09:55:27.095373 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22cfa26a-0378-40f1-86a5-b31fcf74f219" containerName="registry-server" Nov 25 09:55:27 crc kubenswrapper[5043]: I1125 09:55:27.095382 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="22cfa26a-0378-40f1-86a5-b31fcf74f219" 
containerName="registry-server" Nov 25 09:55:27 crc kubenswrapper[5043]: E1125 09:55:27.095398 5043 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22cfa26a-0378-40f1-86a5-b31fcf74f219" containerName="extract-content" Nov 25 09:55:27 crc kubenswrapper[5043]: I1125 09:55:27.095406 5043 state_mem.go:107] "Deleted CPUSet assignment" podUID="22cfa26a-0378-40f1-86a5-b31fcf74f219" containerName="extract-content" Nov 25 09:55:27 crc kubenswrapper[5043]: I1125 09:55:27.095742 5043 memory_manager.go:354] "RemoveStaleState removing state" podUID="22cfa26a-0378-40f1-86a5-b31fcf74f219" containerName="registry-server" Nov 25 09:55:27 crc kubenswrapper[5043]: I1125 09:55:27.097953 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lskhz" Nov 25 09:55:27 crc kubenswrapper[5043]: I1125 09:55:27.130039 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lskhz"] Nov 25 09:55:27 crc kubenswrapper[5043]: I1125 09:55:27.249320 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcd3120c-43a8-4a31-b527-385b48802db4-catalog-content\") pod \"certified-operators-lskhz\" (UID: \"fcd3120c-43a8-4a31-b527-385b48802db4\") " pod="openshift-marketplace/certified-operators-lskhz" Nov 25 09:55:27 crc kubenswrapper[5043]: I1125 09:55:27.249640 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcd3120c-43a8-4a31-b527-385b48802db4-utilities\") pod \"certified-operators-lskhz\" (UID: \"fcd3120c-43a8-4a31-b527-385b48802db4\") " pod="openshift-marketplace/certified-operators-lskhz" Nov 25 09:55:27 crc kubenswrapper[5043]: I1125 09:55:27.249838 5043 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7fflb\" (UniqueName: \"kubernetes.io/projected/fcd3120c-43a8-4a31-b527-385b48802db4-kube-api-access-7fflb\") pod \"certified-operators-lskhz\" (UID: \"fcd3120c-43a8-4a31-b527-385b48802db4\") " pod="openshift-marketplace/certified-operators-lskhz" Nov 25 09:55:27 crc kubenswrapper[5043]: I1125 09:55:27.352143 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcd3120c-43a8-4a31-b527-385b48802db4-utilities\") pod \"certified-operators-lskhz\" (UID: \"fcd3120c-43a8-4a31-b527-385b48802db4\") " pod="openshift-marketplace/certified-operators-lskhz" Nov 25 09:55:27 crc kubenswrapper[5043]: I1125 09:55:27.352497 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fflb\" (UniqueName: \"kubernetes.io/projected/fcd3120c-43a8-4a31-b527-385b48802db4-kube-api-access-7fflb\") pod \"certified-operators-lskhz\" (UID: \"fcd3120c-43a8-4a31-b527-385b48802db4\") " pod="openshift-marketplace/certified-operators-lskhz" Nov 25 09:55:27 crc kubenswrapper[5043]: I1125 09:55:27.352660 5043 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcd3120c-43a8-4a31-b527-385b48802db4-catalog-content\") pod \"certified-operators-lskhz\" (UID: \"fcd3120c-43a8-4a31-b527-385b48802db4\") " pod="openshift-marketplace/certified-operators-lskhz" Nov 25 09:55:27 crc kubenswrapper[5043]: I1125 09:55:27.353210 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcd3120c-43a8-4a31-b527-385b48802db4-catalog-content\") pod \"certified-operators-lskhz\" (UID: \"fcd3120c-43a8-4a31-b527-385b48802db4\") " pod="openshift-marketplace/certified-operators-lskhz" Nov 25 09:55:27 crc kubenswrapper[5043]: I1125 09:55:27.353495 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/fcd3120c-43a8-4a31-b527-385b48802db4-utilities\") pod \"certified-operators-lskhz\" (UID: \"fcd3120c-43a8-4a31-b527-385b48802db4\") " pod="openshift-marketplace/certified-operators-lskhz" Nov 25 09:55:27 crc kubenswrapper[5043]: I1125 09:55:27.376138 5043 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fflb\" (UniqueName: \"kubernetes.io/projected/fcd3120c-43a8-4a31-b527-385b48802db4-kube-api-access-7fflb\") pod \"certified-operators-lskhz\" (UID: \"fcd3120c-43a8-4a31-b527-385b48802db4\") " pod="openshift-marketplace/certified-operators-lskhz" Nov 25 09:55:27 crc kubenswrapper[5043]: I1125 09:55:27.432454 5043 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lskhz" Nov 25 09:55:27 crc kubenswrapper[5043]: I1125 09:55:27.942314 5043 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lskhz"] Nov 25 09:55:28 crc kubenswrapper[5043]: I1125 09:55:28.337658 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lskhz" event={"ID":"fcd3120c-43a8-4a31-b527-385b48802db4","Type":"ContainerDied","Data":"d323dd4f459a924fd5d6f1a9832cdb27422011aa90a77eaedd1530d0b84470b0"} Nov 25 09:55:28 crc kubenswrapper[5043]: I1125 09:55:28.337488 5043 generic.go:334] "Generic (PLEG): container finished" podID="fcd3120c-43a8-4a31-b527-385b48802db4" containerID="d323dd4f459a924fd5d6f1a9832cdb27422011aa90a77eaedd1530d0b84470b0" exitCode=0 Nov 25 09:55:28 crc kubenswrapper[5043]: I1125 09:55:28.339936 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lskhz" event={"ID":"fcd3120c-43a8-4a31-b527-385b48802db4","Type":"ContainerStarted","Data":"113c6ba50157a5170b456fe05cc437afbf6e6b69760d2c43794574ac17b9280f"} Nov 25 09:55:29 crc kubenswrapper[5043]: I1125 09:55:29.361900 5043 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-lskhz" event={"ID":"fcd3120c-43a8-4a31-b527-385b48802db4","Type":"ContainerStarted","Data":"7659a8de33f9ea715feb2d7eba97e2d211e0032a2482f964928b8ff12fc5e40f"} Nov 25 09:55:30 crc kubenswrapper[5043]: I1125 09:55:30.373262 5043 generic.go:334] "Generic (PLEG): container finished" podID="fcd3120c-43a8-4a31-b527-385b48802db4" containerID="7659a8de33f9ea715feb2d7eba97e2d211e0032a2482f964928b8ff12fc5e40f" exitCode=0 Nov 25 09:55:30 crc kubenswrapper[5043]: I1125 09:55:30.373331 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lskhz" event={"ID":"fcd3120c-43a8-4a31-b527-385b48802db4","Type":"ContainerDied","Data":"7659a8de33f9ea715feb2d7eba97e2d211e0032a2482f964928b8ff12fc5e40f"} Nov 25 09:55:31 crc kubenswrapper[5043]: I1125 09:55:31.386877 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lskhz" event={"ID":"fcd3120c-43a8-4a31-b527-385b48802db4","Type":"ContainerStarted","Data":"f52be3c3d6fc554999b4671192163d916e8e50f071fbd3535e5dd0cd93160976"} Nov 25 09:55:31 crc kubenswrapper[5043]: I1125 09:55:31.412913 5043 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lskhz" podStartSLOduration=1.9514317719999998 podStartE2EDuration="4.412889567s" podCreationTimestamp="2025-11-25 09:55:27 +0000 UTC" firstStartedPulling="2025-11-25 09:55:28.341072031 +0000 UTC m=+9592.509267752" lastFinishedPulling="2025-11-25 09:55:30.802529816 +0000 UTC m=+9594.970725547" observedRunningTime="2025-11-25 09:55:31.4074372 +0000 UTC m=+9595.575632931" watchObservedRunningTime="2025-11-25 09:55:31.412889567 +0000 UTC m=+9595.581085288" Nov 25 09:55:37 crc kubenswrapper[5043]: I1125 09:55:37.432626 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lskhz" Nov 25 09:55:37 crc kubenswrapper[5043]: I1125 
09:55:37.434098 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lskhz" Nov 25 09:55:37 crc kubenswrapper[5043]: I1125 09:55:37.476699 5043 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lskhz" Nov 25 09:55:38 crc kubenswrapper[5043]: I1125 09:55:38.510139 5043 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lskhz" Nov 25 09:55:38 crc kubenswrapper[5043]: I1125 09:55:38.558874 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lskhz"] Nov 25 09:55:40 crc kubenswrapper[5043]: I1125 09:55:40.479441 5043 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lskhz" podUID="fcd3120c-43a8-4a31-b527-385b48802db4" containerName="registry-server" containerID="cri-o://f52be3c3d6fc554999b4671192163d916e8e50f071fbd3535e5dd0cd93160976" gracePeriod=2 Nov 25 09:55:42 crc kubenswrapper[5043]: I1125 09:55:42.504942 5043 generic.go:334] "Generic (PLEG): container finished" podID="fcd3120c-43a8-4a31-b527-385b48802db4" containerID="f52be3c3d6fc554999b4671192163d916e8e50f071fbd3535e5dd0cd93160976" exitCode=0 Nov 25 09:55:42 crc kubenswrapper[5043]: I1125 09:55:42.505043 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lskhz" event={"ID":"fcd3120c-43a8-4a31-b527-385b48802db4","Type":"ContainerDied","Data":"f52be3c3d6fc554999b4671192163d916e8e50f071fbd3535e5dd0cd93160976"} Nov 25 09:55:42 crc kubenswrapper[5043]: I1125 09:55:42.505336 5043 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lskhz" event={"ID":"fcd3120c-43a8-4a31-b527-385b48802db4","Type":"ContainerDied","Data":"113c6ba50157a5170b456fe05cc437afbf6e6b69760d2c43794574ac17b9280f"} Nov 25 09:55:42 crc 
kubenswrapper[5043]: I1125 09:55:42.505355 5043 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="113c6ba50157a5170b456fe05cc437afbf6e6b69760d2c43794574ac17b9280f" Nov 25 09:55:42 crc kubenswrapper[5043]: I1125 09:55:42.534278 5043 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lskhz" Nov 25 09:55:42 crc kubenswrapper[5043]: I1125 09:55:42.568986 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fflb\" (UniqueName: \"kubernetes.io/projected/fcd3120c-43a8-4a31-b527-385b48802db4-kube-api-access-7fflb\") pod \"fcd3120c-43a8-4a31-b527-385b48802db4\" (UID: \"fcd3120c-43a8-4a31-b527-385b48802db4\") " Nov 25 09:55:42 crc kubenswrapper[5043]: I1125 09:55:42.569197 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcd3120c-43a8-4a31-b527-385b48802db4-catalog-content\") pod \"fcd3120c-43a8-4a31-b527-385b48802db4\" (UID: \"fcd3120c-43a8-4a31-b527-385b48802db4\") " Nov 25 09:55:42 crc kubenswrapper[5043]: I1125 09:55:42.569230 5043 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcd3120c-43a8-4a31-b527-385b48802db4-utilities\") pod \"fcd3120c-43a8-4a31-b527-385b48802db4\" (UID: \"fcd3120c-43a8-4a31-b527-385b48802db4\") " Nov 25 09:55:42 crc kubenswrapper[5043]: I1125 09:55:42.570687 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcd3120c-43a8-4a31-b527-385b48802db4-utilities" (OuterVolumeSpecName: "utilities") pod "fcd3120c-43a8-4a31-b527-385b48802db4" (UID: "fcd3120c-43a8-4a31-b527-385b48802db4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:55:42 crc kubenswrapper[5043]: I1125 09:55:42.573255 5043 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcd3120c-43a8-4a31-b527-385b48802db4-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 09:55:42 crc kubenswrapper[5043]: I1125 09:55:42.580220 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcd3120c-43a8-4a31-b527-385b48802db4-kube-api-access-7fflb" (OuterVolumeSpecName: "kube-api-access-7fflb") pod "fcd3120c-43a8-4a31-b527-385b48802db4" (UID: "fcd3120c-43a8-4a31-b527-385b48802db4"). InnerVolumeSpecName "kube-api-access-7fflb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 09:55:42 crc kubenswrapper[5043]: I1125 09:55:42.631222 5043 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcd3120c-43a8-4a31-b527-385b48802db4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fcd3120c-43a8-4a31-b527-385b48802db4" (UID: "fcd3120c-43a8-4a31-b527-385b48802db4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 09:55:42 crc kubenswrapper[5043]: I1125 09:55:42.674428 5043 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fflb\" (UniqueName: \"kubernetes.io/projected/fcd3120c-43a8-4a31-b527-385b48802db4-kube-api-access-7fflb\") on node \"crc\" DevicePath \"\"" Nov 25 09:55:42 crc kubenswrapper[5043]: I1125 09:55:42.674461 5043 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcd3120c-43a8-4a31-b527-385b48802db4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 09:55:43 crc kubenswrapper[5043]: I1125 09:55:43.522125 5043 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lskhz" Nov 25 09:55:43 crc kubenswrapper[5043]: I1125 09:55:43.562105 5043 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lskhz"] Nov 25 09:55:43 crc kubenswrapper[5043]: I1125 09:55:43.570565 5043 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lskhz"] Nov 25 09:55:44 crc kubenswrapper[5043]: I1125 09:55:44.983083 5043 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcd3120c-43a8-4a31-b527-385b48802db4" path="/var/lib/kubelet/pods/fcd3120c-43a8-4a31-b527-385b48802db4/volumes" Nov 25 09:56:10 crc kubenswrapper[5043]: E1125 09:56:10.962740 5043 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Nov 25 09:56:17 crc kubenswrapper[5043]: I1125 09:56:17.276705 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:56:17 crc kubenswrapper[5043]: I1125 09:56:17.277882 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 09:56:47 crc kubenswrapper[5043]: I1125 09:56:47.276154 5043 patch_prober.go:28] interesting pod/machine-config-daemon-jzwnx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 09:56:47 crc kubenswrapper[5043]: I1125 09:56:47.276659 5043 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jzwnx" podUID="707b7a7f-020e-4719-9db9-7d1f3294b25c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"